paper_id            string (length 19–21)
paper_title         string (length 8–170)
paper_abstract      string (length 8–5.01k)
paper_acceptance    string (18 classes)
meta_review         string (length 29–10k)
label               string (3 classes)
review_ids          sequence
review_writers      sequence
review_contents     sequence
review_ratings      sequence
review_confidences  sequence
review_reply_tos    sequence
iclr_2019_BkgBvsC9FQ
DialogWAE: Multimodal Response Generation with Conditional Wasserstein Auto-Encoder
Variational autoencoders (VAEs) have shown promise in data-driven conversation modeling. However, most VAE conversation models match the approximate posterior distribution over the latent variables to a simple prior such as the standard normal distribution, thereby restricting the generated responses to a relatively simple (e.g., single-modal) scope. In this paper, we propose DialogWAE, a conditional Wasserstein autoencoder (WAE) specially designed for dialogue modeling. Unlike VAEs that impose a simple distribution over the latent variables, DialogWAE models the distribution of data by training a GAN within the latent variable space. Specifically, our model samples from the prior and posterior distributions over the latent variables by transforming context-dependent random noise using neural networks and minimizes the Wasserstein distance between the two distributions. We further develop a Gaussian mixture prior network to enrich the latent space. Experiments on two popular datasets show that DialogWAE outperforms the state-of-the-art approaches in generating more coherent, informative and diverse responses.
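For intuition, the latent-space recipe this abstract describes can be sketched as follows; the module names, dimensions, and the omitted Lipschitz constraint are illustrative assumptions rather than the authors' implementation:

```python
# Hedged sketch of the conditional-WAE idea from the abstract: prior and
# posterior latent samples are produced by transforming context-dependent
# Gaussian noise, and a critic estimates the Wasserstein distance between
# the two latent distributions. All names and sizes are assumptions.
import torch
import torch.nn as nn

class NoiseToLatent(nn.Module):
    """Transforms context plus Gaussian noise into a latent sample z."""
    def __init__(self, ctx_dim, noise_dim, z_dim, hidden=200):
        super().__init__()
        self.noise_dim = noise_dim
        self.net = nn.Sequential(
            nn.Linear(ctx_dim + noise_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, z_dim))

    def forward(self, ctx):
        eps = torch.randn(ctx.size(0), self.noise_dim, device=ctx.device)
        return self.net(torch.cat([ctx, eps], dim=-1))

# The critic's mean score gap on posterior vs. prior samples approximates
# the Wasserstein-1 distance (a Lipschitz constraint such as a gradient
# penalty is required in practice and omitted here for brevity).
critic = nn.Sequential(nn.Linear(64, 200), nn.ReLU(), nn.Linear(200, 1))
prior_net = NoiseToLatent(ctx_dim=300, noise_dim=64, z_dim=64)     # Q(z|c)
post_net = NoiseToLatent(ctx_dim=600, noise_dim=64, z_dim=64)      # Q(z|c,x)

ctx = torch.randn(16, 300)            # encoded dialogue context
ctx_resp = torch.randn(16, 600)       # context concatenated with response
w_gap = critic(post_net(ctx_resp)).mean() - critic(prior_net(ctx)).mean()
```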
accepted-poster-papers
This paper tackles end-to-end dialogue generation and proposes a novel, improved GAN for dialogue modeling, which adopts a conditional Wasserstein Auto-Encoder to learn high-level representations of responses. In experiments, the proposed approach is compared to several state-of-the-art baselines on two dialogue datasets, and improvements are shown both in terms of objective measures and human evaluation, providing strong support for the proposed approach. Two reviewers note similarities with a recent ICML paper on ARAE and request a reference to it as well as examples demonstrating the differences; both are included in the latest version of the paper.
train
[ "rJextSEHx4", "ryenU7v_am", "BJlpvEDuTQ", "BkxsExnM6X", "HyghGy8-TQ", "r1xEp0mqnm", "ryxFd-TK2Q" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We appreciate all reviewers for constructive feedback and comments for the improvement of the paper.\nWe have revised our paper according to the comments and replied to all reviewers.\nBefore the final decision, we will explain any of your questions.", "We thank the reviewer for taking the time to read our paper and for the useful comments to help improve our presentation! Below we address the specific points raised by the reviewer:\n\n>>>\n1) Missing citation, the optimization problem of this paper (Equation 5) is similar to the Adversarially Regularized Autoencoders (ICML 2018). \n<<<\n\nWe have cited and discussed the ARAE paper in the revision. \n\n>>>\n2) The authors use Gumbel-Softmax re-parametrization to sample an instance for the Gaussian Mixture prior network. Are you using the Straight-Through estimator or the original one?\n<<<\n\nWe use the Strait-Through Gumbel softmax. \n\n>>>\n3) I'd be interested to see some analysis on what each Gaussian model has captured. Will different Gaussian model generate different types of responses? Are the differences interpretable? \n<<<\n\nIn the revised manuscript we have added an example showing responses for each Gaussian component. We select a dialogue context used in a baseline paper (Shen et al. 2018) for analysis and generate 5 responses for each component using DialogWAE-GMP. Results are shown in the following table (Reviewers can also find it from Table 4 in our revised manuscript):\n-----------------------------------------------------------------------------------------------------------------------------------------------------\n Context | I would like to invite you to dinner tonight, do you have time?\n-------------|---------------------------------------------------------------------------------------------------------------------------------------\n | Component 1 | Component 2 | Component 3\n |-----------------------------------------|-----------------------------------------------|--------------------------------------------- \n | Eg.1: Yes, I'd {like} to go | Eg.1: I'm {not sure} | Eg.1: {Of course} I'm {not} sure. \n | with you. | Eg.2: I'm {not sure}. | What's the problem? \n | Eg.2: My {pleasure}. | What's the problem? | Eg.2: {No}, I {don't} want to go\nReplies | Eg.3: {OK}, thanks. | Eg.3: I'm sorry to hear that | Eg.3: I want to {go to bed}, but\n | Eg.4: I don't know what to do| What's the problem? | I'm not sure. \n | Eg.5: {Sure}. I'd {like} to | Eg.4: It's very kind of you, too | Eg.4: {Of course not}. you \n | go out | Eg.5: I have {no idea}. You have to| Eg.5: Do you want to go?\n-----------------------------------------------------------------------------------------------------------------------------------------------------\n\nAs shown in the table, different Gaussian models generate different types of responses: component 1 expresses a strong will, while component 2 expresses some uncertainty, and component 3 generates strong negative responses. The overlap between components is marginal (around 1/5). The results indicate that the Gaussian mixture prior network can successfully capture the multimodal distribution of the responses. ", "We thank the reviewer for taking the time to read our paper and for the useful comments to help improve our presentation! Below we address the specific points raised by the reviewer:\n\n>>>\nInterestingly, AARE paper is not cited in this work, which I think is an issue…\n<<<\n\nWe have cited and discussed the ARAE paper in the revision. 
\n\n>>>\nI would like the authors to comment on the interpretability of the components. Perhaps show a sample from each component (in the end the model decides which modal to choose before generation. Are these GMMs overlapping and how much? Can you measure the difference between the means? \n<<<\n\nIn the revised manuscript we have added an example showing responses for each Gaussian component. We select a dialogue context used in a baseline paper (Shen et al. 2018) for analysis and generate 5 responses for each component using DialogWAE-GMP. Results are shown in the following table (Reviewers can also find it from Table 4 in our revised manuscript):\n-----------------------------------------------------------------------------------------------------------------------------------------------------\n Context | I would like to invite you to dinner tonight, do you have time?\n-------------|---------------------------------------------------------------------------------------------------------------------------------------\n | Component 1 | Component 2 | Component 3\n |-----------------------------------------|-----------------------------------------------|--------------------------------------------- \n | Eg.1: Yes, I'd {like} to go | Eg.1: I'm {not sure} | Eg.1: {Of course} I'm {not} sure. \n | with you. | Eg.2: I'm {not sure}. | What's the problem? \n | Eg.2: My {pleasure}. | What's the problem? | Eg.2: {No}, I {don't} want to go\nReplies | Eg.3: {OK}, thanks. | Eg.3: I'm sorry to hear that | Eg.3: I want to {go to bed}, but\n | Eg.4: I don't know what to do| What's the problem? | I'm not sure. \n | Eg.5: {Sure}. I'd {like} to | Eg.4: It's very kind of you, too | Eg.4: {Of course not}. you \n | go out | Eg.5: I have {no idea}. You have to | Eg.5: Do you want to go?\n-----------------------------------------------------------------------------------------------------------------------------------------------------\nAs shown in the table, different Gaussian models generate different types of responses: component 1 expresses a strong will, while component 2 expresses some uncertainty, and component 3 generates strong negative responses. The overlap between components is marginal (around 1/5). The results indicate that the Gaussian mixture prior network can successfully capture the multimodal distribution of the responses. \n", "We thank the reviewer for taking the time to read our paper and for the useful comments to help improve our presentation! Below we address the specific points raised by the reviewer:\n\n>>>\n 3. The usage of the Wasserstein distance in the proposed model does not make sense to me…\n<<<\n\nWe apologize for not making the algorithm clear. In our paper, we consider the Wasserstein Auto-Encoder (WAE) as a special AAE, as what WGAN is to GAN. Therefore, training AAE (i.e., WAE in our algorithm) means minimizing the Wasserstein distance. They are not trained separately at the same time. We will clarify this more clearly in the upcoming revision. \n\n>>>\n 4. To me, the significance of this paper mainly goes to combining several existing frameworks and tricks into the specific area of dialogue generation…\n<<<\n\nFirst, we want to clarify that our core contribution is on the improved GAN architecture for a crucial problem in dialogue modeling rather than a trial-and-error methodology by combining different tricks. 
\n \nWe agree that our paper's contribution is not geared toward machine learning itself but toward how it can be better used for dialogue generation, which is considered one of the core challenges in machine learning, natural language processing, and, more broadly, artificial intelligence. As is clear from the call-for-papers, which states ``applications'' as one of the major subject areas, we believe our work fits the conference's scope well.\n\n>>>\n 5. But a minor concern is that it seems hard to identify which part gives DialogWAE superior performance over the others…\n<<<\n\nExcept for the GAN module, all the models are run with the same experimental setup, including the implementation of the RNNs. Therefore, the superior performance can be attributed to the improved GAN architecture.\n", "This paper uses a Wasserstein GAN for conditional modeling in dialogue response generation. The main goal is to use two network architectures to approximate the posterior distribution with the prior network. Instead of a KL divergence, as in VAE training, they use adversarial training, and instead of using the softmax output from the discriminator, they use the Wasserstein distance. They also introduce a multi-modal distribution, a GMM, sampling from the posterior during training and from the prior at test time. The multi-modal sampling is based on Gumbel-softmax over K possible Gaussian distributions. They experiment on the Daily Dialog and Switchboard datasets and show promising improvements on quantitative measures like BLEU and BOW embedding similarities, as well as on qualitative measures including human evaluations, comparing against a substantial number of baselines.\n\nThe paper presents a marriage of a few ideas. First off, it uses the conditional structure presented in the ACL 2017 paper \"Learning Discourse-level Diversity for Neural Dialog Models using Conditional Variational Autoencoders\". It's great that they used that paper as their baseline. The extension is to use a GAN objective function (the discriminator) as a critic and use Wasserstein GAN to resolve the vanishing gradient issue and produce smooth gradients everywhere. In the ACL 2017 paper they use a KL divergence to make the outputs of the prior and recognition networks as close to each other as possible, so that at test time the prior network can generate samples similar to the true data feature distribution. In this paper, instead of a KL, they use a discriminator as in the 'Adversarial AutoEncoders' paper, and extend AAE by using the Wasserstein distance (a 1-Lipschitz function instead of a softmax for the discriminator). The W-GAN has been shown to produce good results in text generation at this year's ICML 2018 with the paper 'Adversarially Regularized Autoencoders' (ARAE). The idea was to resolve the VAE posterior collapse issue by using a discriminator as a regularizer instead of a KL divergence, with a stronger sampler from the output of the generator to map from a noise sampler into the latent space. Interestingly, the ARAE paper is not cited in this work, which I think is an issue. I understand that paper was just for generation purposes and not specific to dialogue modeling, but it makes claims in the paper misleading, such as: \"Unlike VAE conversation models that impose a simple distribution over latent variables, DialogWAE models the data distribution by training a GAN within the latent variable space\".\n\nThe part that I liked is the fact that they used multimodal Gaussian distributions. 
I agree with the authors that using only a Gaussian for the approximating distribution limits the sampling space and can weaken the model's capability of variation. Although it is not proven for text, in images the Gaussian posteriors during training converge together into a single Gaussian, causing blurry images; in text this might correspond to dull responses in dialogue. I would like the authors to comment on the interpretability of the components. Perhaps show a sample from each component (in the end the model decides which mode to choose before generation). Are these GMMs overlapping, and how much? Can you measure the difference between the means? \n\nI find the experiments extensive, except that the datasets are weak. \nI like the fact that they included human evaluations. \n", "This paper proposes a novel dialogue modeling framework, DialogWAE, which adopts a conditional Wasserstein Auto-Encoder to learn continuous latent variables z that represent the high-level representation of responses. To enrich the diversity of the latent representations and capture multiple modes in the latent variables, the authors propose an advanced version (DialogWAE-GMP) of DialogWAE that models the prior distribution with a mixture of Gaussian distributions instead of one. \n\nStrength: The idea is clear and the paper is very well written. The authors evaluate the proposed models on a variety of reasonable metrics and compare against seven recently-proposed baselines. Results show that both DialogWAE and DialogWAE-GMP generate responses that are both more similar to the references (BLEU and BOW embeddings) and more diverse (inter-dist). Human evaluations also show that the proposed models generate better responses than two representative baselines.\n\nMinor comments/questions: \n\n1) Missing citation: the optimization problem of this paper (Equation 5) is similar to the Adversarially Regularized Autoencoders (ICML 2018). \n\n2) The authors use Gumbel-Softmax re-parametrization to sample an instance for the Gaussian Mixture prior network. Are you using the Straight-Through estimator or the original one? If the original Gumbel-Softmax estimator is used, it is better to show a comparison between simply using the Softmax and using the Gumbel-Softmax. Since the discrete sampling is not crucial in this case, a weighted mixture of representations may also work.\n\n3) The DialogWAE-GMP with the Gaussian Mixture prior network achieves great evaluation results and is better than the non-mixture version. I'd be interested to see some analysis on what each Gaussian model has captured. Will different Gaussian models generate different types of responses? Are the differences interpretable? ", "This paper presents a dialogue response generation model based on the framework of adversarial autoencoders. Specifically, the proposed model uses an autoencoder to encode and decode a response in a dialogue, conditioning on the context of the dialogue. The RNN-encoded context is used as the prior of the latent variable in the autoencoder, and the whole dialogue (context + response) is used to infer the posterior of the latent variable. The inference is done via adversarial training to match the prior and the posterior of the latent variable. Besides constructing the prior with a single Gaussian, a variant of the proposed model is also proposed where the prior is constructed with a Gaussian mixture model.\n\nMy comments are as follows:\n\n1. The paper is well-written and easy to follow.\n\n2. 
The experiments seem quite strong and the compared models are properly selected. I'm not an expert in the specific area of dialogue generation, but the results seem convincing to me. \n\n3. The usage of the Wasserstein distance in the proposed model does not make sense to me. Both the adversarial training in AAE and minimising the Wasserstein distance are able to match the prior and posterior of the latent variable. If the former is used in the proposed model, then how is the Wasserstein distance used at the same time? I also checked Algorithm 1 and did not find how the Wasserstein distance comes in. This is the first question that the authors need to clarify.\n\n4. To me, the significance of this paper mainly goes to combining several existing frameworks and tricks into the specific area of dialogue generation. Although the empirical results show the proposed model outperforms several existing models, my concern is still on the originality of the paper. Specifically, one of the main contributions goes to using the Gaussian mixture to construct the prior, but this is not a whole new idea in VAE or GAN, nor is using the Gumbel trick. \n\n5. It is good to see that the authors showed some comparisons between DialogWAE and DialogWAE-GMP, letting us see that GMP does help the performance. But a minor concern is that it seems hard to identify which part gives DialogWAE superior performance over the others. Are all the models run with the same experimental settings, including the implementation of the RNNs?" ]
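The Straight-Through Gumbel-Softmax sampling that this exchange settles on can be sketched as below; the function is a generic textbook version, not the authors' code:

```python
# Hedged sketch of Straight-Through Gumbel-Softmax: the forward pass uses a
# hard one-hot component choice while gradients flow through the soft
# relaxation. Shapes and the temperature value are illustrative.
import torch
import torch.nn.functional as F

def st_gumbel_softmax(logits, tau=1.0):
    gumbel = -torch.log(-torch.log(torch.rand_like(logits) + 1e-20) + 1e-20)
    y_soft = F.softmax((logits + gumbel) / tau, dim=-1)
    index = y_soft.argmax(dim=-1, keepdim=True)
    y_hard = torch.zeros_like(y_soft).scatter_(-1, index, 1.0)
    # forward value equals y_hard; backward gradient equals that of y_soft
    return y_hard - y_soft.detach() + y_soft

# e.g., selecting one of K = 3 components of a Gaussian mixture prior:
component_logits = torch.randn(16, 3, requires_grad=True)
one_hot = st_gumbel_softmax(component_logits, tau=0.5)
```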
[ -1, -1, -1, -1, 7, 7, 5 ]
[ -1, -1, -1, -1, 4, 3, 3 ]
[ "iclr_2019_BkgBvsC9FQ", "r1xEp0mqnm", "HyghGy8-TQ", "ryxFd-TK2Q", "iclr_2019_BkgBvsC9FQ", "iclr_2019_BkgBvsC9FQ", "iclr_2019_BkgBvsC9FQ" ]
iclr_2019_BkgPajAcY7
No Training Required: Exploring Random Encoders for Sentence Classification
We explore various methods for computing sentence representations from pre-trained word embeddings without any training, i.e., using nothing but random parameterizations. Our aim is to put sentence embeddings on more solid footing by 1) looking at how much modern sentence embeddings gain over random methods---as it turns out, surprisingly little; and by 2) providing the field with more appropriate baselines going forward---which are, as it turns out, quite strong. We also make important observations about proper experimental protocol for sentence classification evaluation, together with recommendations for future research.
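A minimal sketch of the simplest such training-free encoder (a pooled random projection of pre-trained word vectors); the initialization scale, ReLU, and max pooling are assumptions for illustration:

```python
# Hedged sketch of a bag-of-random-embedding-projections sentence encoder:
# fixed random projection of word vectors, nonlinearity, then max pooling.
import numpy as np

rng = np.random.default_rng(0)
d_word, d_out = 300, 4096
W = rng.uniform(-1 / np.sqrt(d_word), 1 / np.sqrt(d_word), (d_out, d_word))

def random_projection_encoder(word_vecs):
    """word_vecs: (seq_len, d_word) pre-trained embeddings of one sentence."""
    projected = np.maximum(W @ word_vecs.T, 0.0)   # ReLU of random projection
    return projected.max(axis=1)                   # max-pool over time

sentence = rng.standard_normal((7, d_word))        # stand-in for GloVe rows
vec = random_projection_encoder(sentence)          # (4096,) sentence vector
```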
accepted-poster-papers
This paper provides a new family of untrained/randomly initialized sentence encoder baselines for a standard suite of NLP evaluation tasks, and shows that it does surprisingly well—very close to widely-used methods for some of the tasks. All three reviewers acknowledge that this is a substantial contribution, and none see any major errors or fatal flaws. One reviewer had initially argued the experiments and discussion are not as thorough as would be typical for a strong paper. In particular, the results are focused on a single set of word embeddings and a narrow class of architectures. I'm sympathetic to this concern, but since there don't seem to be any outstanding concerns about the correctness of the paper, and since the other reviewers see the contribution as quite important, I recommend acceptance. [Update: This reviewer has since revised their review to make it more positive.] (As a nit, I'd ask the authors to ensure that the final version of the paper fits within the margins.)
train
[ "r1glPhYWTX", "H1e7fT0K0X", "HyxhXnAY0X", "rygDbIX90m", "Syx4KoAYCQ", "rJxtPTCtCX", "r1eGVpCKRm", "HygtcSqw6m", "SyeP-oxUa7", "SyxAjuu9h7" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer" ]
[ "This paper proposes that randomly encoding a sentence using a set of pretrained word embeddings is almost as good as using a trained encoder with the same embeddings. This is shown through a variety of tasks where certain tasks perform well with a random encoder and certain ones don't.\n\nThe paper is well written and easy to understand and the experiments show interesting findings. There is a good analysis on how the size of the random encoder affects performance which is well motivated by Cover's theorem.\n\nHowever, the random encoders that are tested in the paper are relatively limited to random projections of the embeddings, a randomly initialized LSTM and an echo state network. Other comparisons would make the results significantly more interesting and would move away from the big assumption stated in the first sentence, i.e. that sentence embeddings are: \"learned non-linear recurrent combinations\". Some major models that are missed by this include paragraph vectors (which do not require any initial training if initialized with pretrained word embeddings), CNNs and Transformers. Given this, the takeaways from this paper seem quite limited to recurrent representations and it's unclear how it would generalize to other representations.\n\nAn additional problem is that the paper states that ST-LN used different and older word embeddings which may make the comparison flawed when compared with the random encoders. In this case, the only fairly trained sentence encoder that is compared with is InferSent. The RandLSTM also has an issue in that the biases are intialized around zero whereas it's well known that using an initially higher forget gate bias significantly improves the performance of the LSTM.\n\nFinally, the analysis of the results seems weak. The tasks are very different from each other and no reason or potential explanation is given why certain tasks are better than others with random encoders, except for SOMO and CoordInv. E.g. Could some tasks be solved by looking at keywords or bigrams? Do some tasks intrinsically require longer term dependencies? Do some tasks have more data?\n\nOther comments:\n- The results and especially random encoder results should be shown with confidence intervals.\n- Section 3.1.3 the text refers to W^r but that does not appear in any equations.\n\n=== After rebuttal ===\nThanks for adding the additional experiments (particularly with fully random embeddings) and result analyses to the paper. I feel that this makes the paper stronger and have raised my score accordingly.", "Thank you for your review and comments! We have incorporated your feedback into our latest draft.\n\nWe focused on recurrent architectures in this paper because that is the type of network used by the top performing models within in this evaluation framework. Therefore, by using a recurrent model, we capture the prior of these state-of-the-art models and this gives us a better understanding of how much these published approaches benefit from learning. Models like InferSent (Conneau et al. 2017), GenSen (Subramanian et al. 2018), SkipThought (Kiros et al. 2015), Dissent (Nie et al. 2017), Byte mLSTM (Radford et al. 2017), all use recurrent models. While there are some architectures in the literature that use CNNs (like Gan et al. (2016)) they are not among the current state-of-the-art. The point of this work is to provide baselines - which means CNNs and Transformers can be compared to our numbers, and should hopefully be able to beat them. 
\n\nAnother attractive reason for using recurrent networks is that they have very few hyperparameters to tune; in fact, the only hyperparameter we varied was the hidden size in our experiments (and we detailed what size this was in our results). Architectures like CNNs or transformers require more design decisions and do not have a \"default architecture\", which leads to a lot more experimentation and tuning.\n\nRegarding paragraph vectors: they actually require training, and the results on these downstream tasks are not very competitive (see Hill et al. 2016 for the numbers). We'd be happy to include it in our results, but we don't think it would add to the message of the paper.\n\nWe do agree that sentence embeddings are more general than \"learned non-linear recurrent combinations\" and have changed this in the current iteration of the paper. Thanks for pointing this out!\n\nWe also agree that the comparison of ST-LN isn't quite as even as we would like, which is why we did make that note in our original submission that ST-LN could potentially be higher if they used GloVe embeddings. The problem with making this comparison is simply that reproducing ST-LN takes about a month of computation time. However, others have experimented with ST-LN with GloVe embeddings. Results for this model are in https://arxiv.org/pdf/1707.06320.pdf for example, with an older evaluation setup more comparable to numbers in http://aclweb.org/anthology/Q16-1002. GenSen also experiments with a SkipThought model, and while not initialized with GloVe, they project from GloVe into their embedding space. They actually found this to work better than just using GloVe in their experiments (confirmed through correspondence with the authors). We do compare to the full GenSen model in the appendix, and their version that has just ST is included in their paper. It was one of their baselines, which was handily beaten by their full model. So while a direct comparison is tricky, we can safely say that adding GloVe to ST-LN would not elevate the model to a level that would change the message of this paper.\n", "Thank you for all the feedback!\n\nWe have softened the claims about the usefulness of some of the SentEval tasks for evaluation in our paper. We do think these tasks could be useful as evaluations in some situations, and strong performance should definitely be possible for very discriminative sentence embeddings. Our motivation for that take-away was also based on other observations of these tasks (like that they are too sentiment-focused or that they are nearly solved) that have been brought to light in other works.\n\nWe found initialization to matter somewhat in our experiments, which is why we were very explicit about how we initialized in our submission, and we have since added some more analysis of this in the paper. In Appendix D, we compare six different initialization schemes: Heuristic (the one used in the paper currently), Uniform, Normal, Orthogonal, He (He et al. 2015), and Xavier (Glorot & Bengio, 2010). We found that BOREP is more robust to initialization than RandLSTM and prefers Orthogonal initialization. RandLSTM performs poorly with Normal initialization (and also Uniform, but to a much lesser degree), and seems to perform best with He initialization.\n\nYour idea about using random word embeddings is very interesting! In fact, we added this experiment in the newest version of our paper. We included an analysis of completely random word embeddings along with the random architectures. 
Like in the initialization experiments, we experimented with six different methods to initialize both the word embeddings and the parameters of the architectures. The experiments are in Appendix E. We also experimented with pooling 4096 dimension embeddings (randomly sampled), which performs very well compared to BOREP and RandLSTM. Overall, it really depends on the task how much the pre-trained embeddings help. However, they do seem to help more for tasks measuring semantic similarity (which makes sense since they can make use of unseen embeddings if both unseen embeddings are in the two sentences being compared). For some tasks like MRPC or SICK-E, the difference between using random word embeddings or pretrained ones is small (0.6 and 0.7 points respectively), but for others like SST2 or MR it can be pretty large (10 and 8.8 points respectively). The average gain across tasks is 5.4 points.\n\nWe agree that we should have added some more analysis, and we have done so in the latest version of the draft. It's difficult to say what general knowledge IS and ST models have learned and how applicable it is for the downstream tasks. This is the motivation for probing tasks (Adi et al. 2017, Conneau et al. 2018), which help measure this to a degree and show that IS and ST are able to better capture sequential information. Random networks do about as well as these pretrained encoders on tasks that can be solved just based on word content. Therefore, if the downstream tasks rely mostly on word content (or perhaps that and a type of sequential information that is not learned by IS or ST), we would expect the difference between a random encoder and IS/ST to be small.\n\nThank you for your other critiques. We have addressed all of these in the newest version of the paper.\n", "There is some relation. LSH is a collection of methods for dimensionality reduction and is often used for clustering. In contrast, we are increasing the dimension of the embeddings in order to provide more features for downstream tasks.", "To all reviewers:\n\nThank you so much for your feedback. We have made both minor (but important) and more substantial improvements to the paper due to the thoughtful feedback you have provided. These larger improvements include experiments with different random initializations for BOREP and RandLSTM; experiments for BOREP, BOE (300 dim), BOE (4096 dim), and RandLSTM with different initializations and random word embeddings; adding standard deviations to the experimental results (and bolding the top numbers to make interpreting the tables easier); adding more analysis regarding the probing experiments; and including a detailed analysis of how max pooling over padding affected reported results in various papers. We hope that you find the paper improved and a more interesting read. Thank you again for your comments.\n", "Thank you for your feedback and review! We have added standard deviations for the experiments.\n\nTrying high dimensional word embeddings is a very interesting idea. We did not have the time to implement this before the rebuttal deadline (we would like them to be trained on near the same amount of data as the released GloVe embeddings are), but do plan to try this out as soon as we can. Thank you for this idea! In Appendix E, we did experiment (and compare with BOREP and RandLSTM) with pooling 4096 dimensional random word embeddings. 
They seem to outperform the other models for the same dimensionality, which provides some evidence that large, trained embeddings could achieve strong performance. It would be interesting to see if BOREP, RandLSTM, and ESN improve as well when using large embeddings.", "Regarding initializing the biases, we don't see the weakness in initializing our biases as we do - we use the standard initialization procedure. If we initialize the forget gate bias as you suggest (from Jozefowicz et al. 2015), we might even get better results. However, the reason for initializing the biases in this way is gradient flow during optimization. Since we're not doing any training, it's not that relevant here.\n\nAs per your suggestion, we did update the analysis in the new version, thank you. All of these tasks have the same amount of training/validation/testing data and are balanced, so the amount of training data has no effect on performance. It seems that random models do best for tasks requiring picking up on certain words: we can see which tasks these are by looking at how well BOREP does compared to the recurrent models (so WC, Tense, SubjNum, ObjNum are good candidates for this type of task). In these tasks, random models are all very competitive with the trained encoders. If one looks at the tasks where there is the largest difference between ESN and max(IS/ST-LN), which are SOMO, CoordInv, BShift, TopConst, it seems that these all have in common that they do require some sequential knowledge. We say this because the BOREP baseline lags behind the recurrent models significantly for many of these (especially when considering where the majority-vote baseline is) and it also makes sense that this is the case when one looks at the definitions of these tasks. This also makes intuitive sense, as this type of knowledge is much harder to learn and is not provided by just pure word embeddings, so we'd expect the trained models to have an edge here, which seems to bear out in these experiments. We also added further analysis of various other questions in the appendix. We hope you find the updated and more detailed analysis more to your liking.\n\nWe have added confidence intervals in our latest version, and we changed W^r to W^h. Thanks for pointing these out.\n", "Is there any relationship with Locality Sensitive Hashing?", "This paper is about exploring better baselines for sentence-vector representations through randomly initialized/untrained networks. I applaud the overall message of this paper that we need to evaluate our models more thoroughly and have better baselines. The experimentation is quite thorough and I like that you\n1) explored several different architectures,\n2) varied the dimensionality of representations, and\n3) examined representations with probing tasks in the Analysis section. \n\nMain Critique\n- In your takeaways you say that, “For some of the benchmark datasets, differences between random and trained encoders are so small that it would probably be best not to use those tasks anymore.” I don’t think this follows from your results. Just because current trained encoders do not perform better than random encoders on these tasks doesn’t in itself mean these tasks aren’t good evaluation tasks. These tasks could be faulty for other reasons, but the fact that we currently have no better technique than random encoders doesn’t make these evaluation tasks not worthwhile. Perhaps you could further examine what features (n-gram, etc.) 
it takes to do well on these tasks in order to argue that they shouldn’t be used.\n- In your related work section you say that “We show that a lot of information may be crammed into vectors using randomly parameterized combinations of pre-trained word embeddings: that is, most of the power in modern NLP systems is derived from having high-quality word embeddings, rather than from having better encoders.” Did you run experiments with randomly initialized embeddings? This paper (https://openreview.net/forum?id=ryeNPi0qKX) finds that representations from LSTMs with randomly initialized embeddings can perform quite well on some transfer tasks. I think in order to make such a claim about the power of high-quality word embeddings you should include numbers comparing them to randomly initialized embeddings.\n\nQuestions\n- Did you find that your results were sensitive to the initialization technique used for your random LSTMs / projections?\n- Do you have a sense of why random non-linear features are able to perform well on these tasks? What kind of features are the skip-thought and InferSent representations learning if they do not perform much better? It’s interesting that many of the random encoder methods outperform the trained models on word content. I think you could discuss these Analysis section findings more.\n\nOther Critiques\n- In the introduction, instead of simply describing what is commonly done to obtain and evaluate sentence embeddings, it would be better to include a sentence or two about the motivation for sentence embeddings at all.\n- The first sentence, “Sentence embeddings are learned non-linear recurrent combinations of pre-trained word embeddings”, doesn’t seem to be true, as BOE representations are also sentence embeddings and CNNs/transformers could also work. “Non-linear” and “recurrent” are not inherent requirements for sentence embeddings, but just techniques that researchers commonly use.\n- In the second paragraph of the introduction, instead of saying “Natural language processing does not yet have a clear grasp on the relationship between word and sentence embeddings…” it might be better to say “NLP researchers” or the “NLP community” instead of “NLP”, as a field doesn’t have a clear grasp.\n- In the introduction: “It is unclear how much sentence-encoding architectures improve over the raw word embeddings, and what aspect of such architectures is responsible for any improvement.” It would also be good to mention that it’s unclear how much the training task / procedure affects improvements.\n- You could describe more about applications of reservoir computing in your related work section, as it’s been used in NLP before.\n- I don’t think you actually ever describe the type of data that InferSent is trained on, only that it is “expensive” annotated data. 
It might be useful to add a sentence about natural language inference for clarity.\n- In the conclusion, change “performance improvements are less than 1 and less than 2 points on average over the 10 SentEval tasks, respectively” to “performance improvements are less than 2 percentage points on average over the 10 SentEval tasks, respectively”\n- It would be nice if you bolded/underlined the best performing numbers in your results tables.\n", "This paper tests a number of untrained sentence representation models - based on random embedding projections, randomly-initialized LSTMs, and echo state networks - and compares the outputs of these models against influential trained sentence encoders (SkipThought, InferSent) on transfer and probing tasks. The paper finds that using the trained encoders yields only marginal improvement over the fully untrained models.\n\nI think this is a strong paper, with a valuable contribution. The paper sheds important light on weaknesses of current methods of sentence encoding, as well as weaknesses of the standard evaluations used for sentence representation models - specifically, on currently-available metrics, most of the performance achievements observed in sentence encoders can apparently be accomplished without any encoder training at all, casting doubt on the capacity of these encoders - or existing downstream tasks - to tap into meaningful information about language. The paper establishes stronger and more appropriate baselines for sentence encoders, which I believe will be valuable for assessment of sentence representation models moving forward. \n\nThe paper is clearly written and well-organized, and to my knowledge the contribution is novel. I appreciate the care that has been taken to implement fair and well-controlled comparisons between models. Overall, I am happy with this paper, and I would like to see it accepted. \n\nAdditional comments:\n\n-A useful addition to the reported results would be confidence intervals of some kind, to get a sense of the extent to which the small improvements for the trained encoders are statistically significant.\n\n-I wonder about how the embedding projection method would compare to simply training higher-dimensional word embeddings from the start. Do we expect substantial differences between these two options?" ]
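The untrained-LSTM baseline discussed throughout this thread can be sketched as below (sizes, pooling, and seeding are illustrative assumptions):

```python
# Hedged sketch of a randomly initialized, never-trained LSTM encoder:
# run the random LSTM over pre-trained word embeddings and max-pool the
# hidden states into a sentence vector.
import torch
import torch.nn as nn

torch.manual_seed(0)
lstm = nn.LSTM(input_size=300, hidden_size=2048, bidirectional=True)
for p in lstm.parameters():
    p.requires_grad_(False)            # the encoder is frozen at random init

words = torch.randn(7, 1, 300)         # (seq_len, batch, dim) GloVe stand-in
hidden, _ = lstm(words)                # (7, 1, 4096)
sentence_vec = hidden.max(dim=0).values.squeeze(0)   # max-pool over time
```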
[ 7, -1, -1, -1, -1, -1, -1, -1, 7, 8 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2019_BkgPajAcY7", "r1glPhYWTX", "SyeP-oxUa7", "HygtcSqw6m", "iclr_2019_BkgPajAcY7", "SyxAjuu9h7", "H1e7fT0K0X", "iclr_2019_BkgPajAcY7", "iclr_2019_BkgPajAcY7", "iclr_2019_BkgPajAcY7" ]
iclr_2019_BkgWHnR5tm
Neural Graph Evolution: Towards Efficient Automatic Robot Design
Despite the recent successes in robotic locomotion control, the design of robots still relies heavily on human engineering. Automatic robot design has been a long-studied subject, but recent progress has been slowed by the large combinatorial search space and the difficulty of evaluating candidate designs. To address these two challenges, we formulate automatic robot design as a graph search problem and perform evolutionary search in graph space. We propose Neural Graph Evolution (NGE), which performs selection on current candidates and evolves new ones iteratively. Different from previous approaches, NGE uses graph neural networks to parameterize the control policies, which reduces the evaluation cost of new candidates with the help of skill transfer from previously evaluated designs. In addition, NGE applies Graph Mutation with Uncertainty (GM-UC) by incorporating model uncertainty, which reduces the search space by balancing exploration and exploitation. We show that NGE significantly outperforms previous methods by an order of magnitude. As shown in experiments, NGE is the first algorithm that can automatically discover kinematically preferred robotic graph structures, such as a fish with two symmetrical flat side-fins and a tail, or a cheetah with athletic front and back legs. Instead of using thousands of cores for weeks, NGE efficiently solves the search problem within a day on a single 64-CPU-core Amazon EC2 machine.
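The graph-search step the abstract describes operates on robot bodies represented as trees of parts; a hedged sketch of one mutation operator follows (the representation and probabilities are illustrative assumptions):

```python
# Hedged sketch of a grow-or-prune mutation on a robot body graph.
import copy
import random

def mutate(graph, grow_prob=0.5):
    """graph: dict node -> list of children, e.g. {0: [1, 2], 1: [], 2: []}."""
    g = copy.deepcopy(graph)
    leaves = [n for n in g if not g[n] and n != 0]   # root 0 is never pruned
    if random.random() < grow_prob or not leaves:    # grow a new body part
        parent = random.choice(list(g))
        new_id = max(g) + 1
        g[parent].append(new_id)
        g[new_id] = []
    else:                                            # prune a leaf part
        leaf = random.choice(leaves)
        del g[leaf]
        for n in g:
            g[n] = [c for c in g[n] if c != leaf]
    return g

body = {0: [1, 2], 1: [], 2: []}                     # torso with two limbs
offspring = [mutate(body) for _ in range(8)]         # one evolution step
```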
accepted-poster-papers
Lean in favor. Strengths: The paper tackles the difficult problem of automatic robot design. The approach uses graph neural networks to parameterize the control policies, which allows for weight sharing / transfer to new policies even as the topology changes. Understanding how to efficiently explore through non-differentiable changes to the body is an important problem (AC). The authors will release the code and environments, which will be useful in an area where there are currently no good baselines (AC). Weaknesses: There are concerns (particularly R2, R1) over the lack of a strong baseline, and with the results being demonstrated on a limited number of environments (R1) (fish, 2D walker). In response, the authors clarified the nomenclature and description of a number of the baselines, and added others. AC: there is no submitted video (searches for "video" in the PDF text produce no hits); this is seen by the AC as a real limitation from the perspective of evaluation. The AC agrees with some of the reviewer remarks that some of the originally stated claims are too strong. AC: the simplified fluid model of MuJoCo (http://mujoco.org/book/computation.html#gePassive) is unable to model the fluid state, in particular the induced fluid vortices that are responsible for a good portion of fish locomotion, i.e., "Passive and active flow control by swimming fishes and mammals" and other papers. Acknowledging this kind of limitation will make the paper stronger, not weaker; the ML community can learn from much existing work at the interface of biology and fluid mechanics. There remain points of contention, i.e., the sufficiency of the baselines. However, reviewers R2 and R3 have not responded to the detailed replies from the authors, which include additional baselines (totaling 5 at present) and point out that baselines such as CMA-ES (R2) operate in a continuous space and therefore do not translate in any obvious way to the problem at hand. On balance, with the additional baselines and related clarifications, the AC feels that this paper makes a useful and valid contribution to the field and will help establish a benchmark in an important area. The authors are strongly encouraged to further state caveats and limitations, and to emphasize why some candidate baseline methods are not readily applicable.
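The weight sharing the meta-review highlights comes from applying one message-passing module at every joint, so controller weights survive topology mutations; a hedged sketch (structure and sizes are assumptions, not the authors' code):

```python
# Hedged sketch of a shared graph-network controller: the same message and
# update weights are used at every node, so a mutated topology can reuse them.
import torch
import torch.nn as nn

class SharedGNNPolicy(nn.Module):
    def __init__(self, d=32, steps=3):
        super().__init__()
        self.msg = nn.Linear(d, d)       # shared across all edges
        self.upd = nn.GRUCell(d, d)      # shared across all nodes
        self.act = nn.Linear(d, 1)       # per-joint action head
        self.steps = steps

    def forward(self, h, edges):
        """h: (num_nodes, d) node features; edges: list of (src, dst)."""
        for _ in range(self.steps):
            m = torch.zeros_like(h)
            for s, t in edges:           # aggregate incoming messages
                m[t] = m[t] + self.msg(h[s])
            h = self.upd(m, h)
        return self.act(h)               # one torque per body part

policy = SharedGNNPolicy()
torques = policy(torch.randn(4, 32), [(0, 1), (0, 2), (2, 3)])
```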
train
[ "Bkl1xB-3JE", "B1xqGNaoyV", "SJgfTweYk4", "r1l6OwV-AQ", "r1lAJc6xRm", "rkeEvt6xA7", "BkeBMdpg07", "S1x7SdplAX", "SygmVm7XpQ", "HJg1qgpZTm", "rJxUPOqhhX" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We respect the reviewer's opinion and thanks again for the response. But still, we disagree with the claim that the experiment part is weak.\n\nIn terms of the quality of baselines, we already include 5 comparing baselines including previous state-of-the-art. And NGE has the best performance and efficiency by a large margin (2x of previous state-of-the-art).\n\nThe problem is novel / under-explored and there is no existing benchmark.\nWe put in significant efforts to design 2 structure search environments and 3 fine-tuning environments, which requires weeks (even months) of engineering (robotics xml parser, graph xml generator, forward-kinematics, states mapping, etc.). We will release the code and environments after the reviewing period.\n\nWe argue the evaluation of research should not be constrained by the number of experiments. And more focus can be paid on the novelty of algorithms and the inspiration that can be brought to the community.\n\nWe would like to emphasize that our experiments show that, in the high-fidelity simulation like MuJoCo (previous research is conducted either in 2D environments or with simplified self-made engine), no previous approach can efficiently search for athletic walker or swimmer structures.\nUnlike the previous approaches that optimize the graph and the controllers separately, our proposed method jointly optimize discrete graph structure and the continuous controller parameters at the same time. Our joint optimization is a novel formulation, and effective approach that outperforms all the other baseline methods. \n\nThis paper lies in the intersection of graph learning, reinforcement learning, robotics and structure search. Although it is a small step towards automatic robot structure search, we believe it will inspire following work in robotics, graph generation and neural architecture search.\n", "The response makes the paper clearer. The added comparisons are interesting, although they could be more in depth. I keep my response as it was, due to the interesting proposed approach, and the obtained results.", "Thank you for updating the paper with correct axis labels. Overall, I still feel the experiment section is very weak and the results are only shown in a few selected environments. Hence, I keep my review to be same, i.e., 6.", "Thanks to all for the detailed reviews and corresponding responses.\n\nA revised version has been posted. There is also a useful \"Compare Revisions\" choice when you get to the Revisions page.\n\nIt would be good to hear from the reviewers if their concerns have been addressed, and if they are going to make any score revisions. There is still some disparity, mainly surrounding the experimental evaluation.\n\nmany thanks (area chair)", "We thank the reviewers for their response and suggestions. We have updated the paper and summarized the modifications here based on their feedback.\n\n1. The abbreviation for “evolutionary structure search” is now changed from “ES” to “ESS” to reduce ambiguity. “ES” is abbreviated for “evolutionary search” and “evolutionary structure search” simultaneously in our original submission. \n\n2. We rename “Graph Mutation (GM)” into “Graph Mutation with Uncertainty (GM-UC)”.\n\n3. We added additional baselines from previous literature to benchmark the performance of our algorithm, and show that our proposed algorithm has significant improvement both quantitatively and qualitatively.\n\nIn particular, we added the following baselines:\n\na. 
ESS-Sims (Sims, 1994)\nThis method was proposed in (Sims, 1994) and applied in (Cheney, 2014) and (Taylor, 2017), and has been the classical and most successful algorithm in automatic robot design.\nIn the original paper, the author used evolutionary strategy to train a human-engineered one-layer neural network. With the recent progress in robotics and reinforcement learning, we replaced the network with a 3-layer MLP and trained it with PPO instead of evolutionary strategy.\n\nb. ESS-Sims-AF\nIn the original (Sims, 1994), amortized fitness is not used.\nAlthough amortized fitness cannot be applied across ESS topologies, since the shape of the network parameters changes, it can be applied among agents with the same topology. We name this variant of ESS-Sims “ESS-Sims-AF”.\nThis algorithm is essentially the old “ES” baseline in the earliest revision of the paper.\n\nc. ESS-GM-UC\n“ESS-GM-UC” is a variant of “ESS-Sims-AF” combined with Graph Mutation with Uncertainty. We also want to explore how GM-UC affects the performance without the use of a structured model like a GNN.\n\nd. ESS-BodyShare\nWe also want to answer the question of whether the graph neural network is needed.\nAs suggested by Reviewer 3, besides unstructured models like the fully-connected network, we designed a structured model by removing the message propagation module and named it “ESS-BodyShare”.\n\ne. RGS (random graph search)\nThe same baseline as described in the earlier revision.\n\nThe final performance of NGE and the baselines is now shown in Figure 2 in the latest revision, which we summarize in the following table. \n\n\t | NGE | ESS-Sims | ESS-Sims-AF | ESS-GM-UC | ESS-BodyShare | RGS\nFish | ** 70.21 ** | 38.32 | 51.24 | 54.40 | 54.97 | 20.96\nWalker | ** 4157.9 ** | 1804.4 | 2486.9 | 2458.19 | 2185.1 | 1777.3\n\nThe results show that NGE is significantly better than previous approaches and baselines. \n\n4. We improved the writing of the paper.\nIn particular, we added more literature review of related work as requested by the reviewers,\nand we re-organized the writing of Sections 3.1, 3.2 and 3.4 so that it is easier to understand and causes less confusion.\n\nSims, 1994, \"Evolving virtual creatures.\" Proceedings of the 21st annual conference on Computer graphics and interactive techniques. ACM, 1994.\n\nCheney, 2014, et al. \"Unshackling evolution: Evolving soft robots with multiple materials and a powerful generative encoding.\" ACM SIGEVOlution 7.1 (2014): 11-23.\n\nTaylor, 2017. \"Evolution in virtual worlds.\" arXiv preprint arXiv:1710.06055 (2017).", "We are afraid that there seems to be some confusion regarding our paper. We apologize if this is caused by the lack of clarity in the use of the abbreviation “ES” (see the general response). In the latest revision, “evolutionary structure search” is abbreviated as “ESS” for clarity. We emphasize that in the paper, NO “evolutionary strategy” is used to train the policy, but rather PPO (see Sections 2.1 and 3.2).\n\nWe hope the reviewer can take time to revisit the paper in light of this clarification. 
Also, we now have 5 baselines from previous research and modern variants, which we believe further showcases our contributions.\n\nQ1: The experiments do not include any strong baseline\n\nWe added more baselines to further strengthen the significance of our work with respect to the previous approaches.\n\nThe baselines now include (a)“ESS-Sims” (Sims, 1994), (Cheney, 2014), (Taylor, 2017), (b) ESS-Sims-AF, (c) ESS-GM-UC, (d) ESS-BodyShare and (5) Random graph search. We refer to the details of each baseline in the general response.\n\n\t | NGE | ESS-Sims | ESS-Sims-AF | ESS-GM-UC | ESS-BodyShare | RGS\nfish | **70.21** | 38.32 | 51.24 | 54.40 | 54.97 | 20.96\nWalker | **4157.9** | 1804.4 | 2486.9 | 2458.19 | 2185.1 | 1777.3\n\nThe results show that NGE is significantly better than previous approaches and baselines. We did an ablation study by sequentially adding each sub-module of NGE separately. The table shows that submodules are effective and increase the performance of graph search.\n\nQ2: a) Optimizing both the controller and the hardware has been previously studied in the literature. Is it worth using a neural graph? b) All algorithms should optimize both G and theta for a fair comparison.\n\nBy “optimizing both G and theta”, we meant to indicate that the learned controllers can be transferred to the next generation even if the topologies are changed (instead of throwing away old controllers). We note that only NGE among all the baselines has the ability to do that. Graph neural network formulation is KEY here, enabling it to perform this efficient policy transfer.\nTo the best of our knowledge, the traditional methods require re-optimizing theta from scratch for each different topology, which is computationally demanding and breaks the joint-optimization. \nNGE approximately doubles the performance of previous approach (Sims, 1994) as shown in Q1.\n\nPlease refer to Section 3.1 and Section 3.4 for more details.\n\nQ3: You should use an existing ES implementation (e.g., from some well-known package) instead of a naive implementation, and as additional baseline also CMA-ES.\n\nAgain, we apologize for the confusing use of “ES” abbreviation. Evolutionary strategy is not used in the paper. We invite the reviewer to re-read our paper, since it seems to have led to a major misunderstanding.\nCMA-ES updates and utilize the covariance matrix of sampling distribution, which is not directly applicable to discrete structure optimization. We believe it will be a valuable future research direction.\n\nQ4: Providing the same computational budget seem rather arbitrary and depends on implementation.\n\nWe are unsure what the reviewer is indicating, and would appreciate the additional clarification.\nIn terms of the computational budget for each experiment, we compared different algorithms under different computational budget metrics, more specifically, “wall-clock time”, “number of updates”, and the “final converged performance”. NGE performs best among all algorithms.\nWe emphasize the fact that wall-clock time is a more common and realistic metric for comparing the structure search in practice. \n\nWe agree that computational budget depends on implementation, and the curves in the paper are plotted based on the number of iterations/parameter update, which is independent of the implementation.\n\nQ5: The writing of the paper\n\nWe sincerely thank the reviewer for the suggestions. 
We updated the changes in the latest version accordingly.\n", "We thank the reviewer for the suggestions.\n\nQ1: Robot design was explored in (Sims, 1994) etc. The novelty of the paper is fairly incremental.\n\nWe respectfully disagree and believe our contributions are significant. We note that only NGE among all the baselines has the ability to optimize both the graph G and the controller parameters. The graph neural network formulation is KEY here, enabling this efficient policy transfer. To the best of our knowledge, the traditional methods (such as (Sims, 1994)) require re-optimizing the parameters of the controllers from scratch for each different topology, which is computationally demanding and breaks the joint optimization. \n\nTo further showcase our work with respect to prior art, we added (Sims, 1994) as an additional baseline in the latest revision. We refer the reviewer to the general response for details. NGE has about 2x the performance of (Sims, 1994) in both the fish and walker environments. Moreover, we argue the videos of (Sims, 1994) might be confusing, as they mix the results of policy evolution from human-designed robots with structure evolution. \n\nQ2: Can it be applied to more complex morphologies? Humanoid etc. maybe?\nNGE can be applied to evolve humanoids; however, there are two major difficulties in doing that in practice.\n1. Training humanoid controllers is orders of magnitude more difficult than training a cheetah (Schulman, 2017).\n2. To evolve realistic humanoid structures (e.g. hands, symmetrical limbs), one would need more realistic environments that better reflect the tasks and complexity of the real world.\nHowever, we agree that this is a very interesting direction for the future.\n\nQ3: Comparison to more baselines, for example models with no message passing.\n\nWe thank the reviewer for pointing out the baseline of no message passing in the GNN, which we name ESS-BodyShare. \n\nIn the latest revision, we have 5 baselines from previous research and modern variants, which further showcases the significance of our work. In general, NGE shows significant improvements both quantitatively and qualitatively. We refer the reviewer to the general response for further information.\n\nSpecifically for the ESS-BodyShare baseline:\n\t | NGE | ESS-BodyShare\nfish | 70.21 | 54.97 (78.3% of NGE) \nWalker | 4157.9 | 2185.1 (52.5% of NGE)\n\nIn environments where global information is needed (for example, the walker with multiple rigid-body contacts), the performance is jeopardized. But in easier environments, message passing is less necessary.\n\n\nQ4: Clarification of Figure-4 (Section-4.2)\n\nOur aim was to show that in the case where the human-engineered topology needs to be preserved, it is better to co-evolve the attributes and controllers with NGE rather than only training the controllers (controllers are trained from scratch for both NGE and the baselines).\n\nThe x-axis was scaled according to the number of updates. We apologize for the lack of clarity. We revised the x-axis from “generations” to parameter “updates” in the latest revision.\n\nIn the latest revision, we also included the curve where the topologies are allowed to change, which leads to better performance but does not necessarily preserve the initial structure.\n\nSchulman, 2017. 
\"Proximal policy optimization algorithms.\" arXiv preprint arXiv:1707.06347 (2017).", "We thank the reviewer for the reading and suggestions of our paper.\n\nQ1: The exact difference between the proposed method and the ES baseline is not as clear as it could be.\n\nWe agree and apologize for the lack of clarity in some parts of our paper. We renamed all the models based on the original papers and their properties. We refer the reviewer to general response for further details of each baseline algorithms.\nWe also improved clarity in the revised version.\n\nQ2: The second point is that the proposed approach seems to modify a few things from the ES baseline.\n\nWe thank the reviewer for the insightful suggestion. In the latest version, to test the efficacy of each submodule of NGE, the baselines now include the algorithm with the inclusion of the pruning step, and the algorithms with AF and without AF using MLP.\n\nMore specifically, the baselines are named:\n\n1. ESS-Sims\nIt is the baseline algorithm without the use of AF, as use by (Sims, 1994), (Cheney, 2014) and (Taylor, 2017).\n2. ESS-Sims-AF\nThe modern variant of ESS-Sims with the inclusion of AF.\n3. ESS-GM-UC\nThe modern variant of ESS-Sims with the inclusion of AF and graph mutation with uncertainty (pruning).\nFor this baseline, we included the pruning module on top of ESS-Sims-AF. Similar to the original baselines available, we performed a grid search of hyperparameters and plot the average performance of the best set of hyperparameters.\n\n\t | NGE | ESS-Sims | ESS-Sims-AF | ESS-GM-UC | ESS-BodyShare | RGS\nfish | **70.21** | 38.32 | 51.24 | 54.40 | 54.97 | 20.96\nWalker | **4157.9** | 1804.4 | 2486.9 | 2458.19 | 2185.1 | 1777.3\n\nNotice that GM-UC has a lower performance gain with the fully-connected network (ESS-Sims) than with GNN. We speculate that this happens in ESS-Sims because the controller is less dependent on the graph structure, and thus the fitness does not well capture the information about the topology. Thus, GM-UC is not able to extract as much information as with GNN.\n\nOn the other hand, the use of AF can greatly affect the performance. The previous approach ESS-Sims can only get 38.32 / 1804 average final reward for fish and walker, respectively. The performance of walker is even very close to random graph search with no evolution. With the help of AF, the performance increases from 38.32 to 51.24 and 1804.4 to 2486.9, respectively.\n", "This paper proposes an approach for automatic robot design based on Neural graph evolution.\nThe overall approach has a flavor of genetical algorithms, as it also performs evolutionary operations on the graph, but it also allows for a better mechanism for policy sharing across the different topologies, which is nice.\n\nMy main concern about the paper is that, currently, the experiments do not include any strong baseline (the ES currently is not a strong baseline, see comments below). \nThe experiments currently demonstrate that optimizing both controller and hardware is better than optimizing just the controller, which is not surprising and is a phenomenon which has been previously studied in the literature.\nWhat instead is missing is an answer to the question: Is it worth using a neural graph? 
What are the advantages and disadvantages compared to previous approaches?\nI would like to see additional experiments to answer these questions.\n\nIn particular, I believe that for any algorithm you compare against, you should optimize both G and theta, since optimizing purely the hardware is unfair.\nYou should use an existing ES implementation (e.g., from some well-known package) instead of a naive implementation, and as an additional baseline also CMA-ES. \nIf you can also compare against one or two algorithms of your choice from the recent literature it would also give more value to the comparison.\n\nDetailed comments:\n- in the abstract you say that \"NGE is the first algorithm that can automatically discover complex robotic graph structures\". This statement is ambiguous and potentially unsupported by evidence. How do you define complex? That can or that did discover?\n- in the introduction you mention that automatic robot design had limited success. This is rather subjective, and I would tend to disagree. Moreover, the same limitations that apply to other algorithms to make them successful, in my opinion, apply to your proposed algorithm (e.g., the difficulty of moving from simulation to the real world).\n- The digression at the bottom of the first page about neural architecture search seems out of context and interrupts the flow of the introduction. What is the point that you are trying to make? Also, note that some of the algorithms that you are citing there have indeed been applied beyond architecture search, e.g., Bayesian optimization is used for gait optimization in robotics, and genetic algorithms have been used for automatic robot design.\n- The stated contributions number 3 and 5 are not truly contributions. #3 is so generic that a large part of the previous literature on the topic falls under this category -- not new. #5 is weak, and tells us more about the limitations of random search and naive ES than necessarily a merit of your approach. \n- Sec 2.2: \"(GNNs) are very effective\" effective at what? what is the metric that you consider?\n- Sec 3 \"(PS), where weights are reused\" can you already go into more details or refer to later sections?\n- First line of page 4 you mention AF, without ever introducing the acronym before.\n- Sec 3.1: the statements about MB and MF algorithms are inaccurate. Model-based RL algorithms can work in real-time (e.g. http://proceedings.mlr.press/v78/drews17a/drews17a.pdf) and have been shown to have the same asymptotic performance as MF controllers for simple robot control (e.g. https://arxiv.org/abs/1805.12114) \n- \"to speed up and trade off between evaluating fitness and evolving new species\" Unclear sentence. speed up what? why is this a trade-off?\n- Sec 3.4 can you recap all the parameters after eq.11? going through Sec 3.2 and 2.2 to find them is quite annoying.\n- Sec 4.1: I would argue that computational cost is rarely a concern among evolutionary algorithms. The cost of evaluating the function is typically more pressing, and as a result it is important to have algorithms that can converge within a small number of iterations/generations.\n- Providing the same computational budget seems rather arbitrary at the moment, and it heavily depends on the implementation. How many evaluations do you perform for each method? Why not use the same budget of experiments? 
The authors propose a scheme\nbased on a graph representation of the robot structure, and a graph neural network as the controller. The experiments show\nthat the proposed scheme is able to produce walking and swimming robots in simulation. The results in this paper are impressive, and the paper seems free of technical errors. \n\nThe main criticism I have is that I found the paper hard to read in places. In particular, the exact difference between the proposed method and the ES baseline is not as clear as it could be. This makes the contribution of this paper in terms of the method\nhard to judge. Please include a further description of the ES cost function and algorithm in the main body of the paper.\n\nThe second point is that the proposed approach seems to modify a few things from the ES baseline. The efficacy of the separate modifications should be tested. Therefore I would like to see experiments with the ES cost function, but with\ninclusion of the pruning step, and experiments with the AF function but without the pruning step.", "[Summary]:\nThis paper tackles the problem of automatic robot design. The most popular approach to doing this has been evolutionary methods, which work by evolving the morphology of agents in a feed-forward manner using propagation and mutation rules. This is a non-differentiable process and relies on maintaining a large pool of candidates out of which the ones with the highest fitness are chosen. In robot design for a given reward-specified task, training each candidate design with RL is an expensive process and not scalable. This paper uses a graph network to train each morphology with RL, thereby allowing the controller to share parameters and reuse information across generations. This expedites the score-function evaluation, improving the time complexity of the evolutionary process.\n\n[Strengths]:\nThis paper shows the promise of graph-network-based controllers augmented with evolutionary algorithms. The paper is quite easy to follow.\n\n[Weaknesses and Clarifications]:\n=> The robot design area has been explored extensively in the classical work of Sims (1994) etc. using ES. Given that, the novelty of the paper is fairly incremental, as it uses NerveNet to evaluate fitness and ES for the main design search.\n=> Environment: The experimental section of the paper can be further improved. The approach is evaluated only in three cases: fish, walker, cheetah. Can it be applied to more complex morphologies? Humanoid etc. maybe?\n=> Baselines: The comparison provided in the paper is weak. As it stands, it compares to random graph search and ES. But there are better baselines possible. One such example would be to have a network for each body part and share parameters across each body part. This network takes some identifying information (ID, shape etc.) about the body part as input. As more body parts are added, more such network modules can be added. How would the given graph network compare to this? This baseline can be thought of as a shared-parameter graph with no message passing.\n=> The results shown in Figure-4 (Section-4.2) seem unclear to me. As far as I understand, the model starts with a hand-engineered design and is then fine-tuned using the evolutionary process. However, the original performance of the hand-engineered design is surprisingly bad (see the first data point in any plot in Figure-4). Does the controller also start from scratch? If so, why? 
Also, it is not clear what the meaning of generations is if the graph is fixed; can't it be learned all at once?\n\n[Recommendation]:\nI ask the authors to address the comments raised above. Overall, this is a reasonable paper, but the experimental section needs much more attention." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 8, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "SJgfTweYk4", "S1x7SdplAX", "BkeBMdpg07", "iclr_2019_BkgWHnR5tm", "iclr_2019_BkgWHnR5tm", "SygmVm7XpQ", "rJxUPOqhhX", "HJg1qgpZTm", "iclr_2019_BkgWHnR5tm", "iclr_2019_BkgWHnR5tm", "iclr_2019_BkgWHnR5tm" ]
iclr_2019_BkgtDsCcKQ
Function Space Particle Optimization for Bayesian Neural Networks
While Bayesian neural networks (BNNs) have drawn increasing attention, their posterior inference remains challenging, due to the high-dimensional and over-parameterized nature. To address this issue, several highly flexible and scalable variational inference procedures based on the idea of particle optimization have been proposed. These methods directly optimize a set of particles to approximate the target posterior. However, their application to BNNs often yields sub-optimal performance, as such methods have a particular failure mode on over-parameterized models. In this paper, we propose to solve this issue by performing particle optimization directly in the space of regression functions. We demonstrate through extensive experiments that our method successfully overcomes this issue, and outperforms strong baselines in a variety of tasks including prediction, defense against adversarial examples, and reinforcement learning.
accepted-poster-papers
Reviewers are in consensus and recommended acceptance after engaging with the authors. Please take the reviewers' comments into consideration to improve your submission for the camera-ready.
train
[ "SJgtpDLf1E", "HJlcUPIfkV", "ByeUSNVqCX", "Hyx6dqfunQ", "HyloUTy5nQ", "HJlsKYptRX", "SylRwHjQAm", "SJlJ9HiQCQ", "Hyxehfi70Q", "H1lfE7o7Cm", "BJeUUXsQ0X", "SklEAZjXR7", "rJepoAru2m" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "Thanks for acknowledging our contribution and revising the rating.", "Thank you for acknowledging our contributions, revising the rating, and for the nice comments. We appreciate that. Below, we briefly address the extra questions.\n\nQ1: Regarding the impact of B' on performance, and the values we used in the experiments\nA1: We will make them clearer in the final version. In fact, (1) we did not tune B' thoroughly, and only experimented with B'<=B/2; inside this range, increasing B' improves predictive performance, although the improvement becomes marginal when B' is large. E.g. on the Concrete dataset in Section 5.2.1, setting B'=100 improves the average NLL by 0.04 compared to B'=10, and by 0.08 compared to B'=0; the standard deviation of NLL on this dataset is 0.04.\n(2) In the synthetic experiment in Section 5.1, varying B' does not have a significant impact on the quality of uncertainty estimation. A possible explanation is that the smoothness constraint encoded in the function-space prior \"propagates\" uncertainty in q[f(X_{train})] to q[f(x)] for nearby x.\n(3) The value of B-B' is given in the text. For all experiments on feed-forward networks, we set B' to min(100, B/2). This value was determined by grid search (in {1, 10, 100}) on a UCI regression dataset. In the ResNet experiment we used B'=4, and we did not experiment with other values.\n\nQ2: On downsampling f(x) in classification\nA2: We apologize that the phrase \"for a single data point x\" might be misleading here. We will make it clearer in the final version. In fact, denote the batch size (used for prior estimation) as B and the number of classes as C; then without down-sampling, we would need to approximate the prior distribution of a B*C-dimensional vector, the concatenated function values for all data points in the batch. This is high-dimensional compared to the 1d-regression case, where the dimension of the concatenated function values is B.\nSo we choose to down-sample the indices of this vector, i.e. to down-sample the set of classes for which we take the corresponding logits, concatenate them for all data points in a batch, and estimate the prior distribution. It is a type of stochastic approximation similar to mini-batching, and is not necessary: alternatively, we can use a smaller B.\n\nQ3-i: On the accuracy on clean data in the adversarial robustness experiment:\nA3-i: We will clarify this in more detail in the final version. As widely observed in the literature (e.g., Liao et al., 2017), it is common to sacrifice some (often tiny) performance on clean samples in order to obtain a significant improvement in adversarial robustness. For BNNs, under certain priors, the posterior mean estimate could be sub-optimal on iid test samples, compared to an ensemble of MAP estimates; and it is possible the prior we used in the ResNet experiment falls into this case. However, such a prior can still be useful for adversarial defense, because, as we have reviewed in Section 5.3, the latter task involves prediction on uncertain inputs. As we focused on adversarial robustness, we did not adjust the prior specifically to optimize accuracy on clean data, and instead used the Gaussian prior corresponding to the original L2 regularizer in ResNet. We leave the search for a more sensible prior as future work.\n\nQ3-ii: On the possibility of adjusting \\mu for robustness applications:\nA3-ii: Thanks for the suggestion. We agree that it will be an interesting direction of future work. E.g., 
we could add to \mu a component that focuses on the regions of adversarial examples, similar to what we proposed for domain adaptation; and to improve performance on general CV tasks, in the KDE we could specify better kernels than isotropic Gaussian kernels. It is possible that a careful adaptation of our method could further improve adversarial robustness, or performance on general CV tasks.", "Thank you for the detailed response. The clarity of the paper has been improved significantly and all my questions have been reasonably addressed. I have upgraded my rating accordingly.", "PAPER SUMMARY:\n\nThis paper proposes a new POVI method for posterior inference in BNNs. Unlike existing POVI techniques that optimize particles in the weight space, which often yields sub-optimal results on BNNs due to their over-parameterized nature, the new POVI method aims to maintain and update particles directly on the space of regression functions to overcome this sub-optimality issue.\n\nNOVELTY & SIGNIFICANCE:\n\nIn general, I am inclined to think that this paper has made an important contribution with very promising results, but I still have doubts about the proposed solution technique (as detailed below) and am not able to converge to a final rating at this point.\n\nTECHNICAL SOUNDNESS:\n\nThe authors claim that the new POVI technique operates directly on the function-space posterior to sidestep the over-parameterization issue of BNNs, but ultimately each function particle is still identified by a weight particle (as detailed in Eq. (2)). In terms of high-level ideas, I am not sure I understand the implied fundamental differences between this work and SVGD and how significant they are.\n\nOn the technical level, the key difference between the proposed work and SVGD seems to be the particle update equation in (2): The gradient flow is multiplied with the derivative of the BNN evaluated at the corresponding weight particle (in SVGD, the gradient flow was used alone). The authors then mentioned that this update rule results from minimizing the difference between f(X, theta) and f(X, theta) + \\epsilon * v(f(., theta))(X). I do not follow this step -- please elaborate.\n\nThe theoretical justification that follows Eq. (3) is somewhat incoherent: What is \\mathcal{E}(q(f(x)))? This has not been defined before or anywhere in the main text. Furthermore, the paragraph that follows the theoretical justification implies the computation of the gradient flow in (3) involves the likelihood term -- why is that?\n\nIn Algorithm 1, why do we sample from both the training set and some measure \\mu? I am sure there must be a reason for this but I could not find it anywhere except for a short statement that \"for convenience, we choose \\mu in such a way that samples from \\mu always consists a mini-batch from X\". Please elaborate.\n\nWill the proposed POVI converge?\n\nCLARITY:\n\nI think this paper has clarity issues with the technical exposition. The explanation tends to be very limited and even appears incoherent at important points. For example, see\nmy 3rd point above. \n", "Based on the revision, I am willing to raise the score from 5 to 7.\n\n========================================== \n\nThe authors address the problems of variational inference in over-parameterized models and the problem of the collapse of particle-optimization-based variational inference methods (POVI). 
The authors propose to solve these problems by performing POVI in the space of functions instead of the weight space and propose a heuristic approximation to POVI in function spaces.\n\nPros:\n1) I believe that this work is of great importance to the Bayesian deep learning community, and may cause a paradigm shift in this area.\n2) The method performs well in practice, and alleviates the over-parameterization problem, as shown in Appendix A.\n3) It seems scalable and easy to implement (and is similar to SVGD in this regard); however, some necessary details are omitted.\n\nCons:\n1) The paper is structured nicely, but the central part of the paper, Section 3, is written poorly; many necessary details are omitted.\n2) The use of the proposed approximations is not justified\n\nIn order to be able to perform POVI in function space, the authors use 4 different approximations in succession. The authors do not check the impact of those approximations empirically, and only assess the performance of the final procedure. I believe it would be beneficial to see the impact of those approximations on simple toy tasks where function-space POVI can be performed directly. Only two approximations are well-motivated (mini-batching and approximation of the prior distribution), whereas the translation of the function-space update and the choice of mu (the distribution from which we sample mini-batches) are stated without any details.\n\nMajor concerns:\n1) As far as I understand, one can see the translation of the function-space update to the weight-space update (2) as one step of SGD for the minimization of the MSE \\sum_x (f(x; \\theta^i) - f^i_l(x) - \\eps v(f^i_l)(x))^2, where the sum is taken over the whole space X if it is finite, or over the current mini-batch otherwise. The learning rate of such an update is fixed at 1. This should be clearly stated in the paper, as for now the update (2) is given without any explanation.\n\n2) I am concerned with the theoretical justification paragraph for the update rule (3) (mini-batching). It is clear that if each marginal is matched exactly, the full posterior is also exactly matched. However, it would usually not be possible to match all marginals using parametric approximations for f(x). Moreover, it is not clear why updates (3) would even converge at all or converge to the desired point, as it is essentially the update for an optimization problem (minimization of the MSE done by SGD with a fixed learning rate), nested into a simulation problem (function-space POVI). This paragraph provides a nice intuition as to why the procedure works, but theoretical justification would require more rigor.\n\n3) Another approximation that is left unnoted is the choice of mu (the distribution over mini-batches). It seems to me from the definition of function-space POVI that we need to use the uniform distribution over the whole object space X (or, if we do not do mini-batching, we need to use the full space X). However, the choice of X seems arbitrary. For example, for MNIST data we may consider all real-valued 28x28 matrices, where all elements lie on the segment [0,1]. Or, we could use the full space R^28x28. Or, we could use only the support of the empirical distribution. I have several concerns here:\n3.1) If the particles are parametric, the solution may greatly depend on the choice of X. As the empirical distribution has finite support, it would be dominated by other points unless the data points are reweighted. 
And as the likelihood does not depend on the out-of-dataset samples, all particles f^i would collapse into the prior, completely ignoring the training data.\n3.2) If the prior is non-parametric, f(x) for all out-of-dataset objects x would collapse to the prior, whereas the f(x) for all the training objects would perfectly match the training data. Therefore we would not be able to make non-trivial predictions for objects that are not contained in the training set, unless the function-space kernel of the function-space prior somehow prevents it. This poses a question: how can we ensure the ability of our particles to interpolate and extrapolate without making them parametric? Even in the parametric case, if we have no additional regularization and flexible enough models, they could overfit and have a similar problem.\nThese two concerns may be wrong, as I did not fully understand how the function-space prior distribution works, and how the function-space kernel is defined (see concern 4).\n\n4) Finally, it is not stated how the kernels for function-space POVI are defined. Therefore, it is not clear how to implement the proposed technique, and how to reproduce the results. Also, without the full expression for the weight-space update, it is difficult to relate the proposed procedure to the plain weight-space POVI with the function-value kernel, discussed in Appendix B.\n\nMinor comments:\n1) It is hard to see the initial accuracy of different models from Figure 3 (accuracy without adversarial examples). Also, what is the test log-likelihood of these models?\n2) It seems that the sign in line 5 on page 4 should be '-'\n\nI believe that this could be a very strong paper. Unfortunately, the paper lacks a lot of important details, and I do not think that it is ready for publication in its current form.", "Thank you for the clarification; the paper has become much clearer after the revision, and the extended experiments provide additional motivation/justification for the proposed approximations. Most of my concerns have been answered.\n\nI still have several questions and comments:\n\n1) I am still convinced that the predictive performance of the model would greatly depend on the choice of mu, although this choice *seems* to be arbitrary. However, now I see it differently: as we are using the parametric approximation, it is not possible to perfectly approximate the posterior on the full set X. By choosing different distributions mu, we can choose the regions of the input space in which we would like to approximate the posterior better. For example, if we seek better predictive performance, it makes perfect sense to use the KDE of the training data and use unlabeled data to gather more information about the domain. If one is interested in good out-of-domain uncertainty, it would make sense to add some out-of-domain data. I believe that the careful choice of mu is a nice way to incorporate domain information into the trained discriminative model.\n\n2) How would one balance B' and B-B'? How does it impact the uncertainty and predictive performance? What values of B and B' do you use in the experiments?\n\n3) What do you mean by \"f(x) is already high-dimensional for a single data point x, one should down-sample f(x) within each x\"? If I understand correctly, dim f(x) is equal to the number of classes, which is typically not very high. Could you elaborate more on the mentioned down-sampling? 
How exactly is it performed, and why is it needed?\n\n4) I am not sure whether the accuracy degradation in Table 10 can be attributed to adversarial robustness. If I understand correctly, you do not perform adversarial training or otherwise address adversarial examples explicitly. Still, it would be interesting to see how the choice of B' and the parameters of the KDE would influence the prediction performance and the robustness to adversarial examples. I suspect that it is possible to obtain some trade-off.\n\nTo sum up:\n+ The paper provides an elegant way to perform function-space posterior inference that does not suffer from overparameterization\n+ The paper provides a nice way to explicitly choose the regions in the input space in which the posterior should be approximated better\n- Although the approximations are well-motivated, there are many of them. It is not clear how well the proposed procedure corresponds to true posterior inference, or how different parameters of the procedure impact the inference.", "Thank you for your positive and constructive feedback. We have incorporated it into the revision. As you expressed some confusion about our proposed method and contribution, we first present an informal but more high-level explanation; responses to your individual questions are presented following that.\n\n####### our high-level idea, its difference from weight-space SVGD, and contribution #######\n\nAs stated in Section 2 of the revision, the vanilla SVGD performs Bayesian inference of BNNs in the space of network weights (namely “weight-space inference”). While such a view is natural, our key observation is that it can make the problem unnecessarily hard. Namely, SVGD works by placing particles in high-probability regions, and at the same time making sure they are distant from each other, so they constitute a representative sample of the posterior. But in the weight-space posterior of a BNN, there exist (at least) exponentially many posterior maxima that can be distant from each other while corresponding to the same function. So a possible convergence point is one where each SVGD particle occupies one of them; and in prediction, this posterior approximation does not improve over a single point estimate. In other words, a good approximation of the weight-space posterior isn’t necessarily good in function space.\n\nWe propose to do SVGD in function space. The key difference is that distance metrics are now defined on functions, so networks corresponding to the same function are far less likely to be represented by multiple particles (see Remark 3.2 and Appendix D in our revision). This is non-trivial as the function space is infinite-dimensional, but we have built an effective approximation that is very easy to implement.\n\nWe believe our contribution is significant, as\n(1) our proposal is very easy to implement, and performs extremely well in practice;\n(2) inference in over-parameterized models has attracted attention for a long time (see the first paragraph of Section 4). By shifting the focus to the prediction function instead of its specific parameterization, our work presents a novel solution to this problem.", "####### Response to Technical Questions #######\n\nQ1. Elaboration of the update rule (2) (now Eq. (3) in revision):\nA1: The update rule (3) corresponds to doing one step of gradient descent on theta to minimize the squared distance between the new function values f(X;theta) and the target f(X;theta_old)+\\epsilon*v(f(X;theta_old)), where theta_old denotes the current particle. We clarified this in Remark 3.1 in our revision.\n\nQ2. 
What is \\mathcal{E}(q(f(x)))?\nA2: Thanks for pointing this out. We revised the text to clarify it. \\mathcal{E}(q[f(x)]) is an energy functional defined on the space of (variational) distributions, which achieves its unique minimum at the true posterior p. Its exact form varies across different POVI methods. Please refer to [Chen et al., 2018] for the exact form.\n\nQ3. Why (3) (now Eq (4) in revision) involves the likelihood term:\nA3: In fact, any kind of gradient flow (GF) that corresponds to a Bayesian inference procedure will include the model posterior, and subsequently, through Bayes' rule, the likelihood term. The GFs considered in this work are defined in the newly added Eq. (1), in which v is defined in Table 1. You can see there that they all include the posterior term. We have revised the paragraph you mentioned to make this clearer.\n\nQ4. Why we sampled from both the training set and some measure \\mu:\nA4: Thanks for pointing this out. In fact, this is a typo. The correct statement is that we only need to sample from \\mu, which includes samples from the training set. We sincerely apologize for the confusion, and have fixed it in the revision. We have also revised Section 3.1.2 to clarify why we need samples from both the training set and another distribution with full support. An intuitive explanation is:\n(1) firstly, to make sure prediction interpolates and extrapolates, at least some components of \\mu (recall it is defined on X^B, and a sample from \\mu contains B points in the input space) must visit the entire input space, i.e. have full support. So these components can’t only include the training points, and we incorporated a full-support component for this purpose.\n(2) if samples from \\mu do not contain training points with non-zero probability, we will have trouble computing the likelihood, as it is only defined (in closed form) on training samples. Therefore, we added the training-sample component.\n\nQ5. On whether the proposed method will converge:\nA5: We have added convergence plots in Appendix A.2.4 and Appendix C. We can see that in practice, the proposed procedure is fairly robust, and has no convergence issues. Moreover, our algorithm is an approximation to an idealized algorithm: the simulation of the ``averaged gradient flow’’ in the paragraph “Theoretical Justification” of Section 3.1.2. As we discussed there, simulation of the averaged GF converges to a unique optimum (i.e., the true posterior).\nFinally, like much previous work, we do introduce some approximations (e.g., stochastic approximations to the averaged GF and a parametric approximation to the particle functions). Developing a theory that simultaneously considers all approximations is very hard, so we present thorough empirical evidence: in Section 5.1, the approximate posterior obtained by our algorithm is of high quality; and in the rest of Section 5, the approximation works well in the real world. Furthermore, we have added numerical studies on the impact of these approximations in Appendix C. These results suggest that our method will be of much value to our community.\n\n####### Clarity issues #######\n\nThanks for your suggestion. We revised the paper thoroughly to clarify some ideas that may confuse the potential audience.\n\n####### References #######\n\n[Chen et al., 2018]: Changyou Chen, Ruiyi Zhang, Wenlin Wang, Bai Li, and Liqun Chen. A unified particle-optimization framework for scalable Bayesian sampling. 
arXiv preprint arXiv:1805.11659, 2018.\n[Ambrosio et al., 2008]: Luigi Ambrosio, Nicola Gigli, and Giuseppe Savaré. Gradient flows: in metric spaces and in the space of probability measures. Springer Science & Business Media, 2008.", "Thank you for your positive and constructive feedback, which we have incorporated into the revision. We address individual questions below.\n\nQ1: On the need for architecture search:\nA1: We address this concern from three aspects, as detailed below.\n(1) It is true that the degeneracy of weight-space POVI worsens as architecture complexity increases. We presented such an evaluation in Fig. 5, Appendix A.1. In the revision, we also added experiments on some UCI datasets using narrower network architectures in Appendix A.2.4, in which the performance of weight-space POVI methods improves, but is still outperformed by function-space methods.\n(2) We note that our previous experiment setups are standard in the BNN literature, and are fair to all baselines: in comparisons to almost all baselines, we followed the setup in their original paper. The only exceptions are the synthetic data experiments in Appendix A.1, which explicitly evaluate the impact of model complexity; and the ResNet-32 experiment, which, to our knowledge, is not considered by any previous work in BNN inference.\n(3) Finally, to model high-dimensional datasets, it is necessary to use large networks. For example, ResNet-32 has 0.4 million parameters; and the RNNs used in language modeling have hundreds of hidden units in each layer. Complex models lead to more severe over-parameterization, and BNN benchmarks should reflect this property. While we agree on the importance of evaluating different BNN architectures, e.g., for regression tasks, it requires a huge amount of work, given the number of baselines we considered, and such an evaluation should be a separate effort.\n\nQ2: On the possibility of introducing parameter constraints to improve weight-space VI:\nA2: We agree that this is a good idea. However, it is hard to implement in practice, for two reasons: 1) First, it is hard to cover all sources of unidentifiability by imposing parameter constraints. As an example, we show that order constraints alone cannot ensure identifiability: suppose we are to learn the ReLU function with a single hidden layer and a single hidden unit. We can scale the weights of the two connections in the network appropriately, and obtain an infinite number of modes (see the sketch below for a concrete illustration); 2) Second, it is often non-trivial to apply gradient-based optimization under such constraints, and even if such an optimization scheme is possible, its impact on the learning dynamics (of the variational parameters) is not clear.\nIn contrast, our method eliminates all sources of unidentifiability, has a simple form similar to (unconstrained) gradient descent, and converges robustly in practice. We revised the paper to include further discussions on this, in Remarks 3.1 and 3.2, where we show the resemblance of our algorithm to gradient descent; and in Appendix A.2.4, where we present empirical evidence that our algorithm converges robustly in practice. 
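To make this unidentifiability concrete, here is a small self-contained sketch (an illustration under assumed layer sizes, not code from the paper) showing that both a hidden-unit permutation and a positive per-unit rescaling of a one-hidden-layer ReLU network change the weights while leaving the computed function untouched:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_h = 4, 16
W1, b1 = rng.normal(size=(d_h, d_in)), rng.normal(size=d_h)  # hidden layer
W2, b2 = rng.normal(size=(1, d_h)), rng.normal(size=1)       # output layer

def net(x, W1, b1, W2, b2):
    h = np.maximum(W1 @ x + b1, 0.0)  # ReLU hidden activations
    return W2 @ h + b2

x = rng.normal(size=d_in)
out = net(x, W1, b1, W2, b2)

# Symmetry 1: relabeling (permuting) the hidden units.
p = rng.permutation(d_h)
out_perm = net(x, W1[p], b1[p], W2[:, p], b2)

# Symmetry 2: for ReLU, scaling unit i by c_i > 0 and its outgoing weight
# by 1/c_i leaves the function unchanged, giving a continuum of equivalent
# weight settings.
c = rng.uniform(0.5, 2.0, size=d_h)
out_scale = net(x, W1 * c[:, None], b1 * c, W2 / c[None, :], b2)

print(np.allclose(out, out_perm))   # True
print(np.allclose(out, out_scale))  # True
```

Every such relabeling or rescaling is a distinct weight-space mode representing the same function, which is why weight-space particles can spread over equivalent copies of a single function while a function-space metric treats them as one point.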
In conclusion, we believe our method is a more practical solution to unidentifiability.\n\nQ3: On the number of particles needed, and scalability:\nA3: We address this concern from three aspects, as detailed below: \n(1) We chose to use 20 particles following the original SVGD paper, so the results are comparable; and as we have discussed in the text (Remark 3.2), our method is far more particle-efficient compared to the weight-space POVI methods.\n(2) We hypothesize that even with a small number of particles, function-space methods could produce posterior approximations that are useful in practice. A reason is that the structure of the “function-space posterior” (at least in the finite-dimensional case, in which its density is well-defined) is often strikingly simple: for GP regression with a conjugate likelihood, for example, the posterior is essentially uni-modal (Rasmussen, 2004). Empirically this hypothesis is supported by our experiments on CIFAR-10, in which a posterior approximation using merely 8 particles is shown to be more robust against adversarial examples, which is presumably due to the improved representation of epistemic uncertainty.\n(3) Finally, in terms of scalability, our method can be easily parallelized using model parallelism, as the only communication needed in each iteration is to broadcast the top-layer activations (function evaluations on the mini-batch). This cost is negligible compared to sending all network weights, which is needed in data-parallel training and weight-space POVI methods. We added the discussion on scalability in Appendix D.\n\nReferences:\n[Rasmussen, 2004]: Carl Edward Rasmussen. Gaussian processes in machine learning. In Advanced lectures on machine learning, pp. 63–71. Springer, 2004.", "We thank the reviewer for the positive and constructive feedback. We revised the presentation thoroughly following it. However, there are also some misunderstandings that we hope to clarify.\n\nQ: On lack of details:\nA: We clarified the kernel specification issue you mentioned in Section 3.1.2. Our implementation will also be made public after the review process to make everything reproducible. \n\nQ: On the justification of approximations:\nA: Thanks for the suggestions. First, as you suggested, we have added simulations evaluating the impact of the parametric approximation and mini-batching in Appendix C. The results show they do not influence convergence. Second, we have thoroughly revised the justification for the stochastic approximation in Section 3.1.2. \nFinally, please note that your comment on the choice of \\mu being an approximation could be a major misunderstanding, which we clarify below.\n\nRegarding your major concerns:\n\nQ1. Regarding the weight-space update (2) (now Eq.(3)):\nA1: Thank you for the comments. Yes, your explanation is correct. We have revised Section 3.1.1 to clarify it. In brief, such an update is easy to implement, relates to ensemble training, and does not impact convergence empirically. Please read the revision for full details.\n\nQ2. Regarding the theoretical justification of (3) (now Eq.(4)):\nA2: Thanks for the comments. We clarify from three points: \n(1) On your concern that marginals could not be matched exactly: we added a paragraph in the `Justification of (4)` part in Sec 3.1.2. 
The point is that even in that case, the average energy is an excellent choice of variational objective, as in practice we only care about the average approximation error of lower-order moments; and in the definition of the averaged energy, \\mu can be specified to incorporate distributional assumptions about the test set.\n(2) On convergence: in Appendix C, we added synthetic experiments specifically verifying that the parametric approximation plus mini-batching does not impact convergence. We also added a convergence plot for a real-world dataset in Appendix A.2.4, which shows our algorithm is fairly stable in practice.\n(3) However, a rigorous theory that simultaneously addresses both approximations could be hard to develop, as our procedure works on the infinite-dimensional Wasserstein space, a Riemannian manifold with a non-Euclidean metric; and optimization (i.e. simulation of gradient flows) on Riemannian manifolds is less well-studied than its Euclidean counterpart. We are not aware of any immediate results that answer this question.\nAlthough a bit heuristic, our method is well-motivated, works well empirically, and has an elegant, easy-to-implement form. We hope you agree that such a procedure could be of value to the community.\n\nQ3. On your belief that we need a uniform \\mu:\nA3: There is some misunderstanding that might be caused by previous typos and over-restrictive conditions in Section 3.1.2. We apologize for the confusion; the text has been revised, and we also clarify this issue below. In fact, the only requirement on \\mu is that\n\n if q(f(x))=p(f(x)|X_{train},Y_{train}) almost everywhere w.r.t. \\mu(x), then q(f) and p(f|X_{train},Y_{train}) define the same stochastic process.\n\nFor example, if the posterior is a GP, a sufficient condition is that B>=2, and a single sample from \\mu consists of samples from a continuous measure supported on the entire X, as well as samples from the training set. According to the condition above, \\mu need not have identically distributed components (recall it is a distribution on X^B, and a sample from \\mu contains B samples in the input space), nor follow the uniform distribution.\nAs mentioned in the second-to-last paragraph of Section 3.1.2, our choice of \\mu is the product measure of a mini-batch from the training set and samples from its kernel density estimate.\n\nQ3.1. On your concern that the training set could be ignored:\nA3.1: As clarified above in our response to Q3, a single sample from \\mu contains both training data and samples from a continuous distribution, so the training data is not ignored.", "Q3.2-i: On your concern that f(x) could collapse to the prior:\nA3.2-i: You are correct that the function-space prior prevents it: while priors used in practice (like the common weak Gaussian prior on NN weights) are flexible, their flexibility is far from letting out-of-sample prediction collapse to the prior, as they all encode smoothness in some sense (see the example in [^1] below). The particle functions in your example are a.s. not continuous, and are not inside the support of the prior. As the support of the posterior must be a subset of that of the prior, the situation you described *will not happen*. You can also see this from the synthetic experiments in Section 5.1.\n[^1]: E.g. for a GP with an RBF kernel, samples from the prior are a.s. continuous; a BNN with truncated priors only represents functions with a bounded Lipschitz constant. 
(It is possible to do inference using non-smooth priors, but they must also encode correlation, and can be analyzed similarly. It seems that the only prior satisfying your description is an i.i.d. noise process, which should not be used in practice anyway.)\n\nQ3.2-ii: A further clarification on overfitting in parametric models:\nA3.2-ii: (1) It is still possible that with a pathological prior (or one that is too weak), prediction based on the full posterior still overfits (see this blog article and the references therein: http://www.nowozin.net/sebastian/blog/do-bayesians-overfit.html). However, this is not a problem that can be solved on the inference side, but rather a problem of specifying the right priors. In other words, if the prior is pathological, a faithful inference procedure should make the user aware of that. \n(2) Also, even for imperfect priors, Bayesian inference guards against overfitting much better than the MAP estimate, and the observation that frequentist inference for NNs overfits does not directly transfer to the Bayesian case. An intuitive explanation is that Bayesian inference performs model averaging, which weights each model point based on its complexity.\n\nQ4. On the definition of kernels in function-space POVI, and our method’s relation to the function-value kernel:\nA4: (1) We apologize for the mistake; we added it back to Section 3.1.2. Please notice that the gradient flows are defined on B-dimensional marginal distributions; thus the kernels in our algorithm are defined on finite-dimensional spaces, and can be easily specified.\n(2) As for the relation between our proposal and the function-value kernel, we have added a new Appendix D, in which we derive the detailed update rules and compare them. We hope it clarifies your concerns.\n\nFinally, for your minor comments, we address them as detailed below:\n\n1. For the adversarial defense experiment, we have added the initial accuracy and log likelihood in Appendix A.3. \n2. We have fixed the typo on Page 5, line 4. Thanks for pointing it out.", "- We revised Section 3 thoroughly. Please notice the change of equation numbering.\n- We added three remarks in Section 3.1.1, discussing the motivation of the parametric update rule (2) (i.e., Eq (3) in the revision); comparing it with weight-space POVI and ensemble training; and noting that it does not impact convergence in synthetic experiments.\n- We revised Section 3.1.2 to clarify a few possible misunderstandings, including the requirements on the sampling distribution \\mu and the specification of the kernel.\n- In Appendix A (experimental details and further results), we added convergence plots and benchmarks for narrower network architectures on UCI datasets in Section A.2.4; and accuracy and log likelihood on clean MNIST and CIFAR-10 data in Section A.3.\n- We added a new appendix, Appendix C, where we evaluated the impact of several approximations in our algorithm.\n- We added a new appendix, Appendix D, where we further clarified the connection and differences between our algorithm, weight-space POVI, and ensemble methods.\n- We revised the language and fixed a few typos, most notably:\n - In Algorithm 1, we fixed a typo where previously we erroneously required sampling from \\mu *and* a batch from the training set. In the corrected version, the training-set batch is *part of* the samples from \\mu.\n - In Appendix A.2.3, we fixed the results of the “protein” dataset in Table 7 (comparison with Ma et al. (2018)). 
The previous result was obtained using an incorrect configuration, due to a scripting error. We apologize for the inconvenience; however, *neither the comparison result on that specific dataset nor the conclusion of the corresponding experiment is influenced*.\n", "This paper considers particle optimization variational inference methods for Bayesian neural networks. To avoid degeneracies which arise when these algorithms are applied to the weight-space posterior, the authors consider applying the approach in function space. A heuristic motivation is given for their algorithm and it seems to have good empirical performance.\n\nI find the paper well-motivated and the suggested algorithm original and interesting. As the authors mention at one point, the derivation is rather heuristic, so much depends on the empirical assessment of their approach. I was wondering if it would be worthwhile to include an architecture search of some kind in the empirical comparisons in the examples? This is because if wider-than-needed hidden layers are used, this will worsen some of the degeneracies of the weight-space posterior, which could make the weight-space algorithms perform worse. Also, the authors use a Gaussian process approximation in part of their algorithm, and wide hidden layers make that approximation more reasonable and may advantage their approach for that reason too. The authors discuss in Appendix B other approaches to improving weight-space POVI. I wonder also if parameter constraints would be helpful for improving the performance of the weight-space methods, such as order constraints on the hidden-layer biases, for example, to remove at least some of the sources of unidentifiability. The authors talk in the introduction about the difficulties of exploring a complex high-dimensional posterior, the curse of dimensionality, and the limitations of current variational families, but only 20 points are used to represent the posterior in the examples. Are many more particles required to obtain good performance in more complex models, and does the approach scale well in terms of its computational requirements in that sense?" ]
[ -1, -1, -1, 7, 7, -1, -1, -1, -1, -1, -1, -1, 7 ]
[ -1, -1, -1, 3, 4, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "ByeUSNVqCX", "HJlsKYptRX", "SJlJ9HiQCQ", "iclr_2019_BkgtDsCcKQ", "iclr_2019_BkgtDsCcKQ", "BJeUUXsQ0X", "Hyx6dqfunQ", "SylRwHjQAm", "rJepoAru2m", "HyloUTy5nQ", "H1lfE7o7Cm", "iclr_2019_BkgtDsCcKQ", "iclr_2019_BkgtDsCcKQ" ]
iclr_2019_BkgzniCqY7
Structured Adversarial Attack: Towards General Implementation and Better Interpretability
When generating adversarial examples to attack deep neural networks (DNNs), the Lp norm of the added perturbation is usually used to measure the similarity between the original image and the adversarial example. However, such adversarial attacks perturbing the raw input space may fail to capture structural information hidden in the input. This work develops a more general attack model, i.e., the structured attack (StrAttack), which explores group sparsity in adversarial perturbations by sliding a mask through images, aiming to extract key spatial structures. An ADMM (alternating direction method of multipliers)-based framework is proposed that can split the original problem into a sequence of analytically solvable subproblems and can be generalized to implement other attacking methods. Strong group sparsity is achieved in adversarial perturbations even with the same level of Lp-norm distortion (p ∈ {1, 2, ∞}) as the state-of-the-art attacks. We demonstrate the effectiveness of StrAttack by extensive experimental results on MNIST, CIFAR-10 and ImageNet. We also show that StrAttack provides better interpretability (i.e., better correspondence with discriminative image regions) through the adversarial saliency map (Papernot et al., 2016b) and the class activation map (Zhou et al., 2016).
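One plausible way to write the kind of objective this abstract describes (a reconstruction for illustration only, not necessarily the paper's exact formulation; here g is an attack loss such as the C&W margin loss, x_0 the original image, \mathcal{G} the set of pixel groups induced by the sliding mask, and \tau, \lambda regularization weights):

```latex
\min_{\delta}\; g(x_0 + \delta)
  \;+\; \tau\,\|\delta\|_p
  \;+\; \lambda \sum_{G \in \mathcal{G}} \|\delta_G\|_2
\quad \text{s.t.} \quad x_0 + \delta \in [0, 1]^n
```

Under this reading, ADMM separates the smooth attack loss from the non-smooth regularizers and the box constraint, so that each subproblem (in particular the group term, whose proximal operator is group-wise soft-thresholding) admits an analytical solution, matching the abstract's claim of analytically solvable subproblems.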
accepted-poster-papers
This paper contributes a novel approach to evaluating the robustness of DNNs, based on structured sparsity to exploit the underlying structure of the image, and introduces a method to solve the resulting optimization problem. The proposed approach is well evaluated, and the authors answered the main concerns of the reviewers.
train
[ "ByxVIQD2yE", "SJgf67ndCX", "BygM1K4c37", "HkeSkuLdRQ", "S1x4tLIOR7", "SyerbE8uCX", "r1gjPN8uRm", "ByxX-z6E67", "B1xz04MWaX", "rkl0Xux-aX", "HJl2S2JA3X" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer" ]
[ "Thanks for incorporating feedback, the additional related work section is helpful and provided better context for this work.", "Thank you for revising your paper, the new version seems clearer to me in terms of the positioning of your work. I have bumped up the numerical score to 6 in my review.", "The paper proposes a method to find adversarial examples in which the changes are localized to small regions of the image. A group-sparsity objective is introduced for this purpose and it is combined with an l_p objective that was used in prior work to define proximity to the original example. ADMM is applied to optimize the defined objective. It is shown that adversarial examples in which all changes are concentrated in just a few regions can be found with the proposed method.\n\nThe paper is clearly written and the results are convincing. But what I am not sure I understand is what the purpose of this research is. Among the 4 contributions listed at the end of the intro, only the last one, Interpretability, seems to have potential in terms of impact. Yet I am not quite sure how “obtained group-sparse adversarial patterns better shed light on the mechanisms of adversarial perturbations”. I think the mechanisms of adversarial perturbations remain as unclear as they were before this paper.\n\nI am not ready to recommend acceptance of this paper, because I think the due effort to explain the motivation for this research and its potential impacts has not been made in this case. \n\nUPD: the discussion and the edits with the authors convinced me that I may have been a bit too strict. I have changed my score from 5 to 6.\n", "We thank all reviewers for their insightful and valuable comments. Our paper has been greatly improved based on these comments. The major modifications are summarized below.\n\na) We have enriched our related work and provided a better motivation for StrAttack (see Introduction).\n\nb) To strengthen our contribution on the effectiveness of StrAttack, we have added experiments to show the attack performance of StrAttack against a robust adversarially trained model [Madry et al. 2018]; see our results in Table 2. Moreover, we have compared the transferability of StrAttack with other attacks on 6 different network models (Table 3).\n\nc) We have added more examples to show the better interpretability of StrAttack, where the found sparse adversarial patterns have a better correspondence with class-specific discriminative regions localized by CAM; see Figure 4.\n\nThanks!\nICLR 2019 Conference Paper689 Authors", "As we discussed earlier, the motivation of our research is to seek a more effective attack, which can be as successful as existing attacks (in terms of achieving the same attack success rate and keeping small L1, L2 and L_infty distortion), but only requires modifying a small subset of pixels. We show that StrAttack is indeed the desired adversarial attack. \n\n\nIn the revised paper, we strengthen the potential impacts of StrAttack from the aspects of a) performance in attacking a robust adversarially trained model, b) attack transferability, and c) interpretability of complex images. \n\n\nFirst, we show the power of StrAttack in attacking the defensive model obtained from robust adversarial training [Madry et al. 2018], which is commonly regarded as the strongest defense on MNIST. As we can see, although StrAttack perturbs far fewer pixels, its attack success rate does not drop. 
This implies that we could perturb fewer, but the ‘right’, pixels (with more interpretable adversarial patterns) without losing attack performance. \n\nSecond, we compare the transferability of StrAttack with that of other attacks. Here the transferability is characterized by the attack success rate of adversarial examples (found by one attack-generation method against a given network model) when transferred to a different network model. We present the transferability of 3 attacks from Inception V3 to Inception V3, Inception V4, ResNet 50, ResNet 152, DenseNet 121 and DenseNet 161. As we can see, StrAttack yields the highest transferability on almost every model. We refer the reviewer to Table 3 for more details. \n\n\nThird, we show more examples to visualize the interpretability of adversarial perturbations on certain complex images. In the ‘pug’-‘street sign’ example of Fig. 4, objects of the original label (pug) and the target label (street sign) exist simultaneously. As we can see, adversarial perturbations generated by StrAttack are perfectly matched to the most discriminative image regions localized by CAM: the adversary shows suppression on the discriminative region of the original label and promotion on the discriminative region of the target label. By contrast, the CW attack is less interpretable due to its high noise visibility (perturbing too many pixels).\n\n", "We thank the reviewer for the positive comments, and answer your specific questions below.\n\na) In the revised version we have added more related work; see Introduction. Our structure-driven attack is motivated by devising a more efficient attack that takes advantage of two attacks built on opposite principles - the C\\&W attack (or \\ell_infty attacks such as I-FGSM) that modifies all pixels, and the one-pixel attack (Su et al., 2017) that only modifies a few pixels. The C\\&W attack can achieve small \\ell_infty perturbations but has to perturb most pixels (large \\ell_0 norm). The one-pixel attack can achieve an extremely small \\ell_0 norm, but at the cost of a much higher \\ell_infty norm and a low attack success rate. Both of the above attack methods lead to higher noise visibility, due to perturbing too many pixels or perturbing a few pixels too much. Motivated by them, we wonder if there exists a more effective attack that can be as successful as existing attacks but only requires modifying a small subset of pixels, and what the resulting sparse adversarial pattern can tell us. To answer these questions, we propose StrAttack, which achieves strong group sparsity without losing attack effectiveness, including both attack success rate and Lp distortion. Furthermore, we show that the resulting sparse adversarial patterns offer great interpretability through the adversarial saliency map (ASM) and the class activation map (CAM).\n\nb) The proposed StrAttack problem formulation cannot be solved using standard optimization solvers, e.g., Adam or the proximal gradient algorithm, due to the presence of non-smooth regularizers and hard constraints. To address this technical challenge, we proposed the ADMM solution that splits the original complex problem into neat subproblems, each of which yields an analytical solution.\n\nc) We investigated the group sparsity by exploring various mask sizes. Clearly, there is a trade-off between the group size and the representation of local regions. A large mask size tends to make StrAttack insensitive to the structure of local regions (the sketch below illustrates the group-wise closed-form step on such mask regions). 
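Here is a minimal sketch of that closed-form step, assuming non-overlapping mask x mask groups and an l2 group norm (our own illustration; the exact StrAttack implementation may differ, e.g. it may use overlapping sliding masks): the group-sparsity subproblem reduces to group-wise soft-thresholding.

```python
import numpy as np

def group_soft_threshold(delta, lam, mask=2):
    """Prox of lam * sum_g ||delta_g||_2 over non-overlapping mask x mask groups.

    Each group's l2 norm is shrunk by lam, and any group whose norm falls
    below lam is zeroed out entirely. Assumes delta has shape (H, W) with
    H and W divisible by the mask size.
    """
    out = delta.copy()
    H, W = delta.shape
    for i in range(0, H, mask):
        for j in range(0, W, mask):
            g = out[i:i + mask, j:j + mask]  # view into out
            norm = np.linalg.norm(g)
            g *= max(0.0, 1.0 - lam / norm) if norm > 0 else 0.0
    return out

# A perturbation with one strong 2x2 region survives; weak regions vanish.
delta = 0.01 * np.ones((4, 4))
delta[0:2, 0:2] = 0.5
print(group_soft_threshold(delta, lam=0.1))
```

Entire mask regions of the perturbation are zeroed out while the few structurally important regions survive; a larger mask trades spatial resolution for stronger grouping, which matches the trade-off just described.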
In the experimental evaluation of this paper, the best mask sizes that we empirically found are 2x2 for MNIST/CIFAR-10 and 13x13 for ImageNet, respectively.\n\nLast but not least, to strengthen the effectiveness and the interpretability of StrAttack, in the revised version we present the potential impacts of StrAttack from a) performance when attacking a robustly adversarially trained model (Table 2), b) attack transferability (Table 3), and c) interpretability of complex images (Figure 4). ", "We thank the reviewer for the positive comments on our work. In addition to technical contributions from the algorithmic perspective, we would like to emphasize that StrAttack identifies (group-wise) sparse adversarial patterns that make attacks successful, but without incurring extra pixel-level perturbation power compared to other existing attacks such as CW. The resulting sparse adversarial pattern also offers a visual explanation through the adversarial saliency map (ASM) and class activation map (CAM). The effectiveness and interpretability of StrAttack reveal the ‘right’ pixels that an attacker should perturb to boost the attack performance. To strengthen this contribution, in the revised version we present the potential impacts of StrAttack from a) performance when attacking a robustly adversarially trained model (Table 2), b) attack transferability (Table 3), and c) interpretability of complex images (Figure 4). \n", "Thank you for the clarifications, in particular for item (a), which better explains why this research is important. I will take a look at the revision when you upload it and I will consider reevaluating your paper. ", "This paper proposes a method for adversarial attacks on DNNs (StrAttack), designed to exploit the underlying structure of the images. Specifically, it incorporates group-sparsity regularization into the generation of adversarial samples and uses an ADMM-based implementation to generate the adversarial perturbations.\n\nThe paper is structured and written well, with clear articulation of technical details. The experiments and reported results are comprehensive, and clearly showcase the efficacy of the proposed solution. I'm not enough of an expert on the subject matter to comment on the novelty of this proposed approach. However, it would help to elaborate more on the related work (section 7) with a clear contrasting of the current method, esp. the use of structural information for adversarial samples - theoretical implications, underlying rationale and, importantly, the benefit over previous lp-norm based approaches.\n\nRegarding group sparsity - it is unclear what the assumed structural constraints are: is the sliding mask expected to be only 2x2 or 13x13 (for MNIST/CIFAR-10 and ImageNet, respectively)? What is the impact of larger/smaller or skewed sizes? Is there sensitivity to image types?\n\n", "We really thank the reviewer for the insightful comments. As a prompt response, we would like to use this opportunity to reiterate and clarify our motivation, contributions and their potential impacts. Meanwhile, we are also preparing a revision to better address the reviewer's comments.\n\na) The first contribution \"Structure-driven attack\" actually indicates the existence of a more stealthy pixel-level adversarial attack under the same norm-bounded threat model, which has not been entirely explored in existing attacks. 
The motivation of our research stems from devising a more efficient attack that takes advantages of two attacks using extremely opposite principles - C\\&W attack (or \\ell_infty attacks such as I-FGSM) that modifies all pixels, and one-pixel attack (Su et al., 2017) that only modifies a few pixels. The C\\&W attack can achieve small \\ell_infty perturbations but has to perturb most pixels (large \\ell_0 norm), while the one-pixel attack can achieve extremely small \\ell_0 norm but with much higher \\ell_infty norm.\n\nBoth attack methods may lead to higher noise visibility due to perturbing too many pixels or perturbing a few pixels too much. Motivated by these attack methods and under the same threat model (e.g., \\ell_infty constraint), we wonder if there exists a more effective attack that can be as successful as existing attacks but only requires to modify a small subset of pixels. We show that StrAttack is indeed the desired adversarial attack. It is also worth mentioning that one pixel attack has much lower attack success rate on ImageNet than CW and ours. \n\nConsequently, the impacts of StrAttack include (i): understanding why the identified regions in the image are vulnerable to adversarial attacks; and (ii) investigating how the identified attack sparse patterns can benefit adversarial attacks/defenses\n\nb) The second and the third contributions are our technical contributions from the algorithmic perspective. The results indicate that powerful attacks could be derived from more advanced optimization techniques. Note that the proposed StrAttack problem formulation cannot be solved using standard optimization solvers, e.g., Adam, or proximal gradient algorithm, etc, due to the presence of non-smooth regularizers and hard constraints. To address this technique challenge, we proposed the ADMM solution which is quite new for finding adversarial perturbations and enjoys the benefit of having an analytical solution at every ADMM subproblem.\n\nc) We thank R2 for acknowledging interpretability as an impactful contribution. The proposed idea indeed helps researchers to better explain and visualize the effect of adversarial perturbations. Our experimental results, e.g., Figure 1 and 3, clearly show that why we could perturb less but `right` pixels (with group-sparse patterns) to fool DNNs. Those `right` pixels are the most sensitive pixels to affect the output of classifiers, checked by adversarial saliency analysis in Sec. 6. They also correspond to the most discriminative region of a class activation map, which demonstrates the interpretability of the proposed structured attack. Also, we would like to clarify that \"The mechanisms of adversarial perturbations \" meant the above findings. Based on the feedback, we now realize that 'mechanisms' might not be the best word to describe our contribution, and thus we will rephrase our claim and make it clearer and more accurate. Note that many adversarial attack methods were proposed in the literature, however, few of them linked interpretability with adversarial examples.\n\n", "The paper proposes a novel approach to generate adversarial examples based on structured sparsity principles. In particular the authors focus on the intuition that adversarial examples in computer vision might benefit from encoding information about the local structure of the data. To this end, lp *group* norms can be used in contrast to standard global lp norms when constraining or penalizing the optimization of the adversarial example. 
The authors propose an optimization strategy to address this problem. The authors evaluate the proposed approach on real data, comparing it against state-of-the-art competitors, which do not leverage the structured sparsity idea.\n\nThe paper is well written and easy to follow. The presentation of the algorithms for i) the non-overlapping and ii) overlapping groups as well as iii) the proposed refinement are clear. The experimental evaluation is interesting and convincing (the further experiments in the supplementary material add value to the overall discussion). \n\nThe main downside of the paper is that the proposed idea essentially consists in replacing the standard \\ell_p norm penalty/constraints with a group-\\ell_p one. While this provides interesting technical questions from the algorithmic perspective, from the point of view of the novelty, the paper does not appear an extremely strong contribution, \n\n\n\n " ]
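The reviews and responses above repeatedly point to the fact that the group-sparsity regularizer admits analytical ADMM subproblem solutions. The sketch below illustrates that key step: block soft-thresholding, the closed-form proximal operator of a sum of per-group l2 norms. It is a hedged illustration rather than the StrAttack implementation; the non-overlapping 2x2 grouping and the threshold value are assumptions made for the example.

```python
import numpy as np

def group_soft_threshold(v, lam, group_size=2):
    """Proximal operator of lam * sum_g ||v_g||_2 over non-overlapping
    group_size x group_size pixel blocks of v. Each block is shrunk toward
    zero and zeroed out entirely when its l2 norm falls below lam; this
    closed form is what makes ADMM attractive for group-sparse objectives."""
    h, w = v.shape
    out = np.zeros_like(v)
    for i in range(0, h, group_size):
        for j in range(0, w, group_size):
            block = v[i:i + group_size, j:j + group_size]
            norm = np.linalg.norm(block)
            if norm > lam:
                out[i:i + group_size, j:j + group_size] = (1 - lam / norm) * block
    return out

# toy usage: a random "perturbation" becomes block-sparse after shrinkage
delta = np.random.randn(8, 8) * 0.1
delta[2:4, 2:4] += 2.0                    # one strong local region survives
sparse_delta = group_soft_threshold(delta, lam=0.5)
print((np.abs(sparse_delta) > 0).mean())  # fraction of surviving pixels
```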
[ -1, -1, 6, -1, -1, -1, -1, -1, 7, -1, 7 ]
[ -1, -1, 2, -1, -1, -1, -1, -1, 2, -1, 3 ]
[ "SyerbE8uCX", "HkeSkuLdRQ", "iclr_2019_BkgzniCqY7", "iclr_2019_BkgzniCqY7", "ByxX-z6E67", "B1xz04MWaX", "HJl2S2JA3X", "rkl0Xux-aX", "iclr_2019_BkgzniCqY7", "BygM1K4c37", "iclr_2019_BkgzniCqY7" ]
iclr_2019_Bkl-43C9FQ
Spherical CNNs on Unstructured Grids
We present an efficient convolution kernel for Convolutional Neural Networks (CNNs) on unstructured grids using parameterized differential operators while focusing on spherical signals such as panorama images or planetary signals. To this end, we replace conventional convolution kernels with linear combinations of differential operators that are weighted by learnable parameters. Differential operators can be efficiently estimated on unstructured grids using one-ring neighbors, and learnable parameters can be optimized through standard back-propagation. As a result, we obtain extremely efficient neural networks that match or outperform state-of-the-art network architectures in terms of performance but with a significantly lower number of network parameters. We evaluate our algorithm in an extensive series of experiments on a variety of computer vision and climate science tasks, including shape classification, climate pattern segmentation, and omnidirectional image semantic segmentation. Overall, we present (1) a novel CNN approach on unstructured grids using parameterized differential operators for spherical signals, and (2) we show that our unique kernel parameterization allows our model to achieve the same or higher accuracy with significantly fewer network parameters.
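The abstract's central construction, convolution as a learnable linear combination of differential operators, can be sketched in a few lines. The PyTorch module below is a hedged illustration, not the authors' released code: the operator matrices Dx, Dy and L are assumed to be precomputed from the mesh (e.g. via one-ring neighbor discretizations), and the layer sizes are arbitrary.

```python
import torch
import torch.nn as nn

class ParamDiffConv(nn.Module):
    """A convolution as a learnable mix of operator responses: identity,
    d/dx, d/dy and Laplacian of the per-vertex signal. Illustrative sketch
    under the assumptions stated above."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.mix = nn.Linear(4 * c_in, c_out)  # learnable combination weights

    def forward(self, f, Dx, Dy, L):
        # f: (V, c_in) vertex signal; Dx, Dy, L: (V, V) precomputed operators
        feats = torch.cat([f, Dx @ f, Dy @ f, L @ f], dim=-1)
        return self.mix(feats)

# toy usage with dense placeholder operators on a 100-vertex "mesh"
V, c = 100, 8
f = torch.randn(V, c)
Dx, Dy, L = (torch.randn(V, V) * 0.01 for _ in range(3))
layer = ParamDiffConv(c, 16)
print(layer(f, Dx, Dy, L).shape)  # torch.Size([100, 16])
```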
accepted-poster-papers
The paper presents a simple and effective convolution kernel for CNNs on spherical data (convolution by a linear combination of differential operators). The proposed method is efficient in the number of parameters and achieves strong classification and segmentation performance in several benchmarks. The paper is generally well written but the authors should clarify the details and address reviewer comments (for example, clarity/notations of equations) in the revision.
test
[ "r1lhQZeq37", "SJeO8PlFAX", "SygAhSlF0Q", "rJgo4QetAQ", "H1gjxtiv6X", "rkxqEcLqsX", "HklBQtGu9m", "B1xhqVz_qQ" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "public" ]
[ "This article introduces a simple yet efficient method that enables deep learning on spherical data (or 3D mesh projected onto a spherical surface), with much less parameters than the popular approaches, and also a good alternative to the regular correlation based models.\n\nInstead of running patches of spherical filters, the authors takes a weighted linear combination of differential operators applied on the data. The method is shown to be effective on Spherical MNIST, ModelNet, Stanford 2D-3D-S and a climate prediction dataset, reaching competitive/state-of-the-art numbers with much less parameters..\n\nLess parameters is nice, but the argument could be strengthened if the authors could also show impressive results in terms of runtime. Typically number of parameters is not a huge issue for today’s deep networks, but for real-time robotics to be equipped with 3D perception, runtime is a much bigger factor.\n\nI also think that the Stanford 2D-3D-S experiments have some issues:\n\nUNet and FCN-8s are good baselines, but other prior work based on spherical convolution are omitted here. E.g. S2CNN and SphereNet. S2CNN has released their code so it should be benchmarked.\n\nAdditionally, comparison to PointNet++ could be a little unfair. \n\ni) What is the number of points used in PointNet++? The author reported 1000 points for ModelNet which is ok for that dataset but definitely too small for indoor scenes. The original paper used 8192 points for ScanNet indoor scenes.\n\nii) Point-based can have data-augmentation by taking subregions of the panoramic scene, where as sphere-based method can only take a single panoramic image. The state-of-the-art method (PointSIFT) achieves ~70 mIOU on this dataset. PointNet(++) can also achieve 40-50 mIOU. Maybe the difference is at using regular image or panoramic images, but the panoramic image is just a combination of regular images so I wouldn’t expect such a large difference.\n\nIn conclusion, this paper proposes a novel deep learning algorithm to handle spherical data based on differential operators. It uses much less parameters and gets impressive results. However, the large scale experiments has some weaknesses. Therefore I recommend weak accept.\n\n----\nSmall issues / questions:\n\n- Notation lacks clarity. What are x, y in Eqn. 1? The formulation of convolution is not very clear to me, but maybe due to my lack of familiarity in this literature.\n\n- In Figure 1, the terminology of “MeshConv” is first introduced, which should come earlier in the text to improve clarity.\n\n- In the article, the author distinguished their method with S2CNN that their method is not rotation invariant. I don’t understand this part. In the architecture diagram, if average pool is applied across all spherical locations, then why is it not rotation invariant?\n\n===\nAfter rebuttal: \nI thank the authors for addressing the comments in my review. It clarifies the questions I had about on the 2D3DS dataset (panorama vs. 3D points). Overall I feel this is a good model and have solid experiments. Therefore, I raise the score to 7.", "Thank you for your thorough review and helpful comments. We will try to address your concerns and suggestions below:\n- Details on MeshConv\nWe have added additional references as well as details for implementation of mesh differential operators in the Appendix. Additionally, we make our code anonymously available for reproducibility. 
Please check the code at the link below:\nhttps://drive.google.com/open?id=1z-hy3NVQtPxNcyDsRz-LqulwqDxNqAMo\n\n- Coordinate-dependence of the method (singularity at the poles).\nThe method is coordinate dependent and coordinate singularity is an actual problem during implementation of the method. However, several tricks can be implemented to mitigate this issue. First, we use a spherical mesh subdivided from a base icosahedron that does not have a vertex that is at the pole. Then, all subsequent vertices will not be exactly residing on the pole, and numerically the singularity will not occur. Second, we always mute the signal at the poles (i.e., pad with zero). In practice this work extremely well, and tends not to affect the results. The major reason for rotating the spherical MNIST to the equator is in fact due to rotational invariance, since projecting the digits to the pole will turn the gradient components into radial and azimuthal ones, rendering the filters rotationally (around upward z axis) equivariant and the overall network invariant to rotations around z axis. We added discussions about its limitations in the revised paper.\n\n- Steerable CNNs\nThank you for the suggestion. We have added the reference to the corresponding section.\n\n- Orientable models with equivariant layers\nIndeed equivariant convolutional operators do not prevent the network from being able to distinguish transformed versions of the same input. As per suggestion, we altered the original S2CNN network to be non-invariant by swapping the final global pooling layers with an average pool only in the gamma dimension (the extra dimension in SO(3) to for preserving equivariance), followed by a flattening operation in the spatial dimension. Furthermore, we added an additional fully-connected layer for enhanced representational power. Testing this network on MNIST dataset, we have the following findings:\n# of params: 162946\nAccuracy: 98.08\nWhich has more parameters and lower accuracy than our proposed model. The experiment results suggest that since these equivariant operators are specifically engineered to preserve equivariance, they tend to not be the most efficient for orientable tasks that do not require equivariance.\nAdditionally, to verify that orientability as been resolved, we compare the per-class accuracy for both the original S2CNN (rot-invariant version) and the modified S2CNN (not rot-invariant version). Below are the comparisons:\n\nDigit Class: 0 1 2 3 4 5 6 7 8 9 \n----------------------------------------------------------------------------------------------\nOriginal S2CNN: 0.99, 0.99, 0.98, 0.98, 0.96, 0.96, 0.98, 0.95, 0.96, 0.86\nModified S2CNN: 0.99, 0.99, 0.98, 0.99, 0.97, 0.99, 0.99, 0.97, 0.97, 0.98\n\nResults show that removing the final pooling layer drastically improves accuracy for the digit “9” due to orientability, but overall lower accuracy compared to our spherical network suggests weaker representational power.\n\n- Visualizations\nWe agree that visualizing the differential operators could be helpful for the reader. We have a visualization of an exemplary signal in Figure 1 that illustrates the differential operators.", "Thanks for your detailed and thorough review of our paper. We will try to address your questions and suggestions below:\n- Runtime\nWe evaluate the runtime for our classification network and compare with the PointNet++ model which is of comparable peak performance. We report these runtimes in Appendix D. 
Our best performing model achieves a 5x speedup compared with PointNet++.\n\n- 2D3DS Baseline (add S2CNN)\nS2CNN and SphereNet were originally designed and evaluated for classification tasks. We corresponded with the authors of S2CNN and extended upon the original S2CNN architecture for semantic segmentation. We include the S2CNN results on 2D3DS dataset in the revised Figure 4. We detailed the modified S2CNN architecture in Appendix E. The best mIoU from the modified S2CNN model is significantly lower than ours (0.2581 vs 0.3829).\n\n- What is the number of points used in PointNet++?\nThe number of points we use is 8192. We are using the same code from PointNet++ for the ScanNet task, where we do the data-augmentation by rotating around the z-axis and take subregions for the learning.\n\n- Difference between pano image and 3D point segmentation\nPointNet++ was initially designed for and tested on point clouds sampled from a 3D model that requires fusing multiple scans from various scan locations. Segmenting a single panorama, which is the setup in our experiment, is a much more challenging yet realistic task for engineering applications. A single view panorama poses multiple additional challenges, such as serious occlusions in the scene, noises in the depth map, and sparsity in the point cloud for objects that are far away from the viewpoint. We believe that all these problems can prevent the point-based method from achieving comparable results as in the original setup using uniformly sampled 3D points.\n\n- Notation lacks clarity\nThank you for pointing out the issue in notation clarity. In this context, x and y refer to the spatial coordinates that correspond to the two spatial dimensions over which the convolution is performed. Eqn 1 through 3 states the fact that since convolution (and cross-correlation) operators are linear, traditional convolution operators can be viewed as linear combinations of the original signal convolved with the basis functions of the kernel.\n\n- The terminology of “MeshConv”\nWe have added the definition of this terminology to the introduction section, before its occurrence in Fig. 1.\n\n- Why is this method not rotationally invariant\nThe method is not considered to be rotation invariant because the convolution operator is coordinate-dependent (depending on how the x-y coordinate vectors are defined on the manifold). Hence, the corresponding features will change due to a rotation, and the final pooled value will be different.\n", "Thank you for your response and constructive feedback! Below are some of our comments in response to your questions and suggestions:\n(1) Analysis of computational cost\nWhile the model computes second order spatial derivative (Laplacian) as a basis for the convolution operator, it ultimately amounts to a linear combination of these basis for the convolution step. Hence, training only involves first order gradients with respect to these weights to train (as opposed to using the Hessian). Generally, training time is difficult to benchmark as it involves many variables (hardware, DL framework etc.). However, we evaluate the inference runtime for our classification network and compare with the PointNet++ model which is of comparable peak performance. We report these runtimes in Appendix D. Our best performing model achieves a 5x speedup compared with PointNet++.\n(2) Intuitive justification\nWe appreciate your feedback. We have added more intuitive explanations to our paper in Sec 1 as well as the captions of Fig. 1. 
\n", "Summary:\nThe paper proposes a novel convolutional kernel for CNN on the unstructured grids (mesh). Contrary to previous works, the proposed method formulates the convolution by a linear combination of differential operators, which is parameterized by kernel weights. Such kernel is then applied on the spherical mesh representation of features, which is appropriate to handle spherical data and makes the computation of differential operators efficient. The proposed method is evaluated on multiple recognition tasks on spherical data (e.g. 3d object classification and omnidirectional semantic segmentation) and demonstrates its advantages over existing methods.\n\nComments/suggestions:\nI think the paper is generally well-written and clearly delivers its key idea/advantages. However, I hope the authors can elaborate the followings:\n\n1) Analysis of computational cost\nIt would be helpful to elaborate more analysis on computational cost. The proposed formulation seems to involve the second-order derivatives in the backpropagation process (due to the first-order derivatives in Eq.(4)), which can be a computational bottleneck. It will be very useful to provide analysis on computational cost together with parameter efficiency study (Figure 3 and 4).\n\n2) Intuitive justification\nIt would be great if the authors provide more intuitive descriptions on Eq.(4) (and possibly elaborate captions of Figure 1); what is the intuition of using differential operators? Why is it useful to deal with unstructured grids? How does it lead to improvement over the existing techniques?\n\nConclusion: \nOverall, I think this paper has solid contributions; the proposed MeshConv operator is simple but effective to handle spherical data; the experiment results demonstrate its advantages over existing methods on broad applications, which are convincing. I think conveying more intuitions on the proposed formulation and providing additional performance analysis will help readers to understand paper better. \n", "The paper presents a new convolution-like operation for parameterized manifolds, and demonstrates its effectiveness on learning problems involving spherical signals. The basic idea is to define the MeshConvolution as a linear combination (with learnable coefficients) of differential operators (identity, gradient, and Laplacian). These operators can be efficiently approximated using the 1-hop neighbourhood of a vertex in the mesh.\n\nIn general I think this is a strong paper, because it presents a simple and intuitive idea, and shows that it works well on a range of different problems. The paper is well written and mostly easy to follow. The appendix contains a wealth of detail on network architectures and training procedures.\n\nWhat is not clear to me is how exactly the differential operators are computed, and how the MeshConvolution layer is implemented. The authors write that \"differential operators can be efficiently computed using Finite Element basis, or derived by Discrete Exterior Calculus\", but no references or further detail is provided. The explanation of the derivative computation is:\n\"The first derivative can be obtained by first computing the per-face gradients, and then using area-weighted average to obtain per-vertex gradients. The dot product between the per-vertex gradient value and the corresponding x and y vector fields are then computed to acquire grad_x F and grad_y F.\"\nWhat are per-face gradients and how are they computed? Is the signal sampled on vertices or on faces? 
What area is used for weighting? What is the exact formula? What vector fields are you referring to? (I presume these are the coordinate vector fields). In eq. 5, what are F_i and F_j? What is the intuition behind the cotangent formula (eq. 5), and where can I read more? etc.\n\nPlease provide a lot more detail here, delegating parts to an appendix if necessary. Providing code would be very helpful as well.\n\nA second (minor) concern I have is to do with the coordinate-dependence of the method. Because the MeshConvolution is defined in terms of (lat / lon) coordinates in a non-invariant manner, and the sphere does not admit a global chart, the method will have a singularity at the poles. This is confirmed by the fact that in the MNIST experiment, digits are rotated to the equator \"to prevent coordinate singularity at the poles\". I think that for many applications, this is not a serious problem, but it would still be nice to be transparent and mention this as a limitation of the method when comparing to related work.\n\nIn \"Steerable CNNs\", Cohen & Welling also used a linear combination of basis kernels, so this could be mentioned in the related work under \"Reparameterized Convolutional Kernel\".\n\nTo get a feel for the differential operators, it may be helpful to show the impulse response (at different positions on the sphere if it matters).\n\nIn experiment 4.1 as well as in the introduction, it is claimed that invariant/equivariant models cannot distinguish rotated versions of the same input, such as a 6 and a 9. Although indeed an invariant model cannot, equivariant layers do preserve the ability to discriminate transformed versions of the same input, by e.g. representing a 9 as an upside-down 6. So by replacing the final invariant pooling layer and instead using a fully connected one, it should be possible to deal with this issue in such a network. This should be mentioned in the text, and could be evaluated experimentally.\n\nIn my review I have listed several areas for improvement, but as mentioned, overall I think this is a solid paper.", "Thank you for your feedback and your interest in our paper! We would like to clarify our wording of this statement. Admittedly various current equivariant architectures can be made into non-equivariant counterparts with additional enhancements such as additional feature layers. However such enhancements would render the equivariant architectures into non-equivariant ones, therefore our general statement that \"assumed orientation information is crucial to the predictive capability of the network (for a range of problems)\" is nevertheless accurate. \n\nAlso, as a side note only for further discussion, equivariant architectures have a particular construct to maintain equivariance (such as adding an additional dimension for SO(3) layers in S2CNN), and tend not to be most efficient for orientable tasks. ", "In the introduction you say \"[...] assumed orientation information is crucial to the predictive capability of the network [...] omnidirectional images, where images are naturally oriented by gravity [...]\".\nLet me inform you that there is a simple trick to solve this problem: add an extra input feature map that indicates the orientation of the gravitational field.\n\nIndeed if the symmetry completely broken like for the example of MNIST then you better have to give up the equivariant architecture. 
But for tasks when the symmetry is only partially broken like planets oriented by their axis of rotation then equivariant architectures are still relevant and the axis of rotation can be given as part of the input." ]
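One reviewer above asks about the intuition behind the cotangent formula (eq. 5). As a hedged illustration of the standard construction such formulas refer to, a textbook cotangent-weighted mesh Laplacian rather than necessarily the paper's exact discretization, the sketch below assembles the operator from per-face angle cotangents:

```python
import numpy as np

def cotangent_laplacian(verts, faces):
    """Cotangent-weighted Laplacian of a triangle mesh: each interior edge
    (i, j) receives weight (cot a + cot b) / 2, where a and b are the angles
    opposite the edge; the diagonal is the negative row sum.
    verts: (V, 3) float array, faces: (F, 3) int array."""
    Lap = np.zeros((len(verts), len(verts)))
    for tri in faces:
        for k in range(3):
            i, j, o = tri[k], tri[(k + 1) % 3], tri[(k + 2) % 3]
            # cotangent of the angle at vertex o, opposite edge (i, j)
            u, v = verts[i] - verts[o], verts[j] - verts[o]
            cot = np.dot(u, v) / (np.linalg.norm(np.cross(u, v)) + 1e-12)
            Lap[i, j] += 0.5 * cot
            Lap[j, i] += 0.5 * cot
    np.fill_diagonal(Lap, -Lap.sum(axis=1))
    return Lap  # (Lap @ f)[i] approximates the Laplace-Beltrami of f at i

verts = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]])
faces = np.array([[0, 1, 2], [1, 3, 2]])
print(cotangent_laplacian(verts, faces).sum(axis=1))  # rows sum to ~0
```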
[ 7, -1, -1, -1, 6, 7, -1, -1 ]
[ 3, -1, -1, -1, 3, 5, -1, -1 ]
[ "iclr_2019_Bkl-43C9FQ", "rkxqEcLqsX", "r1lhQZeq37", "H1gjxtiv6X", "iclr_2019_Bkl-43C9FQ", "iclr_2019_Bkl-43C9FQ", "B1xhqVz_qQ", "iclr_2019_Bkl-43C9FQ" ]
iclr_2019_BklCusRct7
Optimal Transport Maps For Distribution Preserving Operations on Latent Spaces of Generative Models
Generative models such as Variational Auto Encoders (VAEs) and Generative Adversarial Networks (GANs) are typically trained for a fixed prior distribution in the latent space, such as uniform or Gaussian. After a trained model is obtained, one can sample the Generator in various forms for exploration and understanding, such as interpolating between two samples, sampling in the vicinity of a sample or exploring differences between a pair of samples applied to a third sample. However, the latent space operations commonly used in the literature so far induce a distribution mismatch between the resulting outputs and the prior distribution the model was trained on. Previous works have attempted to reduce this mismatch with heuristic modification to the operations or by changing the latent distribution and re-training models. In this paper, we propose a framework for modifying the latent space operations such that the distribution mismatch is fully eliminated. Our approach is based on optimal transport maps, which adapt the latent space operations such that they fully match the prior distribution, while minimally modifying the original operation. Our matched operations are readily obtained for the commonly used operations and distributions and require no adjustment to the training procedure.
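For the common special case of a standard-normal prior with i.i.d. components, the matched operation described in the abstract reduces to a simple per-component rescaling of the linear interpolant. The snippet below is a minimal sketch of that special case (illustrative only; the dimension and sample counts are arbitrary):

```python
import numpy as np

def matched_linear_interp(z1, z2, t):
    """Distribution-matched linear interpolation for an N(0, I) prior with
    i.i.d. components: here the monotone transport map reduces to dividing
    by the standard deviation of the naive interpolant."""
    y = t * z1 + (1 - t) * z2              # ~ N(0, (t^2 + (1-t)^2) I)
    sigma = np.sqrt(t**2 + (1 - t)**2)
    return y / sigma                       # ~ N(0, I) again

d, n = 100, 10000
z1, z2 = np.random.randn(n, d), np.random.randn(n, d)
mid = matched_linear_interp(z1, z2, 0.5)
print(np.linalg.norm(0.5 * (z1 + z2), axis=1).mean(),  # ~ sqrt(d/2), ~7.1
      np.linalg.norm(mid, axis=1).mean())              # ~ sqrt(d),   ~10
```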
accepted-poster-papers
This is a well-written paper that shows how to use optimal transport to perform smooth interpolation, between two random vectors sampled from the prior distribution of the latent space of a deep generative model. By encouraging the marginal of the interpolated vector to match the prior distribution, these interpolated distribution-preserving random vectors in the latent space are shown to result in better image interpolation quality for GANs. The problem is of interest to the community and the resulted solutions are simple to implement. As pointed out by Reviewer 1, the paper could be made clearly more convincing by showing that these distribution preservation operations also help perform interpolation in the latent space of VAEs, and the AC strongly encourages the authors to add these results if possible. The AC appreciates that the authors have added experiments to satisfactorily address his/her concern: "Suppose z_1,z_2 are independent, and drawn from N(\mu,\Sigma), then t z_1 + (1-t)z_2 ~ N(\mu, (t^2+(1-t)^2)\Sigma). If one lets y | z_1, z_2 ~ N(t z_1 + (1-t)z_2, (1-t^2-(1-t)^2)\Sigma) as the latent space interpolation, then marginally we have y ~ N(\mu, \Sigma). This is an extremely simple and fast procedure to make sure that the latent space interpolation y is highly related to the linear interpolation t z_1 + (1-t)z_2 but also satisfies y ~ N(\mu, \Sigma)." The AC strongly encourages the authors to add these new results into their revision, and highlight "smooth interpolation" as an important characteristic in addition to "distribution preserving." A potential suggestion is changing "Distribution Preserving Operations" in the title to "Distribution Preserving Smooth Operations."
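The AC's construction above is easy to verify numerically. The sketch below implements it for Sigma = I (an illustrative assumption): noise with variance 1 - t^2 - (1-t)^2 = 2t(1-t) is added to the linear interpolant so that the marginal is N(0, I) again.

```python
import numpy as np

def noisy_matched_interp(z1, z2, t, rng=np.random):
    """Stochastic interpolation suggested in the meta-review, for Sigma = I:
    just enough Gaussian noise is added to the linear interpolant that the
    marginal matches the N(0, I) prior."""
    mean = t * z1 + (1 - t) * z2
    var = 1.0 - t**2 - (1 - t)**2          # = 2 t (1 - t) >= 0 for t in [0, 1]
    return mean + np.sqrt(var) * rng.standard_normal(z1.shape)

z1, z2 = np.random.randn(10000, 100), np.random.randn(10000, 100)
y = noisy_matched_interp(z1, z2, 0.5)
print(y.var())  # ~ 1.0, i.e. the N(0, I) prior is preserved
```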
val
[ "HklrDZXoCX", "SylZU8FPnQ", "B1e-bBgiCX", "HJlwcxljR7", "SklDZf7C2Q", "rJe1NJN6hQ" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for the feedback.\n\nWe argue that just because latent space operations do not help with GAN training, it does not mean they are not useful. Just as the reviewer suggests, they provide insights into how the trained generator works. For example, interpolations are one of the most intuitive ways to illustrate whether a model is capable of synthesizing new images (instead of just memorizing) and to visualize the learned latent space.\nMany of the most impactful papers on generative models have employed such methods, such as:\n\nAuto-Encoding Variational Bayes (Kingma and Welling, 2013)\nGenerative Adversarial Networks (Goodfellow et al. 2014)\nUnsupervised representation learning with deep convolutional generative adversarial networks (Radford et al. 2015)\nProgressive Growing of GANs for Improved Quality, Stability, and Variation (Karras et al. 2018) ( https://www.youtube.com/watch?v=G06dEcZ-QTg&feature=youtu.be&t=77 )\nLarge Scale Gan Training For High Fidelity Natural Image Synthesis (Brock et al, 2018)\n\nOur framework is agnostic to the choice of latent space (as long at is has i.i.d. components), if you change the latent space our \"matched\" operations just need to be recomputed. This choice might often be based on convenience (or convention), but for VAEs it is also important that the KL-divergence is readily computed between the encoded distribution and the latent prior (which typically motivates the choice of a gaussian latent space).\n\nWe are confused by the claim that our proposed approach is \"unsurprising\" and that \"it serves more like an explanation or validation, rather than the motivation\". Could the reviewer elaborate on this? No prior works approached this problem from this direction and our proposed approach is not at all obvious.\n\nThe point of Table 2. is not to serve as some kind of \"benchmark\" for interpolation methods, but to illustrate that the distribution mismatch of the original operations results in significant drops in Inception Scores, and that our method fully eliminates this drop (obtaining the same scores as when generating random samples). We are confused by the claim that the the differences are \"not significant\" given that the drop for the original linear operations is up to 29% whereas we fully recover the performance of the original model (when sampled randomly).\nWe discussed and compared visually with SLERP (a heuristic designed for 2-point interpolation) in Sec. 4.2, and explicitly stated that we do not provide a \"better\" method in terms of visual quality, but stress that the goal of this work is to construct operations in a principled manner, whose samples are consistent with the generative model.\nWe computed the Inception Scores for SLERP for 2-point interpolation, and the results are consistent with this, where it gives similar scores as our proposed approach: 7.89 +- 0.09, 3.68 +- 0.09, 3.90 +- 0.11 and 2.04 +- 0.04 for CIFAR-10, LLD-icon, LSUN and CelebA respectively. Furthermore, we note that 2-point interpolation is just one of the many operations our framework can be applied to.", "The paper addresses the latent space distribution mismatch in VAEs and GANs. The authors try to solve the issue by optimal transport theory and the proposed method on the latent space yields better quality in the generated samples.\n\nTo me, the motivation is not very strong. In DCGAN, amazingly, latent space linear operations can carry over to the generated images. 
But it’s not something people are usually concerned with in GANs. I understand that latent space operations can provide insights into how the trained generator works. But how can it improve the actual GAN training? Choosing a Gaussian or uniform distribution for the latent variable is mainly for ease of computation and I am not sure if the motivation to match the distributions is very strong in GAN applications. Perhaps it is more important in the context of VAEs.\n\nAt first glance, the proposed form of transformation is not surprising. Though optimal transport is a very powerful theoretical tool, it serves more like an explanation or validation, rather than the motivation. I felt the theory part could be simpler. \n\nIn the quantitative comparisons with other methods, all simulations seem to be in the context of GANs. The difference in the 2-point cases (table 2) is not significant and the authors only compare with linear interpolation but not SLERP. I would like to see more quantitative comparisons with other methods and also some empirical studies in the context of VAEs. ", "We thank the reviewer for the feedback.\n\nRegarding the data-generation process: we do use a model that is only trained once to generate new data.\nHowever, we observe (both theoretically and experimentally) the opposite of what you claim: even though you train on a specific distribution (say uniform in the 100 dimensional hypercube), it matters from where in the support you sample. Of course, if you use the model as a \"physical process\" and sample new data with the same distribution as you used during training, you do not have a problem. However, once you start sampling the distribution in a different way (e.g. by interpolating between samples), even though you remain in the support of your distribution you start getting \"abnormal\" latent codes which your model performs poorly on. We urge the reviewer to carefully look at Figure 2. in the paper, which illustrates how different (geometrically) the interpolated samples can be compared to the endpoints, due to the high dimensionality of the space.\n\nRegarding the Inception Score, it was proposed by Salimans et al. ( https://arxiv.org/pdf/1606.03498.pdf ), and we will describe it better in the paper.\nWe do not understand your statement that \"when we use interpolated values of the training input data to generate images, the Inception Score is expected to decrease, compared to that evaluated on the training data\".\nWe are not interpolating training input data, we are interpolating random latent points during evaluation, the exact same latent points that are used when evaluating the model in its standard setting. We do not obtain improved Inception Scores compared to the original model (when sampled randomly); rather, we avoid the drop in performance that happens when you linearly interpolate.", "We thank the reviewer for the feedback!\n\nRegarding the evaluation of analogy interpolations, we did not do this due to the added complexity involved. In particular, there is no standardized way of performing analogies in terms of how to select the examples and the difference vectors. It can be done over averages of groups of samples or over individual samples. In both cases, how the samples are produced (e.g. manually selecting them or using a conditional GAN) would also need to be taken into account. 
Nonetheless, we think it would be interesting in the future to explore the application of our framework to analogies - and we see nothing that prevents its use in principle.\n\nRegarding the effect of the transformation on the operation: we agree that this is a valid concern. In our framework, we take the perspective that the desired output distribution is the same as the original latent distribution, and search for the minimal perturbation (in l1 distance). While a natural approach, the required (minimal) perturbation could still be large. To assess this, we intend to add to the paper the average effect of the perturbation for the experiments in Table 2 (i.e. the l1 distance between the original and modified code samples). This way, we can quantify how much the operations are impacted by the adjustment.\n\nWe agree that experiments on VAEs would complement the paper well, but we expect the results to be the same. In particular, since our approach fully eliminates the distribution mismatch, we are guaranteed to get the same sample quality as from random samples. The only question remaining (which is still interesting) is whether VAEs are more or less sensitive to the mismatch when using the unmodified operations.", "Noticing that widely used latent code interpolations for exploring the generative capabilities of VAEs and GANs have distribution mismatch problems, this paper proposes to utilize a monotone transport map to exactly eliminate the distribution mismatch between modified interpolated codes and a prior distribution, assuming i.i.d. code components and an L1 code distance. More precisely, a transformation of the latent space operation is learnt with the objective that the distribution of the transformed variable matches the prior distribution used in training the generative models. Optimal transport is used to minimize the discrepancy between the two distributions. By restricting the class of cost functions used in the optimal transport formulation, the solution to the optimal transport problem (and hence the transformation function) has been shown to take a simple form (closed form in cases where the cdf has an analytical form). Experiments on the CIFAR-10, LLD-icon, LSUN and CelebA datasets show that the minimally modified interpolated codes for several different interpolations produce samples with higher Inception Scores and better visual effects under an improved Wasserstein GAN than the original interpolated codes.\n\nThis paper is well written, the studied problem is highly important, and the approach presented has potentially wide applications. \n\nHowever, there are some concerns about the experimental evaluations:\n\n1. Although the quantitative evaluations for 2-point and 4-point interpolations are important, it is hard to assess these interpolations in a semantically meaningful way. Extensive quantitative (FID and IS) and qualitative evaluations should be conducted for analogy interpolations. For example, adding glasses, adding mustache, and many others. It is much easier to assess the quality of the generated images from the minimally modified interpolated code for this category in a meaningful way.\n\n2. Another concern is how big an effect the transformation function will have on the latent space operations. For example, a linear interpolation is no longer linear after getting transformed. So, are there transformations that drastically transform the original latent space operations? 
In that case, will the transformed variable make any sense with respect to the original latent space operations? Extensive experiments for analogy interpolations are required to answer these questions.\n\n3. Experiments have been shown only on GAN architectures, however, the framework can be easily extended to VAEs. Experiments on VAEs will be informative.\n\nMinor:\n\nSection 1.1, in the second paragraph, (SLERP) should be moved a correct position.\n\nFigure 2: it's better to use a different color for midpoint linear other than blue\n\nProblem 1, f* ---> f*:", "This paper considers the issue of distribution mismatch between the input data used for training generative models and the new data for new instance generation. Given a sample operation, the authors propose to use the so-called optimal transport to map the distribution of the new data to that of the input data that were used training. The optimal transport is essentially a monotonic transformation as the composite of the inverse of the target distribution and the source distribution.\n\nThe paper is in general well written. However, I am concerned with two issues here, which are related to the motivation and performance evaluation, respectively. First, the authors didn't make it clear what data generation of the trained generative model suffers from the distribution mismatch issue, although there was some discussion on this in the literature, as the authors mentioned. To me, once the generative model is successfully trained, it is something like a physical process, and new data, which are contained in the support of the training data, can always be used as input to generate new data. (Personally, I think this is very different from covariate shift correction in domain adaptation, in which the correction is necessary because simpler models, instead of flexible, nonparametric ones, are used to make prediction.) Second, the authors used the Inception Score for performance evaluation. Please give this score in the paper and make its definition clear. To me, it is not surprising at all that the proposed method had a better Inception Score: roughly speaking, when we use interpolated values of the training input data to generate images, the Inception Score is expected to decrease, compared to that evaluated on the training data. Intuitively, a very high Inception Score may indicate that we are not trying to generalize, but just memorize the training input data. An explanation about this point would be highly appreciated." ]
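The review above summarizes the transport map as the composite of the target inverse CDF with the source CDF. Below is a minimal per-component sketch of that composition, with the source CDF estimated empirically from samples; it is illustrative only, and assumes i.i.d. components and an N(0, 1) target, not the paper's exact procedure.

```python
import numpy as np
from scipy.stats import norm

def empirical_transport_to_gaussian(samples):
    """Monotone map f = F_target^{-1} o F_source for one component, with
    F_source estimated from samples of the operation's output and the
    target taken to be N(0, 1)."""
    x = np.sort(samples)
    ranks = (np.arange(1, len(x) + 1) - 0.5) / len(x)  # empirical CDF values
    def f(v):
        u = np.interp(v, x, ranks)    # F_source(v), clamped into (0, 1)
        return norm.ppf(u)            # F_target^{-1}(u)
    return f

# usage: midpoints of two standard normals have std ~0.71; map them back
src = 0.5 * (np.random.randn(100000) + np.random.randn(100000))
mapped = empirical_transport_to_gaussian(src)(src)
print(src.std(), mapped.std())  # ~0.71 vs ~1.0
```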
[ -1, 5, -1, -1, 7, 5 ]
[ -1, 3, -1, -1, 5, 3 ]
[ "SylZU8FPnQ", "iclr_2019_BklCusRct7", "rJe1NJN6hQ", "SklDZf7C2Q", "iclr_2019_BklCusRct7", "iclr_2019_BklCusRct7" ]
iclr_2019_BklHpjCqKm
Deep Lagrangian Networks: Using Physics as Model Prior for Deep Learning
Deep learning has achieved astonishing results on many tasks with large amounts of data and generalization within the proximity of training data. For many important real-world applications, these requirements are unfeasible and additional prior knowledge on the task domain is required to overcome the resulting problems. In particular, learning physics models for model-based control requires robust extrapolation from fewer samples – often collected online in real-time – and model errors may lead to drastic damages of the system. Directly incorporating physical insight has enabled us to obtain a novel deep model learning approach that extrapolates well while requiring fewer samples. As a first example, we propose Deep Lagrangian Networks (DeLaN) as a deep network structure upon which Lagrangian Mechanics have been imposed. DeLaN can learn the equations of motion of a mechanical system (i.e., system dynamics) with a deep network efficiently while ensuring physical plausibility. The resulting DeLaN network performs very well at robot tracking control. The proposed method did not only outperform previous model learning approaches at learning speed but exhibits substantially improved and more robust extrapolation to novel trajectories and learns online in real-time.
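One concrete ingredient behind the "physical plausibility" claimed in the abstract is a positive-definite inertia matrix. A common way to enforce this, sketched below as a hedged illustration rather than the authors' implementation, is to predict a lower-triangular Cholesky factor with a positive diagonal; the hidden size and softplus offset are illustrative assumptions.

```python
import torch
import torch.nn as nn

class InertiaNet(nn.Module):
    """Predicts L(q) lower-triangular with positive diagonal so that
    H(q) = L L^T is positive definite by construction."""
    def __init__(self, n_dof, hidden=64):
        super().__init__()
        self.n = n_dof
        self.net = nn.Sequential(nn.Linear(n_dof, hidden), nn.Tanh(),
                                 nn.Linear(hidden, n_dof * (n_dof + 1) // 2))

    def forward(self, q):
        out, n = self.net(q), self.n
        diag = nn.functional.softplus(out[:, :n]) + 1e-3  # strictly positive
        L = q.new_zeros(q.shape[0], n, n)
        idx = torch.tril_indices(n, n, offset=-1)         # strictly lower part
        L[:, idx[0], idx[1]] = out[:, n:]
        L = L + torch.diag_embed(diag)
        return L @ L.transpose(1, 2)                      # H(q), SPD

H = InertiaNet(n_dof=7)(torch.randn(32, 7))
print(torch.linalg.eigvalsh(H).min() > 0)  # tensor(True)
```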
accepted-poster-papers
The paper looks at a novel form of physics-constrained system identification for a multi-link robot, although it could also be applied more generally. The contributions is in many simple; this is seen in a good light (R1, R3) or more modestly (R2). R3 notes surprise that this hasn't been done before. Results are demonstrated on a simualted 2-dof robot and real Barrett WAM arm, better than a pure neural network modeling approach, PID control, or an analytic model. Some aspects of the writing needed to be addressed, i.e., PDE vs ODE notations. The point of biggest concern is related to positioning the work relative to other system-identification literature, where there has been an abundance of work in the robotics and control literature. There is no final consensus on this point for R3; R3 did not receive the email notification of the author's detailed reply, and notes that the author has clarified some respects, but still has concerns, and did not have time to further provide feedback on short notice. In balance, the AC believes that this kind of constrained learning of models is underexplored, and notes that the reviewers (who have considerable shared expertise in robotics-related work) believe that this is a step in the right direction and that it is surprising this type of approach has not been investigated yet. The authors have further reconciled their work with earlier sys-ID work, and can further describe how their work is situated with respect to prior art in sys-ID (as they do in their discussion comments). The AC recommends that: (a) the abstract explicitly mention "system identification" as a relevant context for the work in this paper, given that the ML audience should be (or can be) made aware of this terminology; and (b) push more of the math related to the development of the necessary derivatives to an appendix, given that the particular use of the derivations seems to be more in support of obtaining the performance necessary for online use, rather than something that cannot be accomplished with autodiff.
train
[ "HkejIJNp14", "HJe9uxpo1N", "H1lsBkAKJE", "HJg1lShYk4", "SygM8NcFRm", "BJgEjeIxR7", "SJe1dTrlCm", "S1xsD_SxA7", "BJlOB02JRm", "SJxMlEq-pm", "H1exdKrp27", "S1xd1wOo2X" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Once we can update the paper, we will make this statement clearer and include the modelling as flexible joint, i.e., as two joint coupled by a massless spring. Furthermore, we will also include that this is not possible with the Barrett WAM as one cannot sense the motor positions. \n\nThanks for bringing this to our attention.", "Neither the comments of the other reviewers nor the response of the authors gives me reason to change my evaluation.\n\nThe paper is a small but interesting idea step towards implementing a physics-prior in model learning. Future work should focus on implementing this approach for more complex system, and find out how to scale this approach.\n\nSlight remark: as pointed out in my earlier review, the cable-systems do not violate the physics prior. While I understand that it was not possible to run the suggested new experiments, the error in the text should be corrected. ", "I've discussed this paper in a reading group with colleagues (without mentioning that I was reviewing it) to get some more opinions and to discover potential flaws. \nThe general sentiment was that this method can be difficult to apply in practice, because it has stringent requirements that can be hard to meet with real, systems (e.g. a legged robot). The results show only minor improvements over PD controllers and inverse dynamics controllers, however this might be due to the simplicity of the experiments (2D robot arm). \nThat being said, the paper is certainly a step in the right direction and I'm in favor of accepting this paper. The method is sound and simple and the authors present hardware and simulation results. It's a simple framework for others to build upon.", "We are reaching the end of the discussion period.\nThere remain some mixed opinions on the paper.\n\nThe authors have provided detailed replies.\nAny further thoughts from the reviewers, in response to those?\n\nStating pros + cons and summarizing any changes in opinion would be greatly appreciated.\n\nWe acknowledge that reviewer & author time is limited.\n-- area chair", "Dear Reviewer 3, \n\nwe have added an offline comparison to the Appendix. \"Appendix A Offline Benchmarks\" compares the performance of DeLaN to the system identification approach introduced by Atkeson et. al. [1], a feed-forward neural network and the recursive Newton Euler algorithm using an analytic model. For this comparison the models as trained offline and evaluated using the mean squared error (MSE) on the training and test set. \n\nWe added this comparison to the Appendix as we think that the tracking error computed using online learning is the relevant performance indicator and not the offline MSE. We are currently running the online experiments and we will add the results of the system identification approach to the paper as soon as the results become available. \n\nWe would be very happy, if you could have another look at these results and let us know how we can further improve the paper.\n\n[1] Atkeson, C. G., An, C. H., & Hollerbach, J. M., 1986. Estimation of inertial parameters of manipulator loads and links. The International Journal of Robotics Research, 5(3), 101-119.\n", "We thank the reviewer for the extensive evaluation. We have updated the paper to precisely differentiate between PDE and ODE and we updated the related work section to explain weaknesses of previous approaches and highlight the differences to existing model learning / system identification (SI) approaches. 
\n\nIn addition, we want to clarify the brought-up points below. If you have further questions, please feel free to ask. \n\n1)\nSI as described in the textbooks or the survey by Wu et. al. [1] - which is btw. missing key references to state-of-the-art methods such as [2, 3] - is non-trivial and hard for real robots. Our lab has significant experience performing model learning on several robot arms, legged robots and robotic hands. However, using state of the art SI [2,3], we learned dynamics parameters, that did NOT outperform the analytical model of the WAM. Therefore, we only use the analytical model as baseline within the paper. Furthermore, we did evaluate our approach against standard black-box SI methods, as the feed-forward neural network is a standard SI technique, which is also mentioned by Wu. et. al. [1].\n\nIn addition, we disagree with your statement, that we did not put our work in the proper context. We related our approach to the extensive research covering model learning. Model learning is much broader than SI, as SI is commonly used to refer to model learning with known basis functions. Therefore, we do not limit our comparison to SI but provide a wider context with model learning. Furthermore, the classic SI described by Atkeson et. al. [4] has many limitations (e.g. not applicable to closed-loop kinematics) and will most likely not infer the actual dynamics parameters. As pointed out by Ting et. al. [2] and Nakanishi et. al. [3], the inferred parameters are not guaranteed to yield a positive definite inertia matrix or satisfy the parallel axis theorem - both aspects are ignored within the proposed survey [1] -. In contrast DeLan is guaranteed to yield a physical plausible model, can be applied to any kinematic structure and does not require any knowledge about kinematics.\n\nI hope we could clarify the problems of standard SI and the consequences for real robot models. Furthermore, we are working to provide empirical data. However, we need to re-implement the features derived by the Newton-Euler formulation as our underlying robotics libraries changed and these features require the computation for all transformations, Jacobians along the complete kinematic chain. If you are aware of a public implementation using URDFs as robot descriptors, please let us know.\n\n2) \nFirst, we think that reporting the derivatives is good scientific practice and second the analytic computation of the derivatives is necessary for the real-time application. The usage of automatic differentiation in PyTorch does not allow the computation of the feedforward torque with 200Hz. As pointed out in these discussions (https://discuss.pytorch.org/t/how-to-compute-jacobian-matrix-in-pytorch/14968/7, https://stackoverflow.com/questions/43451125/pytorch-what-are-the-gradient-arguments/47026836) the computation of the partial derivatives w.r.t. to network input does not scale well to high-dimensions. If you would prefer to have these derivations within the Appendix, we can also put the derivations within the Appendix.\n\n3) \nSorry for being imprecise with the PDE notation. We have updated the paper to be more precise when the equations are referring to a PDE or an ODE. \n\nWe also want to point out that Eq. 4 is NOT \"just the standard manipulator equations\", Eq. 4 applies to any non-relativistic multi-particle system, which can be described with holonomic constraints. 
Therefore, the Lagrangian Mechanics formalism is applicable for closed-loop kinematic chains, where the standard Newton-Euler approaches fail. Furthermore, most literature related to manipulator equations ignores the functional dependency between C and H, while DeLan explicitly models this functional dependency. We have updated the description to clarify the differences.\n\n[1]Wu, J., Wang, J. and You, Z., 2010. An overview of dynamic parameter identification of robots. Robotics and computer-integrated manufacturing, 26(5), pp.414-419.\n\n[2] Ting, J. A., Mistry, M., Peters, J., Schaal, S., & Nakanishi, J., 2006. A Bayesian Approach to Nonlinear Parameter Identification for Rigid Body Dynamics. In Robotics: Science and Systems, pp. 32-39.\n\n[3] Nakanishi, J., Cory, R., Mistry, M., Peters, J. and Schaal, S., 2008. Operational space control: A theoretical and empirical comparison. The International Journal of Robotics Research, 27(6), pp.737-757.\n\n[4] Atkeson, C. G., An, C. H., & Hollerbach, J. M., 1986. Estimation of inertial parameters of manipulator loads and links. The International Journal of Robotics Research, 5(3), 101-119.", "Thank you for your extensive review. Your question regarding closed-loop kinematics chains sparked interesting discussions yielding additional advantages of the approach. We have fixed the issues you mentioned in the figures.\n\n1) \nYes, there are no constraints on the decomposition of the torque. This decomposition is unsupervised and could yield degenerate solutions. From our experience, degenerate solutions are learned if one of the components - either H or g - dominates during initialisation. Tuning the hyperparameters for the initialization, i.e., the variance of the gaussian distribution initializing the weights, one achieves a good decomposition into g and H. \n\nTo incorporate external forces, one has two options. First, conservative forces, e.g., joints coupled by springs, can be added to V, i.e., V = V_g + V_p. If the external forces are non-conservative, e.g., contact forces, one must decompose \\tau. Commonly \\tau is decomposed into \\tau = \\tau_{friction} + \\tau_{actuator} + \\tau_{external}. The external forces must be projected to the generalized coordinates using \\tau_{external} = J_p^{T} f, where f are the external forces acting on point p and J_p the jacobian. \n\n\n2) \nThis depends on the exact definition of partial observability (PO):\n\n- If one interprets PO as observing the state with noise and no direct sensing of the accelerations, DeLan can learn the dynamics using noisy observations and approximated accelerations using finite differences. \n\n- If one interprets PO as missing sensor measurements of a single generalized coordinate, DeLan will not be able of the learning the dynamics. Furthermore, such partial observability would violate the underlying assumption of Lagrangian Mechanics as the system input does not represent generalized coordinates. \n\n- If one interprets PO as an over constrained observation, i.e. a high-dimensional signal that encodes the low-dimensional state, one could learn a latent space embedding, whereas dynamics in the latent space are described by Lagrangian mechanics. \n\n\n3) \nAs the Euler-Lagrange equation (Eq. 3) applies to vibrations and soft robotics, where the state dimensionality is not finite. One could apply an extension of the current approach to soft robotics. We represent the kinetic energy as T = 1/2 \\dot{q} H(q) \\dot{q}, which applies to system with finite particles. 
For soft robotics, one would need to represent the kinetic energy as a continuous function. Therefore, the Euler-Lagrange equation would not simplify to an ODE and one would need to incorporate the PDE. We are currently exploring this direction and don't see any structural problems. \n\n\n4)\nThank you for bringing up the different kinematic structures. The problems with closed-loop kinematics are mainly due to the use of the Newton-Euler formalism. In contrast, the Lagrangian Mechanics formalism applies to any non-relativistic multi-particle system with holonomic constraints. As closed-loop kinematics only require holonomic constraints, learning the dynamics of closed-loop kinematics can be achieved with DeLan. Older works [1, 2, 3] used Lagrangian Mechanics to manually derive the dynamics of closed-loop kinematics. \n\nCurrently, we are looking for publicly available model files of parallel robots (*.sdf or *.urdf) and will try to include such evaluations. So far, we have not been able to find such a robot description. If you are aware of such models, we would appreciate your help. \n\nRegarding the contact dynamics: if one can observe the contact force and the point of contact, one can include the contact forces within the learning (see point (1)). If neither is known, the learning problem would be too ambiguous. However, if one has learned the contact-free dynamics, one can compute the external forces on the end-effector and perform force control without additional sensors. Using system identification, this sensorless force control has been demonstrated by Wahrburg et al. [4]. \n\n\n5) \nYes, we are definitely planning on exploring this approach in future work. We want to use the forward model for planning and compare the performance to black-box model learning. When using the forward model, we will compare to the recent work from DeepMind and other authors. \n\n\n[1] Miller, K., 1992. The Lagrange-based model of Delta-4 robot dynamics. Robotersysteme, 8, pp.49-54.\n\n[2] Liu, K., Lewis, F., Lebret, G., & Taylor, D., 1993. The singularities and dynamics of a Stewart platform manipulator. Journal of Intelligent and Robotic Systems, 8(3), pp.287-308.\n\n[3] Geng, Z., Haynes, L.S., Lee, J.D. and Carroll, R.L., 1992. On the dynamic model and kinematic analysis of a class of Stewart platforms. Robotics and Autonomous Systems, 9(4), pp.237-254.\n \n[4] Wahrburg, A., Bös, J., Listmann, K. D., Dai, F., Matthias, B., & Ding, H., 2018. Motor-current-based estimation of Cartesian contact forces and torques for robotic manipulators and its application to force control. IEEE Transactions on Automation Science and Engineering, 15(2), pp.879-886.", "Thank you for providing such an extensive review and raising these important questions. If you have further questions, please feel free to ask.\n\n1) \nSorry that we have been imprecise on the naming convention. We have updated the paper to make the differences clearer. We replaced Lagrange-Euler PDE with Lagrange-Euler equation and highlight that Eq. 3 can be either a PDE or an ODE while Eq. 4 is an ODE. After Eq. 4, we removed the term PDE. Within the related work section, we use the PDE terminology if the references refer to PDEs. \n\n\n2) \nThank you for bringing up this point, as it sparked further discussions and new research questions. Until now, we have learned the potential forces g(q) directly, as this is standard in robotic applications, and for our experiments U(q) is not required.
However, if one learns dU/dq and U simultaneously, one could extend the cost function with energy conservation and derive energy-based controllers. \n\nA quick offline verification on the simulated WAM data showed that learning dU/dq is possible but currently achieves lower performance. Right now, we cannot conclude whether this lower performance is due to hyperparameter settings. Therefore, we are running hyperparameter sweeps to compare the differences. \n\n\n3) \nYes, one could just incorporate \dot{q} within g and let g model a mixture of gravity and friction. However, this would contradict Lagrangian Mechanics, where g is the derivative of a potential energy E_pot. We would add friction by decomposing \tau into the subparts \tau = \tau_{motor} + \tau_{external} + \tau_{friction}. Simple friction models can be added using the Rayleigh dissipation function (https://en.wikipedia.org/wiki/Rayleigh_dissipation_function). However, as described by Albu-Schäffer [1], a "good" friction model for robots is described by \tau_{friction} = f(q, \dot{q}, \tau). Adding such a friction model to Lagrangian mechanics is non-trivial. Especially due to the torque dependency, the computation of the inverse dynamics is challenging. Therefore, incorporating friction would require answering the question of what a sufficiently good friction model is, and would require an extensive empirical comparison of multiple friction models, which would be beyond the scope of this paper.\n\n\n4) \nYes, thank you for bringing this to our attention. We agree that performing such experiments would be interesting, especially for robots with adaptive stiffness as by Braun et al. [2]. The Barrett WAM does not provide separate motor (\theta) and joint (q) positions. We are looking into performing such experiments in simulation. Below we discuss the theoretical complexity, the practical relevance for the performed experiments, and the implementation difficulties. \n\nFrom a theoretical perspective, including this within the learning should not be too difficult. Rather than learning f^{-1}(q, \dot{q}, \ddot{q}) = \tau, one would learn f^{-1}(q, \dot{q}, \ddot{q}) - K (\theta - q) = 0, where K is a diagonal matrix with positive entries. The structure of DeLan could be easily adapted for this. We will also try to show this in a small experiment. \n\nFrom a practical perspective, using this model within the controller is more complex. As described in Equation 13.16 in the Springer Handbook of Robotics [3], the inverse model contains d^3H/dt^3, d^4q/dt^4, d^2g/dt^2, d^2/dt^2 (d(\dot{q}^T H \dot{q})/dq), etc. Therefore, one would need to compute the higher-order derivatives, which will cause numerical issues that in our opinion would do more harm than help. However, we definitely agree that such a model would help when planning with a forward model.\n\nFrom the simulation perspective, we are using PyBullet. To the best of our knowledge, one cannot simulate coupled joints with PyBullet. Therefore, one has two options. First, one could simulate the spring outside of PyBullet, but this would risk the divergence of the integration of \ddot{q} (PyBullet) and \ddot{\theta} (non-PyBullet). Second, one could replace PyBullet with MuJoCo, which can simulate coupled joints, but this would require significant implementation effort. Currently, we are evaluating the effort of both. \n\n\n5) \nYes, one could add this soft constraint as a penalty term. However, the computational overhead of the derivatives is minimal.
The derivative computation (Section 4.2 & 4.3) only requires one clamping operation (for the ReLU non-linearity) and one matrix multiplication per hidden layer. This overhead does not hinder the real-time computations, and hence we prefer the hard constraint over a soft constraint.\n\n\n[1] Alin Albu-Schäffer. Regelung von Robotern mit elastischen Gelenken am Beispiel der DLR-Leichtbauarme [Control of robots with elastic joints, exemplified by the DLR lightweight arms]. PhD thesis, Technische Universität München, 2002.\n\n[2] Braun, D. J., Howard, M., & Vijayakumar, S., 2012. Exploiting variable stiffness in explosive movement tasks. Robotics: Science and Systems VII, 25.\n\n[3] Siciliano, B., & Khatib, O. (Eds.), 2016. Springer Handbook of Robotics. Springer.", "This paper discusses learning of robot dynamics models. They propose to learn the mass matrix\nand the potential forces, which together describe the Lagrangian mechanics of the robot. The unknown\nterms are parametrized as a deep neural network, with some properties (such as positive definiteness)\nhard-coded in the network structure. The experimental results show the learned inverse model being used\nas the feed-forward term for controlling a physical robot. The results show that this approach leads to faster\nlearning, as long as the model accurately describes the system. The paper is well written and seems free\nof technical errors. The contribution is modest, but relevant, and could be a basis for further research. Below\nare a few points that could be improved:\n\n1) The paper uses the term partial differential equation in a non-standard way. While Eqs. 4/5 contain partial derivatives,\nthe unknown function is q, which is a function of time only. Therefore, the Lagrangian mechanics of robot arms are seen\nas ordinary differential equations. The current use of the PDE terms should be clarified, or removed.\n2) It is not made clear why the potential forces are learned directly, rather than as a derivative of the potential energy. Could you discuss the advantages/disadvantages? \n3) Somewhat related to the previous point: the paper presents learning of dissipative terms as a challenge for future works. Given that the formulation directly allows adding \dot{q} as a variable in g, it seems like a trivial extension. Can you make clearer why this was not done in this paper (yet)?\n4) The results on the physical robot arm state that the model cannot capture the cable dynamics, due to being a rigid body model. However, the formulation would allow modelling the cables as (non-linear) massless springs, which would probably\nexplain a large portion of the inaccuracies. I strongly suggest running additional experiments in which the actuator and joints have a separate position, and are connected by springs. If separate measurements of joint position and actuator position are not available on the arm, it would still be interesting to perform the experiments in simulation, and compare the\nperformance on hardware with the analytical model that includes such springs.\n5) The choice is made to completely hardcode various properties of the mass matrix into the network structure. It would be possible to make some of these properties soft-coded.
For instance, the convective term C(q,\dot{q})\dot{q} could be learned separately, with the property C + C^T = \dot{H} encoded as a soft constraint. This would reduce the demand on computing derivatives online.", "This paper looks at system identification for a multi-link robot based upon combining a neural network with the manipulator equations. Specifically, the authors propose to model the robot dynamics using the typical manipulator equations, but have a deep neural network parameterize the H(q) and g(q) terms. They illustrate that the method can control a simulated 2-DoF robot and a real Barrett WAM arm, better than a pure neural network modeling approach, PID control, or an analytic model.\n\nOverall, I think there is a genuinely nice application in this paper, but it's not sufficiently compared to existing approaches nor put in the proper context. There is a lot of language in the paper about encoding the prior via a PDE, but really what the authors are doing is quite simple: they are doing system identification under the standard robot manipulator equations but using a deep network to model the inertia tensor H(q) and the gravity term g(q). Learning the parameters that make up H(q) and g(q) is completely standard system identification in robotics, but it's interesting to encode these as a generic deep network (I'm somewhat surprised this hasn't been done before, though a quick search didn't turn up any obvious candidates). However, given this setting, there are several major issues with the presentation and evaluation, which make the paper unsuitable in its current form.\n\n1) Given the fact that the authors are really just in the domain of system identification and control, there are _many_ approaches that they should compare to. At the very least, however, the authors should compare to standard system identification techniques (see e.g., Wu et al., "An overview of dynamic parameter identification of robots", 2010, and references therein). This is especially important in the real robot case, where the authors correctly mention that the WAM arm cannot be expressed exactly by the manipulator equations; this makes it all the more important to try to identify system parameters via a data-driven approach, not with the hope of finding the exactly "correct" manipulator equations, but with finding some that are good enough to outperform the "analytical" model that the authors mention. It's initially non-obvious to me that a generic neural network modeling the H and g terms would do any better than some of these standard approaches.\n\n2) A lot of the derivations in the text are frankly unnecessary. Any standard automatic differentiation toolkit will be able to compute all the necessary derivatives, and for a paper such as this the authors can simply specify the architecture of the system (that they use a Cholesky factorization representation of H, with diagonals required to be strictly positive) and let everything else be handled by TensorFlow, PyTorch, etc. The derivations in Sections 4.2 and 4.3 aren't needed.\n\n3) The authors keep referring to the Lagrangian equations as a PDE, and while this is true in general, the actual form here is just a second-order ODE; see e.g. https://en.wikipedia.org/wiki/Lagrangian_mechanics.
Moreover, these are really just the standard manipulator equations for multi-link systems, and can just be denoted as such.\n\nDespite these drawbacks, I really do like the overall idea of the approach presented here; it's just that the authors would need to _substantially_ revise the presentation and experiments in order to make this a compelling paper. Specifically, if they simply present the method as a system identification approach for the manipulator equations, with the key terms parameterized by a deep network (and compare to relevant system identification approaches), I think the results here would be interesting, even if they would probably be more interesting to a robotics audience rather than a core ML audience. But as it is, the paper really doesn't situate this work within the proper context, making it quite difficult to assess its importance or significance.", "I like the simplicity of the approach in this paper (especially compared to very computationally hungry methods such as DeepMind's "Graph Networks as Learnable Physics Engines for Inference and Control"). The fact that the approach allows for online learning is also interesting. I very much appreciate that you tested your approach on a real robot arm!\n\nI have a number of questions, which I believe could help strengthen this paper:\n- The decomposition of H into L^T L ensures H is positive definite; however, there are no constraints on g (gravity/external forces). How do you ensure the model doesn't degenerate into only using g and ignoring H? In the current formulation g only depends on q; however, this seems insufficient to model velocity-dependent external forces (e.g. contact dynamics). Please elaborate.\n- How would you handle partial observability of states? Have you tried this? \n- How would you extend this approach to soft robots or robots for which the dimensionality of the state space is unknown?\n- Have you tested your method on systems that are not kinematic chains? How would complex contact dynamics be handled (e.g. legged robots)?\n- It would be interesting to see more comparisons with recent work (e.g. DeepMind's).\n\nSome figures (e.g. Figure 6) are missing units on the axes. Please fix." ]
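For reference, the recurring technical core of the exchange above is the manipulator ODE tau = H(q) \ddot{q} + C(q, \dot{q}) \dot{q} + g(q) (with the property \dot{H} = C + C^T noted in the first review) and the Cholesky-style parameterization of H(q) with a strictly positive diagonal. The following is a minimal, self-contained PyTorch sketch of that structure, referenced from the authors' replies above. It is our own illustration under stated assumptions, not the authors' implementation: the 2-DoF setting, the network sizes, the softplus diagonal, the 1e-3 jitter, and the synthetic data are all choices made only for this example.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
n = 2  # degrees of freedom; 2 is an arbitrary illustrative choice

# networks producing the Cholesky-factor entries of H(q) and the term g(q)
l_net = torch.nn.Sequential(torch.nn.Linear(n, 64), torch.nn.Softplus(),
                            torch.nn.Linear(64, n * (n + 1) // 2))
g_net = torch.nn.Sequential(torch.nn.Linear(n, 64), torch.nn.Softplus(),
                            torch.nn.Linear(64, n))
tril = torch.tril_indices(n, n)

def mass_matrix(q):
    # lower-triangular L(q) with a softplus-positive diagonal, so that
    # H = L L^T is positive definite by construction
    raw = torch.zeros(n, n).index_put((tril[0], tril[1]), l_net(q))
    eye = torch.eye(n)
    L = raw * (1 - eye) + torch.diag(F.softplus(torch.diagonal(raw))) + 1e-3 * eye
    return L @ L.T

def inverse_dynamics(q, qd, qdd):
    # Euler-Lagrange equation for the Lagrangian 1/2 qd^T H(q) qd - V(q),
    # with g(q) modelling dV/dq:
    #   tau = H qdd + dH/dt qd - 1/2 d(qd^T H qd)/dq + g(q)
    H = mass_matrix(q)
    dH = torch.autograd.functional.jacobian(mass_matrix, q, create_graph=True)
    H_dot = torch.einsum('ijk,k->ij', dH, qd)      # dH/dt = sum_k (dH/dq_k) qd_k
    quad = torch.einsum('i,ijk,j->k', qd, dH, qd)  # d(qd^T H qd)/dq
    return H @ qdd + H_dot @ qd - 0.5 * quad + g_net(q)

# one gradient step on synthetic data (placeholders, not real WAM recordings)
q, qd, qdd = torch.randn(n), torch.randn(n), torch.randn(n)
tau_measured = torch.randn(n)
loss = (inverse_dynamics(q, qd, qdd) - tau_measured).pow(2).sum()
loss.backward()  # trains l_net and g_net jointly from torque targets alone
```

Note that the `jacobian` call here is exactly the autodiff route the authors argue is too slow for 200 Hz control; deriving these derivatives analytically instead is part of the paper's contribution.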
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 4, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 3 ]
[ "HJe9uxpo1N", "HJg1lShYk4", "HJg1lShYk4", "iclr_2019_BklHpjCqKm", "BJgEjeIxR7", "H1exdKrp27", "S1xd1wOo2X", "SJxMlEq-pm", "iclr_2019_BklHpjCqKm", "iclr_2019_BklHpjCqKm", "iclr_2019_BklHpjCqKm", "iclr_2019_BklHpjCqKm" ]
iclr_2019_BklMjsRqY7
Accumulation Bit-Width Scaling For Ultra-Low Precision Training Of Deep Networks
Efforts to reduce the numerical precision of computations in deep learning training have yielded systems that aggressively quantize weights and activations, yet employ wide high-precision accumulators for partial sums in inner-product operations to preserve the quality of convergence. The absence of any framework to analyze the precision requirements of partial sum accumulations results in conservative design choices. This imposes an upper-bound on the reduction of complexity of multiply-accumulate units. We present a statistical approach to analyze the impact of reduced accumulation precision on deep learning training. Observing that a bad choice for accumulation precision results in loss of information that manifests itself as a reduction in variance in an ensemble of partial sums, we derive a set of equations that relate this variance to the length of accumulation and the minimum number of bits needed for accumulation. We apply our analysis to three benchmark networks: CIFAR-10 ResNet 32, ImageNet ResNet 18 and ImageNet AlexNet. In each case, with accumulation precision set in accordance with our proposed equations, the networks successfully converge to the single precision floating-point baseline. We also show that reducing accumulation precision further degrades the quality of the trained network, proving that our equations produce tight bounds. Overall this analysis enables precise tailoring of computation hardware to the application, yielding area- and power-optimal systems.
accepted-poster-papers
The authors present a theoretical and practical study on low-precision training of neural networks. They introduce the notion of the variance retention ratio (VRR), which determines the accumulation bit-width for precise tailoring of computation hardware. Empirically, the authors show that their theoretical result extends to practical implementation on three standard benchmarks. A criticism of the paper has been certain hyperparameters that a reviewer found to be chosen rather arbitrarily, but I think the authors do a reasonable job of rebutting it. Overall, there is consensus that the paper presents an interesting framework with both theoretical and empirical analysis, and it should be accepted.
train
[ "SJeEW_UM37", "HygrtjgXAQ", "SyesPogQ0m", "B1lLrixm0m", "SyxIBxbtam", "ByxaQebt6Q", "BJgdyxbYpQ", "r1xM6yWFpX", "HJgqZjSA2Q", "H1lrdaUqnm" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "There has been a lot of work on limited precision training and inference for deep learning hardware, but in most of this work, the accumulators for the multiply-and-add (FMA) operations that occur for inner products are chosen conservatively or treated as having unlimited precision. The authors address this with an analytical method to predict the number of mantissa bits needed for partial summations during the forward, delta and gradient computation ops for convolutional and fully connected layers. They propose an information theoretic approach to argue that by using fewer bits of mantissa in the accumulator than necessary, the variance of the resulting sum is less than what it would have been if sufficient bits of mantissa were used. This is surprising to me, as quantization is usually modeled as _adding_ noise, leading to an _increase_ in variance (Mc Kinstry et al. 2018), so this is a nice counterexample to that intuition. Unfortunately the result is presented in a way that implies the variance reduction is what causes the degradation in performance, while obviously (?) it's just a symptom of a deeper problem. E.g., adding noise or multiplying by a constant to get the variance to where it should be, will not help the network converge. The variance is just a proxy for lost information. The authors should make this more clear.\n\nLoss of variance is regarded as a proxy to the error induced/loss of information due to reduced mantissa prevision. The authors present their metric called Variance Retention Ratio (VRR) as a function of the mantissa length of product terms, partial sum (accumulator) terms, and the length of the accumulation. Thereafter, the mantissa precision of the accumulator is predicted to maintain the error of accumulation within bounds by keeping the VRR as close to 1 as possible. The authors use their derived formula for VRR to predict the minimum mantissa precision needed for accumulators for three well known networks: AlexNet, ResNet 32 and ResNet 18. For tightness analysis they present convergence results while perturbing the mantissa bits to less than those predicted by their formula, and show that it leads to more than 0.5% loss in the final test error of the network.\n\nSome questions that the manuscript leaves open in it's current form:\n\n0. Does this analysis only apply to ReLu networks where all the accumulated terms are positive? Would a tanh nonlinearity, e.g. in an RNN, result in a different kind of swamping behavior? I don't expect the authors to add a full analysis for the RNN case if it's indeed different, but it would be nice to comment on it. \n1. Do the authors assume that the gradients and deltas will always be within the exponent range of representation? I do not find a mention of this in the paper. In other words, are techniques like loss scaling, etc. needed in addition? Other studies in literature analyzing IEEE fp16 seem to suggest so.\n2. The authors do not provide details on how they actually performed the experiments when running convergence experiments. It is not straightforward to change the bit width of the accumulator mantissa in CPU or GPU kernel libraries such as CUDNN or Intel MKL. So how do they model this?\n3. On page 7, the authors point out that they provide a theoretical justification of why the chunk size should neither be too small or too large - but I do not see such a justification in the paper. More detailed explanation is needed.\n\nThere are a few minor typos at a few places, e.g.\n \n1. 
Page 4: “… , there is a an accumulation length….”\n2. Page 6: “…floaintg-point format…\"\n\nSome figures, notably 2 and 5, use text that is unreadably small in the captions. I know this is becoming somewhat common practice in conference submissions with strict page limits, but I implore the authors to consider shaving off space somewhere else. Some of us still read on paper, or don't have the best eyes!", "Dear AnonReviewer2,\n\nThis is a note to let you know that we have uploaded our revision. To make it easy for you to track changes, we have typed all modifications with respect to the original manuscript in blue. \n\nIn particular, at the end of Section 3 (Page 4), we have added a small paragraph to highlight even more how our study is different from mainstream research on neural network quantization. We included a short discussion on RNNs in the conclusion, as per your request. We added mentions of the loss scaling technique used, as well as some details of the partial sum rounding implementation, both at the start of Section 5 (Page 7). Finally, we added a sentence to supplement our explanation of the choice of chunk size in the final paragraph of Section 4 (Page 7) as per our original reply to your review.\n\nWe thank you again for your nice review!\n", "Dear AnonReviewer1,\n\nThis is a note to let you know that we have uploaded our revision. To make it easy for you to track changes, we have typed all modifications with respect to the original manuscript in blue. \n\nIn particular, at the end of Section 3 (Page 4), we have added a small paragraph to highlight even more how our study is different from mainstream research on neural network quantization. Furthermore, at the start of Section 5 (Page 7), we include a sentence motivating the choice of the benchmarks.\n\nWe thank you again for your nice review!", "Dear AnonReviewer3,\n\nThis is a note to let you know that we have uploaded our revision. To make it easy for you to track changes, we have typed all modifications with respect to the original manuscript in blue. \nIn particular, we have further motivated the choice of the two numbers, the v(n) cutoff and the accuracy cut-off, on pages 6 and 8, respectively.\n\nWe thank you again for your nice review!", "\n- Reply to question 0:\n\n-- In our derivation, we do not require ReLU outputs (i.e., no such assumption is made). In addition, one of the three GEMM accumulations, specifically BWD, does not involve activations (ReLU outputs); it involves activation gradients and weights. Even the other two accumulations (FWD and GRAD) accumulate elementwise products of activations and weights/activation gradients. Thus, we never deal with the case of an accumulation where all terms are positive. We do not expect a tanh non-linearity to exhibit any different behavior. \nIn the case of RNNs, we validated our theory on a simple example: PTB language modeling. In the case of PTB, the swamping issue was not severe, mainly due to its short accumulation length (i.e., for PTB-medium with minibatch=20 and timestep=35, the accumulation lengths for FWD, BWD, and WGRAD are only 650, 2600, and 700, respectively). Thus, for this network topology, VRR predicted 5 bits for accumulation (with chunking), which is much smaller than the one used for CIFAR-10. 
We demonstrated successful convergence using this precision of accumulation, as shown in this anonymous link: https://www.dropbox.com/s/yntvzhnvso64z29/ptb.pdf?dl=0\n\nIt is to be noted that a more difficult task, say WMT, typically requires a much larger batch size of a few thousand, so that the GRAD accumulation length could reach a million. This would make such an example very interesting and relevant to our work. Unfortunately, we currently do not have a working setup for training an LSTM on such a task. As we mentioned in the conclusion, this is definitely a topic of our future work. Nevertheless, we will include comments on this issue in our revised version as per your request.\n\n\n- Reply to question 1:\n\n-- Yes, we do make this assumption and solely analyze the mantissa precision requirements. This is mentioned at the start of Section 4. In addition, we do use the technique of loss scaling in our experiments. We just realized that in our submitted draft, we somehow omitted to mention that. We are using the same technique of loss scaling as Micikevicius et al. (2017) and Wang et al. (2018) for the (1,5,2) representation and should have mentioned that at the start of Section 5. Thanks a lot for this question! We will add this mention in the revised version.\n\n\n- Reply to question 2:\n\n-- Very good point. We have used an in-house library to perform the experiments and we could not shed too much light on those details. However, we can answer with the crucial part needed to reproduce the results, which in principle is applicable to any deep learning framework. The key is to modify the CUDA code of the GEMM function. In particular, there is a for loop where the partial sum accumulation occurs. There, we add a call to a custom rounding function (which quantizes the partial sum to the desired reduced-precision floating-point representation). We will add an explanation such as this in the revised version. (For illustration, a small software emulation of this partial-sum rounding and of chunk-based accumulation is sketched after the reviews below.) \n\n\n- Reply to question 3:\n\n-- The justification is actually mentioned in the last paragraph of Section 4 when discussing Fig. 5 (c). Indeed, the “plateaus” in the VRR curves as a function of chunk size indicate that a specific value of chunk size is not of great importance as long as it is neither too small nor too large, since in the extreme cases the VRR does drop. One further intuition we can provide is that, in chunk-based accumulation, there are two sources of swamping errors: swamping in the inter-chunk accumulation and swamping in the intra-chunk accumulations. If the chunk size is too small, then the inter-chunk accumulation is very similar to the original accumulation, while a very large chunk size makes the intra-chunk accumulations similar to the original one. In either case, one of the two types of accumulations would suffer the same fate as the original accumulation.\n\n\nFinally, thanks for your minor comments; we will correct the typos and try to magnify the plots as much as possible. We sympathize with your comments; we also like to read on paper.", "Dear AnonReviewer2,\n\nThank you very much for the thorough review and detailed comments! We are revising our draft and will address your concerns and suggestions. In this reply, we wish to provide answers to some of your comments:\n\n\n- On the symptoms of quantization and ‘increase vs decrease’ in variance:\n\n-- Our work actually does not contradict prior findings. Indeed, prior works have considered representation quantization, that is to say, reducing the precision of weights/activations/gradients. 
It is reasonable and common to model such a phenomenon as additive noise, which statistically increases the variance. What we are looking at in our work is intermediate roundings in partial sums during the accumulation. Indeed, in our work, we fixed the precision of the representations and only reduced the precision of the accumulators for partial sum accumulation. Unlike representation quantization error, the accumulation error is dominated by swamping error.\nBasic statistics tell us that in a dot product (or a sum in general), the addition of terms causes the variance of the result to increase (with the implied assumption that the terms being added are independent (He et al., 2016)). Due to the rounding of partial sums, parts of the addition are swamped away, preventing the variance of the result from growing as expected (as illustrated in Fig. 3). \nThus, as you have correctly pointed out, we have used the loss of variance metric as a proxy for the loss of information due to reduced mantissa precision. Our reasoning is that, should this loss of information be prevented in each dot product (by assigning enough precision), then we may expect the training behavior to be similar to that of the baseline. \nWe will sharpen the above messages in our revised version.\n\n\n", "Dear AnonReviewer1,\n\nThank you very much for your nice review! We are revising our draft to take into account all comments and suggestions. In this reply, we wish to provide a response to some of your comments:\n\n\n- Reply to the comment on originality and significance:\n\n-- We just want to mention that, indeed, there is a large body of work addressing the general problem of quantization and reduced precision in deep learning. Almost all of these works solely focus on the issue of representation quantization (i.e., reducing the precision of weights and/or activations). To this day, the precision of partial sums in accumulations has been largely overlooked. Hence, this is still an unanswered question, and as described in our introduction (third paragraph and Fig. 1 (b)), an important one to address in order to scale down the hardware complexity of deep learning systems. This constitutes the thesis of our work and is why we have claimed that no such work has been done before, and why our paper is of great significance. This is, of course, in addition to the statistical analysis, which is itself novel. \n\n\n- Reply to first minor comment:\n\n-- We also noticed the peculiar pattern around m_acc = 12 & 13 in Fig. 5 (a). Observe that this curve is obtained by evaluation of eq. (2) in Theorem 1, which is clearly a non-linear function of m_acc. We can actually give you a more elaborate answer. We have plotted v(n) as a function of m_acc for several fixed values of n (as in our paper, we used a value of m_p=5). Please check out the plot at this anonymous link: https://www.dropbox.com/s/au9h9v650dvhyxw/variance_lost_fixed_n.pdf?dl=0\nThese plots are for illustrative purposes; note that a fractional value of m_acc has no physical meaning. As we can see, the general trend of variance lost decreasing as a function of m_acc is present (which is expected). However, there is a ‘lobe’ in each curve. This is because there are two sources of errors modeled by eq. (2): full swamping errors (contributing to the equation via the first term in the numerator), and partial swamping errors (contributing to the equation via the second term in the numerator). 
Thus, there is a trade-off between these two sources of errors: in some regime, full swamping would dominate (when m_acc is much higher than m_p), while in another regime partial swamping would dominate (when m_acc is not much higher than m_p). This trade-off causes the ‘bumps’ or ‘lobes’ observed in our linked plot. The effects seem most dramatic for values of m_acc around 12 and 13, which explains the pattern observed in Fig. 5 (a).\n\n\n- Reply to second minor comment:\n\n-- There are two reasons why we have selected our benchmarks. First, the datasets and networks are widely used and popular in such applications. Second, such image datasets and associated convolutional networks present very large accumulation lengths due to the data size and network topology. This makes them very good candidates against which we can verify our work.\n", "Dear AnonReviewer3,\n\nThank you very much for your review and thoughtful comments. Those will be addressed as we prepare our revised draft. In the meantime, we would like to provide a first response:\n\n\n- Response to point 1)\n\n-- In Fig 5 (a&b), the variance lost rapidly increases when v(n)>50 and n increases. On the other hand, when v(n)<50 and n decreases, the variance lost quickly drops to zero. As such, v(n)=50 coincides with the ‘knee’ of the variance lost curve as a function of accumulation length and was therefore chosen as the stability cutoff. Thus, this cutoff was chosen purely based on the accumulation length and precision, independently of the benchmarks we have used.\n\n\n- Response to point 2)\n\n-- It is well accepted that there is an inherent variability in the final error of neural networks when changing the random seed. In ref [a], Table 1 shows that the random-seed effect can reach 0.56% on ImageNet, and in ref [b], page 5, it can reach 0.44% on CIFAR-100. In our experiments, we also observed a variability of ~0.5% in accuracy due to random seed variation on CIFAR-10 and ImageNet. In this paper, we used this well-accepted value for comparison to a baseline. In addition, choosing this value is mainly for illustration purposes and does not change the conclusion of our work. For example, in Fig 6 (d), using the predicted precision assignment, the converged test error can be clearly seen to be very close to the baseline, but increases significantly when the precision is further reduced.\n\n\nWe will motivate the choice of these two numbers more explicitly in the revised draft. We do want to let you know that we appreciate your comment, and we also believe a story is better told when specific numbers do not ‘cloud’ the picture. Hopefully our justification above makes it more convincing.\n\n\nThanks again for your review!\n\n\n[a] Goyal et al. (2018), Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour - https://arxiv.org/pdf/1706.02677.pdf\n[b] Gastaldi (2017), Shake-Shake regularization - https://arxiv.org/pdf/1705.07485.pdf\n\n", "The authors conduct a thorough analysis of the numeric precision required for the accumulation operations in neural network training. The analysis is based on the Variance Retention Ratio (VRR), and the authors show the theoretical impact of reducing the number of bits in the floating-point accumulator. 
Through extensive benchmarks with popular vision models, the authors demonstrate the practical performance of their theoretical analysis.\n\nThere are several points that I am not particularly clear about in this work:\n\n1) In section 4.4, the authors claim to use v(n) < 50 as the cutoff of suitability. This is somewhat arbitrary. As one can imagine, for an arbitrary model of VRR, we can find an empirical cutoff that seems to match benchmarks tightly. Or, to put it another way, this is a hyperparameter that the authors can tune to match their chosen benchmarks. It would be more interesting to see a detailed study on this cutoff on multiple datasets.\n\n2) Again, the 0.5% accuracy cutoff from baseline in the experiment section is also another similar hyperparameter.\n\nIt would be more convincing if we could see a fuller picture of the training dynamics without these two hyperparameters clouding the big picture.\n\nHaving said this, I appreciate the authors' effort in formally studying this problem.", "Quality and clarity:\nThe paper presents a theoretical framework and method to determine the necessary number of bits in deep learning networks. The framework predicts the smallest number of bits necessary in the (multiply-add) calculations (forward propagation, backward propagation, and gradient calculation) in order to keep the precision at an acceptable level. \n\nThe statistical properties of the floating-point calculations form the basis for the approach, and expressions are derived to calculate the smallest number of bits based on, e.g., the length of the dot product and the variance of the numbers. \n\nThe paper seems theoretically correct, although I haven't studied the appendices in detail. The experimental part is good, using three networks of various sizes (CIFAR-10 ResNet 32, ImageNet ResNet 18 and ImageNet AlexNet) as benchmarks. The experimental results support the theoretical predictions. \n\nOriginality and significance:\nThe paper seems original; at least, the authors claim that no such work has been done before, despite the large amount of work done on weight quantization, bit reduction techniques, etc. The paper may have some significance, since most earlier papers have not considered the statistical properties of the reduced precision calculations.\n\nPros:\n* Interesting topic\n* Theoretical predictions match the practical experiments\n\nCons:\n* Nothing particular\n\nMinor:\n* Fig 5a. The curve for m_acc = 13 does not seem to follow the same pattern as the other curves. Why?\n* Motivate why you have selected the networks that you have in the evaluation.\n" ]
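Since the partial-sum rounding is described above only verbally (a custom rounding call inside the GEMM accumulation loop), here is a hedged, pure-Python emulation of the idea, including the chunk-based accumulation discussed in the replies. This is our own sketch, not the authors' in-house CUDA library: the round-to-nearest mode, the chunk size of 64, and the accumulation length of 4096 are assumptions made only for illustration.

```python
import numpy as np

def round_to_mantissa(x, m):
    # round x to m explicit mantissa bits, round-to-nearest (an assumption;
    # the hardware rounding mode is not specified in the replies above)
    if x == 0.0:
        return 0.0
    e = np.floor(np.log2(abs(x)))
    ulp = 2.0 ** (e - m)
    return float(np.round(x / ulp) * ulp)

def reduced_precision_dot(a, b, m_acc):
    acc = 0.0
    for ai, bi in zip(a, b):
        # the rounding call sits inside the accumulation loop, mirroring the
        # custom rounding function inserted into the CUDA GEMM inner loop
        acc = round_to_mantissa(acc + float(ai) * float(bi), m_acc)
    return acc  # swamping: products below the accumulator ulp get lost

def chunked_dot(a, b, m_acc, chunk=64):
    partials = [reduced_precision_dot(a[i:i + chunk], b[i:i + chunk], m_acc)
                for i in range(0, len(a), chunk)]
    return reduced_precision_dot(partials, [1.0] * len(partials), m_acc)

rng = np.random.default_rng(0)
a, b = rng.standard_normal(4096), rng.standard_normal(4096)
print(float(a @ b),                    # float64 reference
      reduced_precision_dot(a, b, 8),  # naive low-precision accumulation
      chunked_dot(a, b, 8))            # chunk-based accumulation
```

With few accumulator mantissa bits, the naive loop visibly loses accuracy (and, over an ensemble, variance) once |acc| dwarfs the incoming products, while chunking mitigates the effect; this is the qualitative behavior that the VRR analysis quantifies.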
[ 7, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2019_BklMjsRqY7", "SJeEW_UM37", "H1lrdaUqnm", "HJgqZjSA2Q", "ByxaQebt6Q", "SJeEW_UM37", "H1lrdaUqnm", "HJgqZjSA2Q", "iclr_2019_BklMjsRqY7", "iclr_2019_BklMjsRqY7" ]
iclr_2019_Bklfsi0cKm
Deep Convolutional Networks as shallow Gaussian Processes
We show that the output of a (residual) CNN with an appropriate prior over the weights and biases is a GP in the limit of infinitely many convolutional filters, extending similar results for dense networks. For a CNN, the equivalent kernel can be computed exactly and, unlike "deep kernels", has very few parameters: only the hyperparameters of the original CNN. Further, we show that this kernel has two properties that allow it to be computed efficiently; the cost of evaluating the kernel for a pair of images is similar to a single forward pass through the original CNN with only one filter per layer. The kernel equivalent to a 32-layer ResNet obtains 0.84% classification error on MNIST, a new record for GPs with a comparable number of parameters.
accepted-poster-papers
This paper builds on a promising line of literature developing connections between Gaussian processes and deep neural networks. Viewing one model under the lens of (the infinite limit of) another can lead to neat new insights and algorithms. In this case the authors develop a connection between convolutional networks and Gaussian processes with a particular kind of kernel. The reviews were quite mixed, with one champion and two just below borderline. The reviewers all believed the paper had contributions which would be interesting to the community (such as R1: "the paper presents a novel efficient way to compute the convolutional kernel, which I believe has merits on its own" and R2: "I really like the idea of authors that kernels based on convolutional networks might be more practical compared to the ones based on fully connected networks"). All the reviewers found the contribution of the covariance function to be novel and exciting. Some cited weaknesses of the paper were the lack of analysis of the model's uncertainty (arguably the main reason for adopting a Bayesian treatment), the limited novelty of appealing to the central limit theorem to arrive at the connection, and the scalability of the model. In the review process it also became apparent that there was another paper with a substantially similar contribution. The decision for this paper was calibrated accordingly with that work. Weighing the strengths and weaknesses of the paper, and taking into account a reviewer willing to champion the work, it seems there is enough novel contribution and interest in the work to justify acceptance. The authors provided responses to the reviewer concerns, including calibration plots and timing experiments, in the discussion period, and it would be appreciated if these could be incorporated into the camera-ready version.
train
[ "BJxop09FRQ", "ryeUF0qtRX", "HJl0JC9F0X", "BklBFxQkT7", "BkxW0nej3X", "HJxvPo8t3m" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you very much for your comments.\n\nTo check the quality of the uncertainties produced by our method, we used GPflow to perform the multi-class classification problem, on the full dataset, with a RobustMax likelihood. To ensure tractability in the harder case with a non-conjugate likelihood, we were forced to use sparse variational inference with 1,000 inducing points chosen randomly from the training set. (Though we expect that carefully optimized code exploiting the GPU for matrix inverses could solve the full problem in the non-conjugate setting). The resulting calibration curves were close to the diagonal, indicating accurate uncertainty estimation. We compared to a GP with an RBF kernel in the same setting, which gave a similarly good calibration curve, but worse test error (3.4% vs 2.4%).\n\nAs regards hyper-parameter optimization, it is important to note that our best architecture, the ResNet GP, was not optimized *at all*, we simply used the 32-layer ResNet architecture directly, replacing the final average-pooling layer with a dense layer. For the lower-performing architectures (ConvNet GP and Residual CNN GP), we followed Lee et al. (2017) in randomly sampling hyperparameters including e.g. the number of layers, and selected the network with the best validation error.\n\nRegarding scalability (as we discuss above in our response to reviewer 1), inverting the kernel matrix takes around a minute (though this can be reduced further by using MAGMA with multiple GPUs), whereas computing the kernel matrix takes longer. Critically, however, computing the kernel matrix is embarassingly parallel, so can be speeded up arbitrarily given sufficiently many GPU's. In contrast, neural network training is an inherently sequential computation, and as such improving training speed with additional GPU's remains an active research topic. Considering the question of practical applicability more broadly, it is important to make two points. First, we agree that there is a need for approximations schemes that are effective in the large-scale neural network domain, and we look forward to future research on this topic. Second, the Gaussian process solves a *really* hard problem: exact inference over all the parameters in an infinitely wide, multilayer ResNet. It is pretty shocking (to us at least) that this problem is tractable at all. In contrast, doing full-gradient HMC for a finite version of this network would be practically impossible.\n\nWe have added a new section in the Appendix, which describes how to extend the results in Matthews et al 2018b to our case.\n\nMinor comments:\n* Thanks, fixed.\n* If we didn't decrease the weights as we increased the number of channels, then the neural network outputs would blow up as we increased the number of channels, because the outputs would be the sum of an increasingly large number of terms. We decrease the weights as we increase the number of channels (as in Lee et al. 2017, and Matthews et al. 2018) to ensure that the neural network outputs have a sensible scale, even as we take the number of channels to infinity.\n* Lee et al. 2017 used a non-convolutional kernel, which is state-of-the-art for nonconvolutional networks, and we have noted this in the text.\n", "Thank you very much for your comments.\n\nQuestions\n- You're right that we design our networks in such a way that the weight-sharing in the convolutional filters becomes irrelevant. However, it is important to note that this only occurs in restricted circumstances. 
In particular, weight-sharing introduces correlations across different locations in a single feature map. It turns out that in many cases, you can design neural network priors with kernels that don't depend on these correlations, and that's what we do. However, for many neural networks, the correlations (and hence the weight sharing) will be relevant. For instance, if we use any type of pooling (such as average pooling at the output), then we would require the correlations within the feature maps, and hence we would need to take into account covariances, and thus weight sharing, throughout the whole network.\n- We agree that a comparison of the performance of finite and infinite CNNs with regard to the loss function and the number of layers is important. Unfortunately, we weren't able to satisfy ourselves that we had a thorough, completely fair comparison in the available time, given the many choices available for SGD, so we'll restrict ourselves to three observations here. First, using proper likelihoods rather than the squared loss function in the GP appears to give little or no improvement in test error, though this may be because we need additional approximations. Second, depth is critical to achieving higher performance, with shallower architectures such as our ConvNet GP and Residual CNN GP (with a maximum of 16 layers) achieving a test error of around 1.0%, as opposed to 0.84% with a deeper 32-layer ResNet. Third, while our architecture is very similar to a ResNet (the only real difference being the replacement of average pooling at the last layer with a dense layer), the performance does not equal that of state-of-the-art residual networks. A thorough investigation of this phenomenon will form an important avenue for future research.\n\nSuggestions:\nWe have given a more thorough discussion of the results, including introducing Table 1 in the Experiments section. We've also added a section to the appendix which extends the proofs in Matthews et al. (2018b) to the convolutional case.\n\nSmall questions/comments:\n- Thanks: fixed.\n- Thanks: fixed.\n- Even for multivariate regression, this is true: the output channels, not locations, correspond to different classes. We have clarified this in the text. It would also be interesting to think about image-scale outputs, which would correspond to e.g. classifying each pixel/region in an image.\n- We need the covariances across data at all corresponding locations in the feature map. We have clarified this formally in the text.\n- We have explicitly noted that the kernel is computed for all mu (previously g).", "Thank you very much for your comments.\n\n1.) We have added a section in the Appendix, where we extend the proof by Matthews et al. (2018) to the convolutional case.\n\nWe have also done a new experiment looking at the behaviour of randomly sampled, finite convolutional networks (Fig. 2). This experiment shows that for 100 filters in the first convolutional layers, the behaviour of finite networks closely matches the infinite limit: the marginals are close to Gaussian, and the moments match closely. For comparison, typical ResNets use from 64 (He et al., 2016a) to 192 (Zagoruyko & Komodakis, 2016) channels in their first layers. \n\n2.) Indeed, inverting the kernel matrix takes around a minute (though this can be reduced further by using MAGMA with multiple GPUs), whereas computing the kernel matrix takes longer. 
Critically, however, computing the kernel matrix is embarrassingly parallel, so it can be sped up arbitrarily given sufficiently many GPUs. In contrast, neural network training is inherently sequential, and as such improving training speed with additional GPUs remains an active research topic. In fact, one of our key results is to show that computing the kernel is surprisingly tractable, as special properties of the kernel corresponding to CNNs can be exploited such that the kernel computation for a pair of images becomes equivalent to a pass through a single-channel CNN (an illustrative sketch of this recursion is given after the reviews below). \n\nConsidering the question of practical applicability more broadly, it is important to make two points. First, we agree that there is a need for approximation schemes that are effective in the large-scale neural network domain, and we look forward to future research on this topic. Second, the Gaussian process solves a *really* hard problem: exact inference over all the parameters in an infinitely wide, multilayer ResNet. It is surprising that this problem is tractable at all. In contrast, doing full-gradient HMC for a finite version of the 32-layer ResNet is unlikely to be tractable.", "This paper\n\n1) extends an argument for the GP behaviour of deep, infinitely-wide fully-connected networks to convolutional and residual deep neural networks with infinitely many channels and\n2) provides a computationally tractable approach to compute the corresponding GP kernel. This kernel has few hyper-parameters, and achieves state-of-the-art results on the MNIST dataset. 
\n\nWhile point (1) is a relatively straightforward adaptation of Lee et al. (2017) and Matthews et al. (2018) to a different network structure, point (2) is original and non-trivial. All in all, I think this paper makes a significant contribution that I believe will spark interesting follow-up work (hinted at in the last section of the paper).\n\nQuestions:\n\n- In my understanding, the kernels of Section 3 do not require the weight matrices W to share the same values across rows. Accordingly, their performance cannot necessarily be explained by properties of convolutional filters (in particular translation invariance). Can the authors comment on that?\n- What would be the performance of a parametric CNN trained with SGD that matches the architecture (# layers) & the squared loss function of ResNet GP? The only point of comparison is Chen et al. (2018), which I suppose optimizes a log loss? Specifically, I would like to understand the impact of the loss function and of the number of layers on the relative performance of the two approaches.\n\nThe paper is clear and easy to follow. A few suggestions:\n\n- I recommend turning the argument in section 2.2 into a formal, self-contained theorem that states a result on A_L, defined in eq. 17 (which I would move to the main text). This would make the precise claim easier to understand.\n- I suggest including a more thorough discussion of the results. Table 1 is only introduced in the related work section.\n- If space is a concern, I would move part of Section 2.2 outside of the main text, since it mostly follows Lee et al. & Matthews et al.\n\nSmall questions/comments:\n\n- Eqs 1 and 2: b_j should be multiplied by the all-ones vector, just like in (5) and (6).\n- Below eq. 5: "while the *elements of the* feature maps themselves display..."\n- Paragraph above eq. 7: "in order to achieve an output suitable for *binary* classification or *univariate* regression"\n- Paragraph above eq. 7: "if we only need the covariance at *certain* locations in the outputs..."\n- Algorithm 1: you might want to add a loop over g for clarity", "This paper shows that deep convolutional networks (CNNs, without pooling) with a suitable prior over weights can be seen as shallow Gaussian processes (GPs) with a specific covariance function. It shows that this covariance function can be computed efficiently (when compared to previous attempts at resembling convolutional networks with GPs), with a cost that only depends linearly on the number of layers and the input dimensionality, i.e.~O(N^2 L D). \n\nTo show the equivalence between deep CNNs and shallow GPs, the paper uses similar ideas to those proposed by Matthews et al (2018a) and Lee et al (2017), i.e. using the multivariate central limit theorem in very large networks, where in the case of this paper the limit is taken as the number of channels at each layer goes to infinity. Therefore, from a theoretical perspective, these ideas have been proposed before. However, the paper presents a novel efficient way to compute the convolutional kernel, which I believe has merits on its own. \n\nHowever, the model setting for classification (where deep CNNs have been successful) and the consequent evaluation on the MNIST dataset is less than convincing. One of the main motivations for Bayesian CNNs and GPs (and the paper argues for this in the intro) is to be able to provide good uncertainty estimates. 
However, the classification problem is framed in a regression setting, where probabilistic estimates are neither evaluated nor even provided. Indeed, only the error rate is given in Table 1. To me, this is certainly not enough for a Bayesian/GP method and it is a critical deficiency of the paper in its current form. While I understand having a non-Gaussian likelihood will complicate things and conflate the kernel contribution with the approximations, I believe it is necessary to provide and evaluate such probabilistic estimates and compare them to other GP approaches (even using other less than satisfying methods such as calibration/scaling). Along a similar vein, it is unclear what objective function was used for hyper-parameter learning but, given that the authors actually “sample hyper-parameters”, I am guessing a proper probabilistic objective such as the marginal likelihood is out of the question.\n\nOther (perhaps minor) deficiencies are that the method is not scalable to large datasets (I am even surprised the authors managed to run this on full MNIST) and that no theoretical analysis is done (e.g. as in Matthews et al., 2018a). \n\nMinor comments:\n\n* In the intro, “Other methods such as Gaussian Processes”: GPs are not a method and I believe the authors really mean here Gaussian process regression. \n* The prior variance over filters in Eq (3) divides over the number of channels. Why does a Gaussian prior with infinite precision make sense here?\n* The authors should report the state of the art of using GPs for MNIST classification using non-convolutional kernels.\n" ]
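The "pass through a single-channel CNN" claim in the authors' replies above can be made concrete with a short sketch. The following is our own 1-D simplification of the kernel recursion for a vanilla ReLU ConvNet; it is not the paper's Algorithm 1, which handles 2-D images and the ResNet case. The hyperparameter values, the 'valid' convolution, and the uniform patch weighting are assumptions made only for illustration.

```python
import numpy as np

def relu_ek(kxx, kxy, kyy):
    # E[relu(u) relu(v)] for zero-mean jointly Gaussian (u, v): the
    # order-1 arc-cosine kernel map, applied pointwise
    s = np.sqrt(np.maximum(kxx * kyy, 1e-24))
    theta = np.arccos(np.clip(kxy / s, -1.0, 1.0))
    return s * (np.sin(theta) + (np.pi - theta) * np.cos(theta)) / (2 * np.pi)

def cnn_gp_kernel_1d(x, y, depth=3, width=3, sw2=2.0, sb2=0.1):
    # propagate three covariance "signals" through the layers; each layer is
    # a uniform 'valid' convolution followed by the pointwise ReLU map
    kxy, kxx, kyy = x * y, x * x, y * y
    ones = np.ones(width) / width
    for _ in range(depth):
        kxy = sb2 + sw2 * np.convolve(kxy, ones, mode='valid')
        kxx = sb2 + sw2 * np.convolve(kxx, ones, mode='valid')
        kyy = sb2 + sw2 * np.convolve(kyy, ones, mode='valid')
        kxy = relu_ek(kxx, kxy, kyy)  # must precede the kxx/kyy updates
        kxx = relu_ek(kxx, kxx, kxx)  # equals kxx / 2 for ReLU units
        kyy = relu_ek(kyy, kyy, kyy)
    return sb2 + sw2 * np.mean(kxy)   # final dense read-out layer

rng = np.random.default_rng(1)
x, y = rng.standard_normal(32), rng.standard_normal(32)
print(cnn_gp_kernel_1d(x, y), cnn_gp_kernel_1d(x, x))
```

Each layer touches only three covariance signals of the same size as a feature map, which is why evaluating the kernel for a pair of inputs costs about as much as one forward pass of a single-channel network.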
[ -1, -1, -1, 5, 8, 5 ]
[ -1, -1, -1, 5, 3, 4 ]
[ "HJxvPo8t3m", "BkxW0nej3X", "BklBFxQkT7", "iclr_2019_Bklfsi0cKm", "iclr_2019_Bklfsi0cKm", "iclr_2019_Bklfsi0cKm" ]
iclr_2019_BklhAj09K7
Unsupervised Domain Adaptation for Distance Metric Learning
Unsupervised domain adaptation is a promising avenue to enhance the performance of deep neural networks on a target domain, using labels only from a source domain. However, the two predominant methods, domain discrepancy reduction learning and semi-supervised learning, are not readily applicable when source and target domains do not share a common label space. This paper addresses the above scenario by learning a representation space that retains discriminative power on both the (labeled) source and (unlabeled) target domains while keeping representations for the two domains well-separated. Inspired by a theoretical analysis, we first reformulate the disjoint classification task, where the source and target domains correspond to non-overlapping class labels, to a verification one. To handle both within and cross domain verifications, we propose a Feature Transfer Network (FTN) to separate the target feature space from the original source space while aligned with a transformed source space. Moreover, we present a non-parametric multi-class entropy minimization loss to further boost the discriminative power of FTNs on the target domain. In experiments, we first illustrate how FTN works in a controlled setting of adapting from MNIST-M to MNIST with disjoint digit classes between the two domains and then demonstrate the effectiveness of FTNs through state-of-the-art performances on a cross-ethnicity face recognition problem.
accepted-poster-papers
This paper proposes a new solution for tackling domain adaptation across disjoint label spaces. Two of the reviewers agree that the main technical approach is interesting and novel. The final reviewer asked for clarification of the problem setting which the authors have provided in their rebuttal. We encourage the authors to include this in the final version. However, there is also a consensus that more experimental evaluation would improve the manuscript and complete experimental details are needed for reliable reproduction.
train
[ "SJei8ZU81E", "BklPeLVUyN", "HJx8mmk7Am", "Sy86MymAm", "rylttGymA7", "BJecfeN52Q", "HylVtcW53Q", "ByemonjVnm" ]
[ "author", "public", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Hi Hui-Po, \n\nThanks for your comment.\n\nAs you mentioned, the conventional domain adaptation problems assume the same \"task\" between the source and the target domains and this allows to transfer discriminative knowledge (e.g., classifier) learned from the source domain to the target domain. On the other hand, not all domains with significant domain shift in the input data space share the same output label spaces, such as cross-ethnicity face recognition or other applications in [1].\n\nIn this work, we resolve such limitation of conventional domain adaptation methods and provide a framework that is also applicable when label spaces of two domains are disjoint by converting disjoint identification tasks into a shared verification task. Note that, as we clarified in our response to R3, the conversion of identification to verification allows the problem definition fits perfectly into that of domain adaptation as the source and target domains now have the shared verification task. That being said, the knowledge we are transferring from source to the target domain is verification, i.e., binary classification for pair of data being the same class or not. This is also evident from our theoretical analysis presented in Section 3 and Appendix A where we prove that the verification error defined on the pair of data from the target domain can be bounded by the verification error on the source pair and the domain discrepancy.\n\nHope this clarifies your concern on \"what kind of knowledge is being transferred\" between two domains. Please let us know if further clarification is required.\n\n[1] Luo et al., Label efficient learning of transferable representations across domains and tasks, NIPS 2017", "Hi authors,\n\nI appreciate you provide thorough and various extension of existing loss functions. However, I would like to know further what's the main problem you want to solve in this work. It seems not to be clear to me.\n\nLet me make a guess and maybe explain the main idea in other words. The proposed method is trying to leverage the \"semantic\" knowledge in the source domain and perform \"clustering\" on those target samples with unseen labels (because labels are disjoint).\n\nAssuming I am correct above, I would like to ask the following questions:\n\nIn conventional domain adaptation problem, we usually assume that both domains share some common knowledge so that you can utilize the knowledge (labels and corresponding discriminative power) from the source domain to solve similar problems in the target domain. In your work, however, both input and label spaces are \"disjoint\". I am curious what kind of knowledge you would like to transfer to the target domain and how you can make sure that the knowledge can be applied to those target samples with unseen labels. If these problems are not clarified, as mentioned by reviewer 3, I would say the major improvement all comes from MCEM, which performs clustering algorithm on target samples, instead of the proposed method.\n\nIf I made any mistake above, please correct me directly.\nThank you for your patient reading.\n\nbest,\n\n", "We thank the reviewer for their valuable comments.\n\n(In response to 3) We argue that many distance metric adaptation or transfer learning algorithms in deep learning are based on distribution matching. For example, [3,4] uses discriminator-based adversarial loss and [5] uses kernel-based MMD loss to reduce the domain discrepancy. 
Regardless of the discriminator or the kernel, these methods will push the two domains close together and thus have the same limitation as DANN. The proposed FTN resolves this issue by learning a “domain-equivariant” representation, and we provide empirical evidence (e.g. Table 2 or Figure 2(b-c)) using DANN as the most representative baseline. While one may try adding more components, such as deep supervision (e.g., applying the MMD loss at multiple feature layers) as in [5], we believe that our contribution is orthogonal and complementary to those additional components. \n\n(In response to 3) We note that the MCEM is one of our novel contributions, which is only made available through our view on converting the classification task into verification. We agree that it plays a critical role in obtaining a highly discriminative representation. For example, [6] considers a similar setting of domain adaptation with disjoint label spaces, but they require labeled examples and a complete definition of the label space of the target domain to apply classification-based adversarial adaptation learning and entropy regularization. Nonetheless, we provide the within-domain (Table 1) and cross-domain (Table 2) identification accuracy of DANN+MCEM below. We will include this result in the revision:\n\nDANN (for within-domain identification, CAU / AA / EA / ALL; for cross-domain, CAU / AA / EA):\nwithin-domain identification: 89.5 / 75.3 / 78.0 / 78.9\ncross-domain identification: 89.6 / 83.9 / 86.5\n\nDANN+MCEM: \nwithin-domain identification: 90.0 / 80.1 / 81.4 / 81.9\ncross-domain identification: 89.4 / 87.1 / 89.1\n\nFTN+MCEM:\nwithin-domain identification: 90.3 / 80.7 / 82.3 / 83.4\ncross-domain identification: 94.0 / 93.1 / 92.8\n\nSimilarly to the FTN, we observe an improvement using MCEM with DANN, as compared to the DANN-only model. Comparing adaptation models with MCEM, we still observe better performance when combined with FTN. In particular, the contrast in performance becomes significant in the cross-domain identification task, which confirms the unique capability of FTN in learning to transfer discriminative knowledge by alignment while separating representations across domains.\n\n\n(In response to 1) Our problem setting is adaptation from labeled source to unlabeled target with disjoint label spaces. Following the nomenclature of [1], it contains flavors from both domain adaptation (DA) and transfer learning (TL). The difference in input distribution between source and target domains and the lack of labels in the target domain are similar to that of DA or transductive TL [1], while the difference in label distribution and task definitions between two domains is akin to inductive TL [1,2]. In our work, we formalize this problem in a domain adaptation framework using verification as a common task. This is a key contribution that allows theoretical analysis of the generalization bound as presented in Section 3 and Appendix A, while also allowing important novel applications like cross-ethnicity face recognition.\n\n\n(In response to 2) We acknowledged in the second paragraph of Section 2 some existing works on domain adaptation that use the verification loss for problems such as face recognition and person re-identification, while highlighting our novel contribution. 
We will include more discussion and references [5] related to this.\n\n\n[1] Pan and Yang, A survey on Transfer Learning, 2010\n[2] Daume, https://nlpers.blogspot.com/2007/11/domain-adaptation-vs-transfer-learning.html\n[3] Ganin et al., Domain Adversarial Training of Neural Networks, JMLR 2016\n[4] Sohn et al., Unsupervised domain adaptation for face recognition in unlabeled videos, ICCV 2017\n[5] Hu et al., Deep Transfer Metric Learning, CVPR 2015\n[6] Luo et al., Label efficient learning of transferable representations across domains and tasks, NIPS 2017\n", "We thank the reviewer for their valuable comments.\n\nWe understand the concern in Table 3 that the performance improvement is not as significant as in Table 1. As mentioned in footnote 5, we observe that the ethnicity bias not only exists in the training dataset, but also in public benchmark datasets, such as LFW or IJB-A. While we observe the benefit of FTN over the source-only model in all evaluation metrics, or over DANN in the low-FAR regime, thus requiring more within- as well as cross-domain discriminativeness, we believe that these datasets may not be the best to evaluate the fairness of face recognition algorithms. This indeed is our motivation to collect an ethnicity-balanced test dataset for fair evaluation. We will make the dataset publicly available to the community upon publication.", "We thank the reviewer for their valuable comments.\n\n1. We clarify that the reference network is pretrained on the labeled source data and fixed throughout the training of DANN/FTN. In other words, the gradient in Equation (6) is only backpropagated through f, but not through f_{ref}.\n\nWe note that the training procedure of the reference network resembles the training of the teacher network in the distillation framework [1], in the sense that both the teacher network and our reference network are “pretrained and fixed” during the training of the student or DANN/FTN, respectively.\n\n[1] Hinton et al., Distilling the knowledge in a neural network, NIPS 2014 DL Workshop\n\n2. We will add a reference (section 3 and appendix) as suggested.\n", "The authors studied an interesting problem of unsupervised domain adaptation when the source and the target domains have disjoint label spaces. The paper proposed a novel feature transfer network that optimizes a domain adversarial loss and a domain separation loss.\n\nStrengths:\n\n1) The proposed Feature Transfer Network approach was novel and interesting.\n2) The paper was very well written with a good analysis of various choices.\n3) Extensive empirical analysis on multi-class settings with a traditional MNIST dataset and a real-world face recognition dataset. \n\n\nWeakness:\n1) Practical considerations addressing the feature reconstruction loss need more explanation.\n\nComments:\n\nThe technical contribution of the paper was sound and novel. The paper considers existing work and generalizes and extends it in a sound way to disjoint label spaces. It was easy to read and follow; most parts of the paper, including the Appendix, make it a good contribution. However, the reviewer has the following suggestions: \n\n1. Under the practical considerations for preventing the mode collapse via feature reconstruction, how is the reference network trained? In Equation (6) for feature reconstruction, the f_ref term maps the source and target domain examples to a new feature space. What do you mean by the reference network trained on the labeled data? Please clarify.\n\n2. 
Under the practical considerations for replacing the verification loss, it is said that \"Our theoretical analysis suggests to use a verification loss that compares the similarity between a pair of images\" - Can you please cite the references to make it easier for the reader to follow.", "In this work, the authors consider the transfer learning problem when labels for the target domain are not available. Unlike conventional transfer learning, they introduce a new loss that separates examples from different domains. Besides, they apply the multi-class entropy minimization to optimize the performance in the target domain. Here are my concerns.\n1.\tThe concept is not clear. For domain adaptation, we usually assume domains share the same label space. When labels are different, it can be a transfer learning problem.\n2.\tOptimizing the verification loss is conventional for distance metric learning based transfer learning, and the authors should discuss this more in the related work.\n3.\tThe empirical study is not sufficient. It lacks a method of transfer learning with distance metric learning. Moreover, the major improvement seems to come from the MCEM rather than the proposed network. How about DANN+MCEM?\n", "I like the idea of the paper and I believe it addresses a very relevant problem. While the authors provide a good formalization of the problem and a convincing demonstration of the generalization bound, the evaluation could have been better by including some more challenging experiments to really prove the point of the paper. It is surely good to present the toy example with the MNIST dataset but the ethnicity domain is less difficult than what the authors claim. This is also pretty evident from the results presented (e.g., in Table 3). The proposed approach provides maybe slightly better results than the state of the art but the results do not seem to be statistically significant. This is probably also due to the fact that the problem itself is made simpler by the cropped faces, no background, etc. I would have preferred to see an application domain where the improvement would be more substantial. Nevertheless, I think the theoretical presentation is good and I believe the manuscript has very good potential. " ]
[ -1, -1, -1, -1, -1, 8, 5, 8 ]
[ -1, -1, -1, -1, -1, 5, 4, 4 ]
[ "BklPeLVUyN", "iclr_2019_BklhAj09K7", "HylVtcW53Q", "ByemonjVnm", "BJecfeN52Q", "iclr_2019_BklhAj09K7", "iclr_2019_BklhAj09K7", "iclr_2019_BklhAj09K7" ]
iclr_2019_BkloRs0qK7
A comprehensive, application-oriented study of catastrophic forgetting in DNNs
We present a large-scale empirical study of catastrophic forgetting (CF) in modern Deep Neural Network (DNN) models that perform sequential (or: incremental) learning. A new experimental protocol is proposed that takes into account typical constraints encountered in application scenarios. As the investigation is empirical, we evaluate CF behavior on the hitherto largest number of visual classification datasets, from each of which we construct a representative number of Sequential Learning Tasks (SLTs) in close alignment to previous works on CF. Our results clearly indicate that there is no model that avoids CF for all investigated datasets and SLTs under application conditions. We conclude with a discussion of potential solutions and workarounds to CF, notably for the EWC and IMM models.
accepted-poster-papers
This paper has two main contributions. The first is that it proposes a specific framework for measuring catastrophic forgetting in deep neural networks that incorporates three application-oriented constraints: (1) a low memory footprint, which implies that data from prior tasks cannot be retained; (2) causality, meaning that data from future tasks cannot be used in any way, including hyperparameter optimization and model selection; and (3) update complexity for new tasks that is moderate and also independent of the number of previously learned tasks, which precludes replay strategies. The second contribution is an extensive study of catastrophic forgetting, using different sequential learning tasks derived from 9 different datasets and examining 7 different models. The key conclusions from the study are that (1) permutation-based tasks are comparatively easy and should not be relied on to measure catastrophic forgetting; (2) with the application-oriented constraints in effect, all of the examined models suffer from catastrophic forgetting (a result that is contrary to a number of other recent papers); (3) elastic weight consolidation provides some protection against catastrophic forgetting for simple sequential learning tasks, but fails for more complex tasks; and (4) IMM is effective, but only if causality is violated in the selection of the IMM balancing parameter. The reviewer scores place this paper close to the decision boundary. The most negative reviewer (R2) had concerns about the novelty of the framework and its application-oriented constraints. The authors contend that recent papers on catastrophic forgetting fail to apply these quite natural constraints, leading to the deceptive conclusion that catastrophic forgetting may not be as big a problem as it once was. The AC read a number of the papers mentioned by the authors and agrees with them: these constraints have been, at least at times, ignored in the literature, and they shouldn't be ignored. The other two reviewers appreciated the scope and rigor of the empirical study. On balance, the AC thinks this is an important contribution and that it should appear at ICLR.
train
[ "B1ljgG7inm", "H1lXXQf5n7", "ryereK-TnQ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for the updates and rebuttals from the authors. \n\nI now think including the results for HAT may not be essential for the current version of the paper. I now understand better about the main point of the paper - providing a different setting for evaluating algorithms for combatting CF, and it seems the widespread framework may not accurately reflect all aspects of the CF problems. \n\nI think showing the results for only 2 tasks are fine for other settings except for DP10-10 setting, since most of them already show CF in the given framework for 2 tasks. Maybe only for DP10-10, the authors can run multiple tasks setting, to confirm their claims about the permuted datasets. (but, I believe the vanilla FC model should show CF for multiple permuted tasks.)\n\nI have increased my rating to \"6: Marginally above acceptance threshold\" - it could have been much better to at least give some hints to overcome the CF for the proposed setting, but I guess giving extensive experimental comparisons could be valuable for a publication. \n\n=====================\nSummary:\n\nThe paper evaluates several recent methods regarding catastrophic forgetting with some stricter application scenarios taken into account. They argue that most methods, including EWC and IMM, are prone to CF, which is against the argument of the original paper. \n\nPro:\n- Extensive study on several datasets, scenarios give some intuition and feeling about the CF phenomenon. \n\nCon:\n- There are some more recent baselines., e.g., Joan Serrà, Dídac Surís, Marius Miron, Alexandros Karatzoglou, \"Overcoming catastrophic forgetting with hard attention to the task\" ICML2018, and it would be interesting to see the performance of those as well. \n- The authors say that the permutation based data set may not be useful. But, their experiments are only on two tasks, while many work in literature involves much larger number of tasks, sometimes up to 50. So, I am not sure whether the paper's conclusion that the permutation-based SLT should not be used since it's only based on small number of tasks. \n- While the empirical findings seem useful, it would have been nicer to propose some new method that can get around the issues presented in the paper. ", "# [Updated after author response]\nThank you for your response. I am happy to see the updated paper. In particular, the added item in section 1.3 highlights where the novelty of the paper lies, and as a consequence, I think the significance of the paper is increased. Furthermore, the clarity of the paper has increased. \n\nIn its current form, I think the paper would be a valuable input to the deep learning community, highlighting an important issue (CF) for neural networks. I have therefore increased my score.\n\n------------------------------------------\n\n# Summary\nThe authors present an empirical study of catastrophic forgetting (CF) in deep neural networks. Eight models are tested against nine datasets with 10 classes each but a varying number of samples. The authors construct a number of sequential learning tasks to test the model performances in different scenarios. The main conclusion is that CF is still a problem in all models, despite claims in other papers.\n\n# Quality\nThe paper shows healthy criticism of the methods used to evaluate CF in previous works. I very much like this.\n\nWhile I like the different experimental set-ups and the attention to realistic scenarios outlined in section 1.2, I find the analysis of the experiments somewhat superficial. 
The accuracies of each model for each task and dataset are reported, but there is little insight into what causes CF. For instance, do some choices of hyperparameters consistently cause a higher/lower degree of CF across models? I also think the metrics proposed by Kemker et al. (2018) are more informative than just reporting the last and best accuracy, and that including these metrics would improve the quality of the paper.\n\n# Clarity\nThe paper is generally clearly written and distinct paragraphs are often highlighted, which makes reading and getting an overview much easier. In particular, I like the summary given in sections 1.3 and 1.4.\n\nSection 2.4 describing the experimental setup could be clearer. It takes a bit of time to decipher Table 2, and it would have been good to have a few short comments on what the different types of tasks (D5-5, D9-1, DP10-10) will tell us about the model performances. E.g. what do you expect to see from the experiments of D5-5 that is not covered by D9-1 and vice versa? And why is the number of tasks in each category so different (8 vs 3 vs 1)?\n\nI am not a huge fan of 3D plots, and I don't think they do anything good in section 4. The perspective can make it tricky to compare models, and the different graphs overshadow each other. I would prefer 2D plots in the supplementary, with a few representative ones shown in the main paper. I would also experiment with turning Table 3 into a heat map.\n\n# Originality\nTo my knowledge, the paper presents the largest evaluation of CF in terms of evaluated datasets. Kemker et al. (2018) conduct a somewhat similar experiment using fewer datasets, but a larger number of classes, which makes the CF even clearer. I think it would be good to cite this paper and briefly discuss it in connection with the current work.\n\n# Significance\nThe paper is mostly a report of the outcome of a substantial experiment on CF, showing that all tested models suffer from CF to some extent. While this is interesting and useful to know, there is not much to learn in terms of what can cause or prevent CF in DNNs. The paper's significance lies in showing that CF is still a problem, but there is room for improvement in the analysis of the outcome of the experiments.\n\n# Other notes\nThe first sentence of the second paragraph in section 5 seems to be missing something.\n\n# References\nKemker, R., McClure, M., Abitino, A., Hayes, T., & Kanan, C. (2018). Measuring Catastrophic Forgetting in Neural Networks. In AAAI Conference on Artificial Intelligence. https://aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/16410", "The paper presents a study of the application of some well-known methods on 9 datasets, focusing on the issue of catastrophic forgetting when considering a sequential learning task in them. In general, the presentation of concepts and results is a bit problematic and unclear. Claims such as that the paper presents 'a novel training and model selection paradigm for incremental learning in DNNs' are not justified. A better description of the results, e.g., in Table 3, should be presented, as well as better linking with the findings; a better structure of the latter would also be required to improve their consistency. Improving these could make the paper a candidate for a poster presentation. " ]
[ 6, 7, 5 ]
[ 5, 3, 4 ]
[ "iclr_2019_BkloRs0qK7", "iclr_2019_BkloRs0qK7", "iclr_2019_BkloRs0qK7" ]
iclr_2019_BkltNhC9FX
Posterior Attention Models for Sequence to Sequence Learning
Modern neural architectures critically rely on attention for mapping structured inputs to sequences. In this paper we show that prevalent attention architectures do not adequately model the dependence among the attention and output tokens across a predicted sequence. We present an alternative architecture called Posterior Attention Models that, after a principled factorization of the full joint distribution of the attention and output variables, proposes two major changes. First, the position where attention is marginalized is changed from the input to the output. Second, the attention propagated to the next decoding stage is a posterior attention distribution conditioned on the output. Empirically on five translation and two morphological inflection tasks, the proposed posterior attention models yield better BLEU scores and alignment accuracy than existing attention models.
accepted-poster-papers
The reviewers of this paper agreed that it has done a stellar job of presenting a novel and principled approach to attention as a latent variable, providing a new and sound set of inference techniques to this end. This builds on top of a discussion of the limitations of existing deterministic approaches to attention, and frames the contribution well in relation to other recurrent and stochastic approaches to attention. While there are a few issues with clarity surrounding some aspects of the proposed method, which the authors are encouraged to fine-tune in their final version, paying careful attention to the review comments, this paper is more or less ready for publication with a few tweaks. It makes a clear, significant, and well-evaluated contribution to the field of attention models in sequence to sequence architectures, and will be of great interest to many attendees at ICLR.
train
[ "HkeNhpWNyE", "H1lZgtGmyN", "HkeaY7QqR7", "BJe8NreK2m", "B1l0ox750X", "ByebxyX9C7", "r1lWSVVj0Q", "r1xb9bQ9AX", "H1lBWBQcCQ", "SylcXSSt6m", "HyezVpjwpm", "rJlU-8qJTX", "B1xBIw_93X", "HklfYhUe2m" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "public", "author", "author", "public", "public", "public", "official_reviewer", "official_reviewer" ]
[ "Thanks for the suggestion. We will take this into account and contextualize better in the next draft.", "I thank the authors for improving the clarity of the model derivation and updating the paper to mention related work and alternative derivations. I agree that the author's formulation provides novel and interesting insights. However, I would just like the final version of the paper to be more explicit - preferable in both the introduction and model derivation - about the relation of their models to the latent/hard attention models that have been discussed here. Just mentioning these papers in the related work section is not sufficient to fully contextualize this work (as was asked for by the other reviewers and commenters as well). Mentioning that these models are essentially neural generalizations of the classical IBM alignment models (Brown et al., 1993) is also helpful for contextualization. ", "We thank the reviewer for their feedback.\nWe have rewritten the derivation of our factorization and made the assumptions clearer in Section 2.2 .\nSection 2.2.1 has also been revised describing the different variants and their intuition, deriving them all from Eqn 4.\nWe have also fixed some notational discrepancies as pointed out by the reviewer for which we are thankful.\n\nQA\n1)\nWe have rewritten that section, but the simplification comes about because of the Markovian assumption that P(a_t|a_{<t}) = P(a_t|a_{t-1}). This makes \\sum_{a_{t-1}} P(a_t|a_{<t})P(a_{<t}|y_{<t}) = \\sum_{a_{t-1}} P(a_t|a_{<t}) P(a_{t-1}|y_{<t}). \n\n\n2)\nThe Taylor trick was used by [1] to simplify the expectation computation. Essentially if the average value of a function is computed at different points, one can compute the Taylor expansion of the function at average of the points leaving only second order terms.\n\n\\Sigma f(x_i) = \\Sigma f( xm + x_i - xm) = \\Sigma [ f(xm) + f’(xm)(x_i - xm) + second order terms ] = \\Sigma f(xm) + df(xm)\\Sigma(x_i - xm) + second order = \\Sigma f(xm) + df(xm)*0 + second order \\approx \\Sigma f(xm)\n\n3)\ns_t is the decoder state after feeding in output y_{t-1} and attention at step {t-1}. Like in standard seq2seq literature, we rely on the decoding RNN state to capture the dependence on history of output tokens. Under the assumption that y_t depends directly on attention 'a' at t and previous tokens, we use the decoder state s_t and the encoder state x_{a}. Indeed as pointed 'j' was a typo.\n\n4)\nThe main difference between the prior-joint and postr-joint model is which attention gets propagated further down. The prior-joint model behaves analogously to the standard soft-attention in ignoring any interaction between output and attention. In fact, it is a version of an IBM model 1. We have expanded on this in Section3 paragraph 7 and Section4 paragraph 1\n\n[1] Xu et al; Show, attend and tell: Neural image caption generation with visual attention , 2015\n", "Originality: Existing attention models do not statistically express interactions among multiple attentions. The authors of this manuscript reformulate p(y|x) and define prior attention distribution (a_t depends on previous outputs y_<t) and posterior attention distribution (a_t depends on current output y_t as well), and essentially compute the prior attention at current position using posterior attention at the previous position. The hypothesis and derivations make statistical sense, and a couple of assumptions/approximations seem to be mild. \n\nQuality: The overall quality of this paper is technically sound. 
It pushes forward the development of attention models in sequence to sequence mapping.\n\nClarity: The ideas are presented well, if the reader goes through the paper slowly or twice. However, the authors need to clarify the following issues: \nx_a is not well defined. \nIn Section 2.2, P(y) as a short form of Pr(y|x_1:m) could be problematic and confusing regarding which variables the dependency is over. \nPage 3: line 19 of Section 2.2.1, should s_{n-1} be s_{t-1}?\nIn Postr-Joint, Eq. (5) and others, I believe a'_{t-1} is better than a', because the former indicates it is the attention for position t-1.\n\nI am a bit lost in the description of coupling energies. The two formulas for proximity biased coupling and monotonicity biased coupling are not well explained. \n\nIn addition to the above major issues, I also identified a few minor ones: \nsignificant find -> significant finding\nLast line of page 2: should P(y_t|y_<t, a_<n, a_n) be P(y_t|y_<t, a_<t, a_t)?\ntop-k -> top-K\na equally weighted combination -> an equally weighted combination\nSome citations are not used properly, such as last 3rd line of page 4, and brackets are forgotten in some places, etc.\nEnd of Section 3, x should be in boldface.\nnon-differentiability , -> non-differentiability,\nFull stop \".\" is missing in some places.\nLuong attention is not defined.\n\nSignificance: comparisons with an existing soft-attention model and a sparse-attention model on five machine translation datasets show that the performance of using posterior attention is indeed better than that of the benchmark models. \n\nUpdate: I have read the authors' response. My current rating is final.\n", "We thank the reviewer for the feedback. We have discussed the papers mentioned by you and other reviewers in the Related work section, and also added new empirical comparisons. \n\nWe are also very grateful for suggesting the alternative derivation. We have added a discussion regarding your suggestion in Section 2.4. We have also simplified our derivation by explicitly stating and pulling up the\nMarkov assumption about attention dependencies earlier.\n\n\nThe prior joint model is indeed related to a neural IBM model 1, and has been used in multiple recent works as also pointed out by Yoon Kim.\n\nFrom an efficiency perspective, the various posterior attention models are only marginally slower than prior-joint, which does the more compute-intensive part of calculating P(y_t) for each of the top-K attentions. Thereafter, for tasks like translation, the coupled attention computation almost comes for \"free\". In fact, we observed no measurable difference in the average time per step between the two models.\n\nMost seq2seq models rely upon attention feeding at all timesteps, and so we had not experimented with that model. We are providing some of the results of the experiment in the response here.\n Dataset B=4 B=10\nde-en 28.8 28.6\nen-de 24.0 23.9\nen-vi 26.9 26.6\n\nThese numbers are roughly on par with soft-attention and show the importance of feeding the attention context.\n\nWe also ran some experiments with the suggestion of feeding the prior attention, which are as follows:\n B=4 B=10\nen-vi 27.3 27.0\nvi-en 25.7 25.7\n\nThese results are similar to or slightly worse than the prior-joint model. We are currently in the process of evaluating this on more tasks.", "\n1)\nYes, we have used the straight through estimator. On our larger datasets we were not able to do full enumeration because of memory constraints. 
For En-Vi we can run the exact enumeration, and for that task the top-k marginalization reduced time per step by around 50% (0.354s vs 0.655s per step) and the required memory by a factor of 4, with very minor impact on BLEU.\n\n2)\nWe thank you for giving pointers to related work. The reviewers also pointed out similar works. We have discussed them in the Related Work section of the revised version. Also, we have included some experimental comparisons with all of these.\n", "Thanks for the detailed response!", "We thank the reviewer for the comments.\nIn light of comments about some of the notation and description from all reviewers, we have revised the model description considerably. We have also fixed some notational inconsistencies as pointed out.\n\nWe have also revised Section 2.2.1 to better explain the formula and intuition of the coupling energies.", "We have rewritten Section 2.2 of the paper, which simplifies the presentation and makes the need for posterior attention more obvious. The network architecture and connections are the same as in the standard soft-attention model. The difference is entirely in how attention is computed.", "The contribution is interesting, but beyond the experimental part the paper is a little bit too dry. The paper would immensely benefit from a more high-level description and insights about the architecture proposed, as well as a graphical representation (such as a block diagram) to make the architecture understandable at a first glance.", "Yes it'd be nice to see a comparison of this work to (Deng et al., 2018) which also models attention as a latent variable and has released code here: https://github.com/harvardnlp/var-attn", "Hi there, thanks for a very nice paper. It is great to see that posterior inference substantially increases alignment accuracy! I also liked the application of the model across a diverse range of languages/tasks.\n\nI had one quick question, and one comment:\n\nQuestion: \n- How do you differentiate through the top-K approximation? Do you use the straight through estimator? How much faster was top K vs actually enumerating?\n\nComment:\n- There are several recent works that have also formalized attention as a latent variable and have exactly/approximately optimized the log marginal likelihood. It would be great to see this work put in context of existing work!\n\nWu et al. Hard Non-Monotonic Attention for Character-Level Transduction. EMNLP 2018.\nShankar et al. Surprisingly Easy Hard-Attention for Sequence to Sequence Learning. EMNLP 2018.\nDeng et al. Latent Alignment and Variational Attention. NIPS 2018.", "This paper proposes a new sequence to sequence model where attention is treated as a latent variable, and derives novel inference procedures for this model. The approach obtains significant improvements in machine translation and morphological inflection generation tasks. An approximation is also used to make hard attention more efficient by reducing the number of softmaxes that have to be computed. \n\nStrengths:\n- Novel, principled sequence to sequence model.\n- Strong experimental results in machine translation and morphological inflection.\nWeaknesses:\n- Connections can be made with previous closely related architectures.\n- Further ablation experiments could be included. \n\nThe derivation of the model would be clearer if it were first derived without attention feeding: The assumption that the output is dependent only on the current attention variable is then valid. 
The Markov assumption on the attention variable should also be stated as an assumption, rather than an approximation: Given that assumption, as far as I can tell the (posterior) inference procedure that is derived is exact: It is indeed equivalent to using the forward computation of the classic forward-backward algorithm for HMMs to do inference. \nThe model's overall distribution can then be defined in a somewhat different way than the authors' presentation, which I think makes clearer what the model is doing:\np(y | x) = \\sum_a \\prod_{t=1}^n p(y_t | y_{<t}, x, a_t) p(a_t | y_{<t}, x, a_{t-1}). \nThe equations derived in the paper for computing the prior and posterior attention are then just a dynamic program for computing this distribution, and are equivalent to using the forward algorithm, which in this context is:\n \\alpha_t(a) = p(a_t = a, y_{<=t}) = p(y_t | s_t, a_t = a) \\sum_{a'} \\alpha_{t-1}(a') p(a_t = a | s_t, a_{t-1} = a') \n\nThe only substantial difference in the inference procedure is then that the posterior attention probability is fed into the decoder RNN, which means that the independence assumptions are not strictly valid any more, even though the structural assumptions are still encoded through the way inference is done. \n[1] recently proposed a model with a similar factorization, although that model did not feed the attention distribution, and performed EM-like inference with the forward-backward algorithm, while this model is effectively computing forward probabilities and performing inference through automatic differentiation.\n\nThe Prior-Joint variant, though its definition is not as clear as it should be, seems to be assuming that the attention distribution at each time step is independent of the previous attention (similar to the way standard soft attention is computed) - the equations then reduce to a (neural) version of IBM alignment model 1, similar to another recently proposed model [2]. These papers can be seen as concurrent work, and this paper provides important insights, but it would strengthen rather than weaken the paper to make these connections clear. \n\nThe results clearly show the advantages of the proposed approach over soft and sparse attention baselines. However, the difference in BLEU score between the variants of the prior or posterior attention models is very small across all translation datasets, so to make claims about which of the variants are better, at a minimum statistical significance testing should be done. Given that the “Prior-Joint” model performs competitively, is it computationally more efficient than the full model? \n\nThe main missing experiment is not doing attention feeding at all. The other experiment that is not included (as I understood it) is to compute prior and posterior attention, but feed the prior attention rather than the posterior attention. \n\nThe paper is mostly written very clearly; there are just a few typos and grammatical errors in sections 4.2 and 4.3. \n\nOverall, I really like this paper and would like to see it accepted, although I hope that a revised version would make the assumptions the model is making clearer and make connections to related models clearer. \n \n[1] Neural Hidden Markov Model for Machine Translation, Wang et al, ACL 2018. \n[2] Hard Non-Monotonic Attention for Character-Level Transduction, Wu, Shapiro and Cotterell, EMNLP 2018. ", "Pros:\n1. This work presents a novel construction of the popularly-used attention modules. 
It points out the problem in existing designs that attention vectors are only computed based on parametric functions, instead of considering the interactions among each attention step and output variables. To achieve that, the authors re-write the joint distribution as a product of tractable terms at each timestamp and fully exploit the dependencies among attention and output variables across the sequence. The motivation is clear, and the proposed strategy is original and to the point. This makes the work relatively solid and interesting for a publication. Furthermore, the authors propose 3 different formulations for prior attention, making the work even stronger.\n2. The technical content looks good, with each formula written clearly and with sufficient deductive steps. Figure 1 provides a clear illustration of the comparison with traditional attention and shows the advantage of the proposed model.\n3. Extensive experiments are conducted, including 5 machine translation tasks as well as another morphological inflection task. These results make the statement more convincing. The authors also conducted further experiments to analyze the effectiveness, including attention entropy evaluation.\n\nCons:\n1. The rich information contained in the paper is not very well-organized. It takes some time to digest, due to some unclear or missing statements. Specifically, the computation for prior attention should be ordered in a subsection with a section name. The 3 different formulations should be first summarized and started with the same core formula as (4). In this way, it will become clearer where eq. (6) comes from and what it is used for. Currently, this part is confusing.\n2. Many substitutions of variables take place without detailed explanation, e.g., y_{<t} with s_t, a with x_{a} in (11) etc. Could you explain before making these substitutions?\n3. As mentioned, the PAM actually computes hard attentions. It would be better to make the statement clearer by explicitly explaining how eq. (11) resembles hard attention computation.\n\nQA:\n1. In the equation above (3) that computes prior(a_t), can you explain how P(a_{t-1}|y_{<t}) approximates P(a_{<t}|y_{<t})? What's the assumption?\n2. How is eq. (5) computed using first order Taylor expansion? How to make Postr inside the probability? And where does x_a' come from?\n3. Transferring from P(y) on top of page 3 to eq. (11), how do you substitute y_{<t}, a_t with s_t, x_j? Is there a typo for x_j?\n4. Can you explain how the baseline Prior-Joint is constructed? Specifically, how to compute the prior using soft attention without postr?" ]
[ -1, -1, -1, 9, -1, -1, -1, -1, -1, -1, -1, -1, 8, 7 ]
[ -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4 ]
[ "H1lZgtGmyN", "B1l0ox750X", "HklfYhUe2m", "iclr_2019_BkltNhC9FX", "B1xBIw_93X", "HyezVpjwpm", "ByebxyX9C7", "BJe8NreK2m", "SylcXSSt6m", "iclr_2019_BkltNhC9FX", "rJlU-8qJTX", "iclr_2019_BkltNhC9FX", "iclr_2019_BkltNhC9FX", "iclr_2019_BkltNhC9FX" ]
iclr_2019_Bkx0RjA9tX
Generative Question Answering: Learning to Answer the Whole Question
Discriminative question answering models can overfit to superficial biases in datasets, because their loss function saturates when any clue makes the answer likely. We introduce generative models of the joint distribution of questions and answers, which are trained to explain the whole question, not just to answer it. Our question answering (QA) model is implemented by learning a prior over answers, and a conditional language model to generate the question given the answer—allowing scalable and interpretable many-hop reasoning as the question is generated word-by-word. Our model achieves competitive performance with specialised discriminative models on the SQUAD and CLEVR benchmarks, indicating that it is a more general architecture for language understanding and reasoning than previous work. The model greatly improves generalisation both from biased training data and to adversarial testing data, achieving a new state-of-the-art on ADVERSARIAL SQUAD. We will release our code.
accepted-poster-papers
All reviewers recommend accept. Discussion can be consulted below.
train
[ "ByeMpnB_CQ", "rJg7OxYc3X", "HJe5i_MDRX", "rkehP_GP0m", "SklmEufv0Q", "rJxuRDGPCQ", "rkllvsQsTX", "HyxAZ3Q82m", "ryeIrWQch7", "H1xkn3XCjm" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author" ]
[ "When I saw this description, I thought you were comparing against Clark and Gardner 2018 (https://arxiv.org/abs/1710.10723; DocQA). I hadn't seen Weaver before, and I was surprised there there hasn't been a comparison between Weaver and DocQA (so I'm not actually sure which is better). DocQA only requires training with two paragraphs at a time, not the full document, so the argument about scalable training rings a bit hollow (it's a constant factor, not dependent on document length). It'd be best to compare against that work also if you want to make claims about multi-paragraph performance, or anything really on TriviaQA (looks like DocQA has also long since been beaten).", "This paper proposes a generative approach to textual QA on SQUAD and visual QA on CLEVR dataset, where, a joint distribution over the question and answer space, given the context (image or Wikipedia paragraphs) is learned (p(q,a|c)). During inference the answer is selected by argmax p(q,a|c) that is equal to p(a|c,q) if the question is given. Authors propose an architecture shown in Fig. 3 of the paper, where generation of each question word is condition on the corresponding answer, context and all the previous words generated in the question so far. The results compared to discriminative models are worse on SQUAD and CLEVR. Nevertheless, authors show that given the nature of the model that captures more complex relationships, the proposed model performs better than other models on a subset of SQUAD that they have created based on answer type (number/date/people), and also on adversarial SQUAD. \n\nComments / questions:\n\nThe paper is well written, except for a few parts mentioned below, all the equations / components are explained clearly. The motivation of the paper is clearly stated as using generative modelling in (V)QA to overcome biases in these systems, e.g., answering questions by just using word matching and ignoring the context (context=image or Wikipedia paragraph). I have the following questions / comments about the paper which addressing them by authors will help to better understand/evaluate the paper:\n1.\tIn page 3 on the top of section 2.3, can authors provide a more clear explanation of the additional 32-dimensional embedding added to each word representation? Also in Table 2, please add an ablation how much gain are you getting from this?\n2.\tIn the same page (page 3), section 2.4, paragraph 2, put the equation in a separate line and number it + clearly explain how you have calculated s^{endpoints} and s{length}.\n3.\tIn page 4 section 2.5.2 paragraph 2, the way the bias term is calculated and the incentive behind it is not clear. Can authors elaborate on this?\n4.\tIn page 6 section 3.2 the first paragraph authors claim that their model is performing multihop reasoning on CLEVR, while there is no explicit component in their model to perform multiple rounds of reasoning. Can authors clarify their statement? \n5.\tIn section 3.3 the third paragraph, where authors explain the question agnostic baselines, can they clarify what they mean by “the first answer of the correct type”? \n6.\tIn Table 5 and section 3.4 the second paragraph, authors are stating that “… The improvement may be due to the model’s attempt to explain all question words, some of which may be unlikely under the distractor”. It is very important that the authors do a complete ablation study similar to that of Table 2 to clarify how much gain is achieved using each component of generative model. 
\n7.\tIn page 8 under related works: \na.\tIn paragraph 2 where authors state “Duan et al. (2017) and Tang et al. (2017) train answering and generation models with separate parameters, but add a regularisation term that encourages the models to be consistent. They focus on answer sentence selection, so performance cannot easily be compared with our work.”. I do not agree that the performance can not be compared, it is easily comparable by labeling a sentence containing the answer interval as the answer sentence. Can authors provide comparison of their work with that of Duan et al. (2017) and Tang et al. (2017)?\nb.\tIn the same paragraph as 7.a, the authors have briefly mentioned “Echihabi & Marcu (2003) describe an earlier method for answering questions in terms of the distribution of questions given answers.” Can they provide a more clear explanation of this work and its relation to / difference with their work? \n\n////////////\nI would like to thank the authors for providing detailed answers to my questions. After reading their feedback, I am now willing to change my score to accept. ", "Thanks for the helpful comments and feedback, which will let us improve the final version.\n\n> - Section 2.4: what happens when there are multiple QA pairs per paragraph or image? Are you just getting conflicting gradients at different batches, so you'll end up somewhere in the middle of the two answers? Could you do better here?\n\nWe don't think there's a problem here: each QA pair can be viewed as a sample from the space of QAs that can be asked for that context, and the model will learn to capture this distribution.\n\n> - Section 2.6: The equation you're optimizing there reduces to -log p(a|q,c), which is exactly the loss function used by typical models. You should note that here. It's a little surprising (and interesting) that training on this loss function does so poorly compared to the generative training. This is because of how you've factorized the distributions, so the model isn't as strong a discriminator as it could be, yes?\n\nWe believe the issue here is that, even though the loss function is mathematically equivalent, our factorization requires us to learn a conditional language model, and the discriminative loss function does not provide enough learning signal to train such a model.\n\n> - Section 3.1 (and section 2.6): Can you back up your claim of \"modeling more complex dependencies\" in the generative case? Is that really what's going on? How can we know? What does \"modeling more complex dependencies\" even mean? I don't think these statements really add anything currently, as they are largely vacuous without some more description and analysis.\n\nThanks for the feedback. The intuition behind this statement is that a generative model has to learn more information connecting the question and document, because generating a question is more difficult than answering it. We agree that it is unclear as currently written, and will rephrase or remove it.\n", "Thanks for the review and constructive comments! \n\nThe main concern is that our results on SQuAD are beneath the current state of the art. Our generative approach means that our architecture is very different from existing models - we can't simply change the loss function for BiDAF, for example. These discriminative architectures have been carefully iterated on by a large community for several years. 
The fact that we are within a few points of it with the first generative approach is encouraging, and it seems reasonable that significant improvements would be possible with more development. \n\nWe also emphasize that our model outperforms all discriminative models on adversarial SQuAD, demonstrating that it has learnt something more robust. We also show that our architecture can perform multi-hop reasoning, which has not been shown for any other strong SQuAD model.\n\n\n> In Table 1, it should be clear if the authors could categorize those models into with/without ELMo for easy comparison. Furthermore, it is unclear how the authors select those baselines since there are many results on the SQuAD leaderboard. For example, there are many published systems that outperform, e.g., RaSOR. \n\nTo keep the results table to a manageable size, we included a representative sample of existing approaches. \n\n> During inference, generating answer candidates should be important. How does the number of candidates affect the results and the inference time? \nThe inference time grows linearly in the number of answer candidates. We found that the beam starts to saturate at about 100 answers, covering about 99% of correct answers. \n\nModel \t\t \t EM / F1 Inf Speed for Valid (sec)\nGQA, 250 answer candidates 76.8 / 83.7\t\t\t 617.69\n200 answer candidates\t\t 76.6 / 83.4\t\t 535.35\n100 answer candidates\t\t 76.2 / 83.1\t\t\t 359.67\n50 answer candidates\t\t\t 74.6 / 81.4\t\t\t 262.42\n10 answer candidates\t\t\t 55.7 / 61.4\t\t\t 200.91 \n\n\n> In the SQuAD dataset, answers often contain one or two tokens/words. What is the performance if the answer-length feature is removed?\n\nModel \t\t Generative (EM) +Fine Tuning (EM / F1)\nGQA \t\t\t 72.3 76.8 / 83.7\nNo answer length feature 72.0\t\t\t 73.8 / 80.1\n\nIt is quite interesting that the length feature is particularly helpful for fine-tuning. During generative training, the question generation model is mostly exposed to short answers, because it is only shown the gold answer. However, at test time, it mostly sees very long answers, because most possible answers are long, and its performance may be weak on these. The answer length feature makes it easy for fine-tuning to compensate for this imbalance. We will update the paper with this ablation.\n\n", "Thanks for the review and detailed feedback, which we'll be happy to address in the final submission. Answers to questions are beneath.\n\n> In page 3 on the top of section 2.3, can authors provide a more clear explanation of the additional 32-dimensional embedding added to each word representation? Also in Table 2, please add an ablation how much gain are you getting from this?\n\nThanks, we have expanded the explanation, and an ablation is beneath. Including this embedding makes it easier for the model to learn the relationship between the document word and answer. However, it does significantly increase the computational cost of inference with the model, because we have to compute a separate contextualized document representation for each candidate answer.\n\nModel \t\t Generative (EM) Fine Tuning (EM / F1)\nGQA \t\t\t 72.3 76.8 / 83.7\nNo 32 dim embedding 66.9\t\t\t\t 69.2 / 75.9\n\n\n>\tIn page 6 section 3.2 the first paragraph authors claim that their model is performing multihop reasoning on CLEVR, while there is no explicit component in their model to perform multiple rounds of reasoning. Can authors clarify their statement? 
\nThe model can perform multiple rounds of reasoning as a by-product of explaining the question word-by-word. On CLEVR, the model must explain each question word in turn, allowing it to track the relevant objects in complex chains of reasoning (see Figure 4).\n\n>\tIn page 4 section 2.5.2 paragraph 2, the way the bias term is calculated and the incentive behind it is not clear. Can authors elaborate on this?\nThe motivation is that it can allow the model to quickly focus its attention on relevant parts of the paragraph (which is typically several hundred words). We found that it improved convergence by pruning out irrelevant parts of the input.\n\n> \tIn section 3.3 the third paragraph, where authors explain the question agnostic baselines, can they clarify what they mean by “the first answer of the correct type”? \nHere, we simply mean that e.g. for a question whose answer is a person, we return the first person in the evidence paragraph. We will clarify this in the paper.\n\n>\tIn Table 5 and section 3.4 the second paragraph, authors are stating that “… The improvement may be due to the model’s attempt to explain all question words, some of which may be unlikely under the distractor”. It is very important that the authors do a complete ablation study similar to that of Table 2 to clarify how much gain is achieved using each component of generative model. \n\nWe chose not to present ablations for adversarial SQuAD in the submission, because there is no validation data, so we only performed a single run on the data with our best model (selected on the standard SQuAD data). The fact that our approach performs better than models that outperform it on SQuAD is strong evidence that it has learnt something more robust from the same training data. Please let us know if you still think including these ablations would be helpful.\n\n> a.\tIn paragraph 2 where authors state “Duan et al. (2017) and Tang et al. (2017) train answering and generation models with separate parameters, but add a regularisation term that encourages the models to be consistent. They focus on answer sentence selection, so performance cannot easily be compared with our work.”. I do not agree that the performance can not be compared, it is easily comparable by labeling a sentence containing the answer interval as the answer sentence. Can authors provide comparison of their work with that of Duan et al. (2017) and Tang et al. (2017)?\nThere isn't actually enough detail in these papers to replicate their non-standard experimental setup---given that there are only a few sentences in each paragraph, a lot would depend on how exactly they segmented the input into sentences. However, the reported accuracies are only a few percentage points higher than our approach on this much easier task, so it seems unlikely that their results would be competitive. \n\n> b.\tIn the same paragraph as 7.a, the authors have briefly mentioned “Echihabi & Marcu (2003) describe an earlier method for answering questions in terms of the distribution of questions given answers.” Can they provide a more clear explanation of this work and its relation to / difference with their work? \nEchihabi & Marcu train a model for p(q|a,c) using a rather complex combination of heuristics and classical machine translation methods, and return the answer maximizing this distribution. A conceptual difference is that our approach models p(q,a|c), and to our knowledge is the first generative question answering model. 
Beyond the fact that they use a model of p(q|a) for question answering, there isn't much overlap in terms of motivation or techniques.\n", "Thanks to all the reviewers, we are happy that they all found the ideas in the paper to be interesting.\n\nWe’ve added one additional experiment beyond what was requested (Section 3.5). We explore question answering when the answer can be contained in one of many paragraphs. This task is computationally expensive to train properly with discriminative models, because at training time you would ideally want to discriminate against all the negative answers from all paragraphs. In our approach, most of the work is done by the model of p(q|a,c), which only depends on the paragraph c containing the gold answer a. That means we can train the model using single paragraphs, but test it on multiple paragraphs.\n\nWe outperform the best previous work that was trained on single paragraphs (as ours was) by almost 10 F1, and the best approaches trained on multiple paragraphs by 2.5 F1. This experiment highlights a further advantage of generative question answering.\n", "Just starting a conversation with other reviewers. I feel pretty strongly that this paper should be accepted. We should not be fixating on leaderboard performance numbers and blackbox comparisons. Science is much more broad than \"who has the best experimental result\". The presented method in the paper works well, it's a very interesting, novel idea, and the paper is well written.", "This paper introduces a generative model for question answering. Instead of modeling p(a|q,c), the authors propose to model p(q,a|c), factorized as p(a|c) * p(q|a,c). This is a great idea, it was executed very well, and the paper is very well written. I'm glad to see this idea implemented and working. \n \nReactions: \n- Section 2.1: Is there a bias problem here, where you're only ever training with the correct answer? Oh, I see you covered that in section 2.6. Great.\n- Section 2.4: what happens when there are multiple QA pairs per paragraph or image? Are you just getting conflicting gradients at different batches, so you'll end up somewhere in the middle of the two answers? Could you do better here?\n- Section 2.6: The equation you're optimizing there reduces to -log p(a|q,c), which is exactly the loss function used by typical models. You should note that here. It's a little surprising (and interesting) that training on this loss function does so poorly compared to the generative training. This is because of how you've factorized the distributions, so the model isn't as strong a discriminator as it could be, yes?\n- Section 3.1 (and section 2.6): Can you back up your claim of \"modeling more complex dependencies\" in the generative case? Is that really what's going on? How can we know? What does \"modeling more complex dependencies\" even mean? I don't think these statements really add anything currently, as they are largely vacuous without some more description and analysis.\n- Section 3.3: Your goal here seems similar to the goal of Clark and Gardner (2018), trying to correctly calibrate confidence scores in the face of SQuAD-like data, and similar to the goals of adding unanswerable questions in SQuAD 2.0. I know that what you're doing isn't directly comparable to either of those, but some discussion of the options here for addressing this bias, and whether your approach is better, could be interesting.\n \nClarity issues: \n- Bottom of page 2, \"sum with a vector of size d\" - it's not clear to me what this means. 
\n- Top of page 3, \"Answer Encoder\", something is off with the sentence \"For each word representation\" \n- Section 2.5, \"we first embed words independently of the question\" - did you mean \"of the _context_\"?\n- Section 2.5.2 - it's not clear to me how that particular bias mechanism \"allows the model to easily filter out parts of the context which are irrelevant to the question\". The bias mechanism is independent of the question.\n- Section 2.7 - when you said \"beam search\", I was expecting a beam over the question words, or something. I suppose a two-step beam search is still a beam search, it just conjured the wrong image for me, and I wonder if there's another way you can describe it that better evokes what you're actually doing.\n- Section 3.1 - \"and are results...\" - missing \"competitive with\"? \n- Last sentence: \"we believe their is\" -> \"we believe there is\" ", "In this paper, authors proposed a generative QA model, which jointly optimizes the distribution of questions and answers given a document/context. More specifically, it is decomposed into two components: the distribution of answers given a document, which is modeled by a single layer neural network; and the distribution of questions given an answer and document, which is modeled by a seq2seq model with a copy mechanism. During inference, it first extracts the most likely answer candidates, then evaluates the questions conditioned on the answer candidates and document and finally returns the answer with the max joint score from the two aforementioned components.\n\n\nPros: \nThe paper is well written and easy to follow. \n\nThe ideas are also very interesting. \n\nIt gives a good ablation study and shows the importance of each component in the proposed model.\n\n\nCons:\nThe empirical results are not good. For example, on the SQuAD dataset, since the proposed model also used ELMo (the large pre-trained contextualized embedding), cross attentions and self-attentions, it should be close to or better than the baseline BiDAF + Self Attention + ELMo. However, the proposed model is significantly worse than the baseline (83.7 vs 85.6 in terms of F1 score). From my experience with the baseline BiDAF + Self Attention + ELMo, it obtains 1 more point of gain if you fine-tune the models. On the CLEVR dataset, I agree that incorporating MAC cells will help the performance.\n\nIn Table 1, it should be clear if the authors could categorize those models into with/without ELMo for easy comparison. Furthermore, it is unclear how the authors select those baselines since there are many results on the SQuAD leaderboard. For example, there are many published systems that outperformed, e.g., RaSOR. \n\nQuestions:\nDuring inference, generating answer candidates should be important. How does the number of candidates affect the results and the inference time? \n\nIn the SQuAD dataset, answers often contain one or two tokens/words. What is the performance if the answer length feature is removed?\n\nDuring the fine-tuning step, have you tried other numbers of candidates? ", "Our SQuAD test results were missing from the submission because of a technical problem with the evaluation server. Our results are now available as 77.090 (Exact Match) 83.931 (F1). This model was submitted before the ICLR deadline." ]
[ -1, 7, -1, -1, -1, -1, -1, 8, 6, -1 ]
[ -1, 4, -1, -1, -1, -1, -1, 4, 4, -1 ]
[ "rJxuRDGPCQ", "iclr_2019_Bkx0RjA9tX", "HyxAZ3Q82m", "ryeIrWQch7", "rJg7OxYc3X", "iclr_2019_Bkx0RjA9tX", "iclr_2019_Bkx0RjA9tX", "iclr_2019_Bkx0RjA9tX", "iclr_2019_Bkx0RjA9tX", "iclr_2019_Bkx0RjA9tX" ]
iclr_2019_BkxWJnC9tX
Diversity and Depth in Per-Example Routing Models
Routing models, a form of conditional computation where examples are routed through a subset of components in a larger network, have shown promising results in recent works. Surprisingly, routing models to date have lacked important properties, such as architectural diversity and large numbers of routing decisions. Both architectural diversity and routing depth can increase the representational power of a routing network. In this work, we address both of these deficiencies. We discuss the significance of architectural diversity in routing models, and explain the tradeoffs between capacity and optimization when increasing routing depth. In our experiments, we find that adding architectural diversity to routing models significantly improves performance, cutting the error rates of a strong baseline by 35% on an Omniglot setup. However, when scaling up routing depth, we find that modern routing techniques struggle with optimization. We conclude by discussing both the positive and negative results, and suggest directions for future research.
accepted-poster-papers
pros:
- good, clear writing
- interesting analysis
- very important research area
- nice results on multi-task omniglot

cons:
- somewhat limited experimental evaluation

The reviewers I think all agree that the work is interesting and the paper well-written. I think there is still a need for more thorough experiments (which it sounds like the authors are undertaking). I recommend acceptance.
train
[ "ryxf_zfv6X", "rkegxvXQ14", "S1eyvl60C7", "SJeR4arqRQ", "Byg9-aB50Q", "B1exyaBqAX", "rygVK_xR2m", "S1lhnTr5hX" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper \"Diversity and Depth in Per-Example Routing Models\" extends previous work on routing networks by adding diversity to the type of architectural unit available for the router at each decision and by scaling to deeper networks. They evaluate their approach on Omniglot, where they achieve state of the art performance.\n\nOverall, the paper is very well written and every aspect can be easily understood. The overview over related work given in the paper is thorough, and the authors explain very well how their approach relates to previous approaches.\n\nThe architecture presented is a natural and important extension of previous work. Adding diversity in routing units has indeed not been investigated well and is an important contribution to the community. Additionally, the authors do a good job of identifying problems with existing approaches (overfitting, routing depth) and offer a empirically convincing solutions. \n\nThe result section given in the paper is its weakness and requires a more in-depth analysis:\n+ the results given for Omniglot are impressive\n+ the experiments analyzing the impact of diversity and routing depth are interesting and offer interesting insight into the architecture\n- the results do not show learning behavior over epochs; this is not necessary, but would give an additional insight into the learning behavior of the architecture\n- the experimental settings are confusing: why are the different experiments performed with different datasets? This makes it seem as if the authors cherry-picked the best results for the different experiments (this might not be the case, but the results on Omniglot alone are good enough that negative results and a detailed discussion of them would not have hurt the paper, but enriched the discourse)\n- additional experiments that offer a transition from larger datasets to smaller ones would be interesting; seeing how the performance of the architecture behaves e.g. on CIFAR10 for 1k, 5k, 10k, 25k and 50k would have illustrated how well the architecture is able to generalize from different numbers of samples\n\nIn summary, I think the paper analyzes a very important problem and has a lot of potential. However, it needs more extensive experiments that illustrate how the proposed architecture behaves over a wider variety of datasets.\n\nUPDATE AFTER REBUTTAL:\nI am still torn about this paper. On one hand, I still think that the topic and discourse provided by this paper is extremely important. On the other, the results - even after the revision - do not completely convince me. I might update my score after some discussion with the other reviewers.\n2ND UPDATE:\nAfter giving it some more thought, I find myself convinced that this paper has a contribution important enough to be accepted. I increase my score to 7.", "First of all, thank you for following up!\n\n\n>>> re: overfitting\n\nIndeed, we are also quite surprised that these models can perform so well on low resource tasks. However, this phenomena is not unique to our model and actually reflects an observation made by numerous researchers in our community. Namely, massively overparameterized models seem to perform better than models with fewer parameters, even if the number of examples in the dataset is orders of magnitude smaller than parameters. The most recent example of this phenomena is Huang et al. (2018) [1], who train a model with 557M parameters and achieve a new SOTA (for randomly initialized models) on Imagenet, which has 1.28M images. 
When they finetune on CIFAR-10, which has 50K images, they also set a new SOTA at 1.0% error rate. Analyzing this phenomenon is an active area of research right now (for example, see Arora et al. (2018) [2]), and we too are looking forward to a better understanding of this behavior.\n\n\n>>> re: more datasets\n\nWe added a new revision during the revision period that analyzed 4 more datasets and a new type of model. We focused on providing fair comparisons between models (e.g., matching parameter count, hyperparameter tuning models the same amount). Please take a look at Section 4.2.2 in our updated paper.\n\nAs a more general point, we believe that routing models are still in the \"ugly duckling\" phase, where there are pockets of interesting results, but nothing yet has been truly convincing. We draw an analogy to deep learning before Alexnet. There were some interesting results (such as in phoneme recognition), but most researchers in the community at that time did not anticipate that deep learning would change the field so much. It took a culmination of many different threads of research to hit the breakthrough: larger datasets, better computation, and a series of small but important changes to neural networks, such as ReLU and better initialization.\n\nRouting networks are still in the phase before the threads come together. For example, one current challenge is finding a problem setting where routing models have a distinct advantage over standard neural networks. This best problem setting may be a low latency embedded setting [3], or it may be a setting where one wants to train the largest model possible [4]. Different researchers have been exploring these various problem settings (the analogy is constructing a large dataset, Imagenet, which let neural networks shine). Another important but orthogonal direction is improving routing models themselves (the analogy is ReLU and better initialization). Our work falls in this area of research, where we explore diversity and also question if current techniques are sufficient for achieving scale. The third line of research, analogous to computation, is the recent push in machine learning systems like Tensorflow for better support of model parallelism and sparse computation in general. Thus, despite there being a lack of game-changing results achieved by routing models, we believe that if the community continues to push along these directions, the threads may come together and we may find a big success down the line. \n\n\nReferences\n------------------\n\n[1] Huang, Y., Cheng, Y., Chen, D., Lee, H., Ngiam, J., Le, Q. V., & Chen, Z. (2018). GPipe: Efficient Training of Giant Neural Networks using Pipeline Parallelism. arXiv preprint arXiv:1811.06965.\n[2] Arora, S., Cohen, N., & Hazan, E. (2018). On the optimization of deep networks: Implicit acceleration by overparameterization. arXiv preprint arXiv:1802.06509.\n[3] Teerapittayanon, S., McDanel, B., & Kung, H. T. (2016, December). Branchynet: Fast inference via early exiting from deep neural networks. In Pattern Recognition (ICPR), 2016 23rd International Conference on (pp. 2464-2469). IEEE.\n[4] Shazeer, N., Mirhoseini, A., Maziarz, K., Davis, A., Le, Q., Hinton, G., & Dean, J. (2017). Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. arXiv preprint arXiv:1701.06538.", "@Reviewer 1: I disagree with the statement that the omniglot results do not seem fair because of parameter size. 
The point here is that parameter count is not particularly important for a problem with as few training samples as Omniglot. On the contrary - large models can overfit more easily and consequently generalize less. However, I would be interested to get the authors' sense of why this does not happen here ...?\n\n@Reviewer 3: Unfortunately, I very much share Reviewer 3's sentiments. If the paper contained a more thorough experimental analysis on a wider range of datasets, I think that a very high acceptance score would have been adequate. Without, I am not sure as to how the paper scales to different problems.\n\n( I originally thought I could comment on other reviews directly, hence this format. I apologize if it causes any confusion.)", "Thank you for the comments.\n\n\n>>> re: learning behavior over epochs\n\nWe have found that the training behavior is surprisingly unremarkable. We had initially hypothesized that there would be nontrivial steps in the loss curve when the model learns to correctly route a whole class of examples, but in practice, the loss curve is smooth. This finding is especially interesting given that the routing depth experiments demonstrate that there are optimization difficulties. One interpretation may be that the loss landscape is smooth but increasing the routing depth adds many more suboptimal minima / saddle points that the optimization procedure can get stuck in.\n\n\n>>> re: “why are the different experiments performed with different datasets”\n\nWe want to clarify that we did not cherrypick experiments. Our very first experiments on diversity were on Omniglot due to the strong architecture search baseline of Liang et al. (2018). We then chose CIFAR-10 for the depth experiments because it is a standard dataset where the community has determined that increased depth and model size consistently improves results. Thus, we can clearly identify if optimization is problematic. It’s unclear if this property holds for Omniglot since overfitting of large models may be a factor in poor performance, making disentangling the effects of optimization difficult.\n\n\n>>> re: additional experiments that offer a transition from larger datasets to smaller ones\n\nWe have run experiments with a different style of model based on ResNet on an additional 4 datasets with an eye towards fair parameter count and hyperparameter tuning. The 4 datasets span a range of sizes. Please see Section 4.2.2 for more details and analysis. \n", "Thank you for the comments.\n\n\n>>> re: “OmniGlot comparisons seem not fair as the model capacity is not added as part of the table which raises concerns on achieving state of the art with more high complexity models than routing mechanism.”\n\nLiang et al. (2018), who hold the previous state-of-the-art, state that their best models have 3M parameters. Our routing model has 1.8M parameters. Thus, a higher parameter count cannot explain the difference. Furthermore, the architecture search method of Liang et al. (2018) has the ability to significantly modify the parameter count, so a lower parameter count also cannot explain the difference since small models were available in the search space. We have clarified this point in the text. \n\n\n>>> re: additional experiments on more datasets\n\nWe have run experiments with a different style of model based on ResNet on an additional 4 datasets with an eye towards fair parameter count and hyperparameter tuning. Please see Section 4.2.2 for more details and analysis. 
\n", "Thank you for the comments.\n\n\n>>> re: additional experiments about architectural diversity\n\nWe have run experiments with a different style of model based on ResNet on an additional 4 datasets with an eye towards fair parameter count and hyperparameter tuning. Please see Section 4.2.2 for more details and analysis. \n\n\n>>> re: additional experiments about depth\n\nWe are currently in the middle of running additional experiments comparing depth, and will update the paper once these experiments complete. The setup compares ResNet-50 and ResNet-101 models with routing. Preliminary results in this setup also suggest that increased routing depth makes models harder to train. For example, on the Food101 dataset with 75K examples, the ResNet-101 routing model achieves around a 5% worse accuracy than the ResNet-50 routing model (both models have been hyperparameter tuned the same amount). This result cannot be explained by overfitting, because Food101 has more examples than CIFAR-10, where larger models with sufficient regularization outperform smaller models. We use aggressive Imagenet-style data augmentation which should provide sufficient regularization. The accuracy drop of deeper routing models is so far consistent across all the datasets we have evaluated.\n\n\n>>> re: “the fundamental limitations of all routing models in this regard”\n\nWe want to clarify that we do not believe routing models are fundamentally limited with respect to depth. Our assertion is that current routing methods do not perform well when depth is scaled up. In order for routing models to succeed, this flaw must be fixed. We draw an analogy to sequence models such as RNNs. For many years, training sequence models with long sequence lengths was impossible. However, as a consequence of numerous recent advances, sequence models can now scale to impressive sequence lengths (e.g., [1] successfully trains with a sequence length of 11000). In the same vein, we hope our analysis of depth in our work will spur discoveries of new routing techniques that will overcome the depth scaling problem.\n\n[1] Liu, Peter J, Saleh, Mohammad, Pot, Etienne, Goodrich, Ben, Sepassi, Ryan, Kaiser, Lukasz & Shazeer, Noam. “Generating wikipedia by summarizing long sequences”. ICLR 2018.", "Overall, this is a valuable read. Authors tackle the head on problem of what is a good architecture where we can having routing with diverse models. The papers is written well, with comparisons to mixture of experts, other models that tackled this problem with either homogeneous architectures or static architectures. 
Below is my assessment on various axis:\n\nQuality - Enough experiments to justify some conclusions, equations helped ground the method with math.\n\nclarity - Very well written, good figures and analysis.\n\noriginality - While the authors achieve SOA results on OmniGlot and do explore a few options, I feel the work still lacks originality in the formulation or does not have original contributions to either the architectures used or the optimization procedures employed.\n\nsignificance - very significant to look at this problem both in terms of compute, accuracy perspective as well as scaling these networks for multiple tasks.\n\npros - thorough analysis, even the negative experiments are well written and throw more light into the problem space.\n\ncons - OmniGlot comparisons seem not fair as the model capacity is not added as part of the table which raises concerns on achieving state of the art with more high complexity models than routing mechanism. Will be great to move from CIFAR-10 and test things on CIFAR-100 to really see the value of proposed work. I would recommend a higher rating if authors address these two concerns.", "The major contribution of this work is extending routing networks (Rosenbaum et al., ICLR 2018) to use diverse architectures across routed modules. I view this as an important contribution and am very impressed by the experiment on Omniglot where it shows big performance gain on a split with very few examples. This idea of incorporating in architectural bias and not just parameter bias for small data problems is very compelling and intuitive to me on the surface. The ablation study was also very interesting in this regard. I really like the discourse and found it to be filled with interesting insights throughout ranging from the connection between routing networks and neural architecture search to the heuristic for selecting k. However, after the great discourse, I was quite disappointed by the breadth of the experiments. \n\nThe paper is positioned as exploring two parallel ideas that are independently interesting 1) diversity in the architecture of modules in routing models 2) the effect of increasing depth in routing models. For the first idea, this is shown very well by the Omniglot experiment but is not evaluated in any other setting. Showing this in a few other experiments would have really driven this point home in my opinion. The second idea is not really executed in a convincing way to me. The authors call it a ‘negative result’ in the end, but I’m not sure I really feel like I learned anything from this experiment. I wonder about statistical significance. I also feel like the authors are trying to turn it into a commentary that this is a pain point for all variants of routing models while they only actually tried it for their proposed architecture which makes quite a few decisions along the way. I would have liked to see more model variants and datasets before really feeling like I can make any empirical determinations about the fundamental limitations of all routing models in this regard. Additionally, if there were such a fundamental scaling limitation, you would imagine that an experiment could be constructed that really highlighted this fact where all routing models do way worse.\n\nIn short, I think there are some really good idea in this paper and vote for acceptance on that basis. Had the authors provided more empirical evidence about architectural diversity, I would have given it a very high score. 
The analysis of depth is also a very interesting topic, but it could possibly even serve as another paper considering that the current results don’t really come to concrete conclusions for the community. \n" ]
[ 7, -1, -1, -1, -1, -1, 6, 6 ]
[ 5, -1, -1, -1, -1, -1, 4, 5 ]
[ "iclr_2019_BkxWJnC9tX", "S1eyvl60C7", "SJeR4arqRQ", "ryxf_zfv6X", "rygVK_xR2m", "S1lhnTr5hX", "iclr_2019_BkxWJnC9tX", "iclr_2019_BkxWJnC9tX" ]
iclr_2019_Bkxbrn0cYX
Selfless Sequential Learning
Sequential learning, also called lifelong learning, studies the problem of learning tasks in a sequence with access restricted to only the data of the current task. In this paper we look at a scenario with fixed model capacity, and postulate that the learning process should not be selfish, i.e. it should account for future tasks to be added and thus leave enough capacity for them. To achieve Selfless Sequential Learning we study different regularization strategies and activation functions. We find that imposing sparsity at the level of the representation (i.e. neuron activations) is more beneficial for sequential learning than encouraging parameter sparsity. In particular, we propose a novel regularizer that encourages representation sparsity by means of neural inhibition. It results in few active neurons which in turn leaves more free neurons to be utilized by upcoming tasks. As neural inhibition over an entire layer can be too drastic, especially for complex tasks requiring strong representations, our regularizer only inhibits other neurons in a local neighbourhood, inspired by lateral inhibition processes in the brain. We combine our novel regularizer with state-of-the-art lifelong learning methods that penalize changes to important previously learned parts of the network. We show that our new regularizer leads to increased sparsity which translates into consistent performance improvements on diverse datasets.
accepted-poster-papers
Two of the reviewers raised their scores during the discussion phase noting that the revised version was clearer and addressed some of their concerns. As a result, all the reviewers ultimately recommended acceptance. They particularly enjoyed the insights that the authors shared from their experiments and appreciated that the experiments were quite thorough. All the reviewers mentioned that the work seemed somewhat incremental, but given the results, insights and empirical evaluation decided that it would still be a valuable contribution to the conference. One reviewer added feedback about how to improve the writing and clarity of the paper for the camera ready version.
train
[ "rJeJwe0d3m", "Bkgrm1dmkV", "SJeKGqFth7", "SJgEC5nlkV", "SygxmvXCAQ", "rJeWYmDAnm", "Hkl7uQV9C7", "rke9cE2NAX", "B1xX3QhE0m", "SklPsoiEAm" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author" ]
[ "[REVISION]\nThe work is thorough and some of my minor concerns have been addressed, so I am increasing my score to 6. I cannot go beyond because of the incremental nature of the work, and the very limited applicability of the used continual learning setup from this paper.\n\n[OLD REVIEW]\nThe paper proposes a novel, regularization based, approach to the sequential learning problem using a fixed size model. The main idea is to add extra terms to the loss encouraging representation sparsity and combating catastrophic forgetting. The approach fairs well compared to other regularization based approaches on MNIST and CIFAR-100 sequential learning variants.\n\nPros:\nThorough experiments, competitive baselines and informative ablation study.\nGood performance on par or superior to baselines.\nClear paper, well written.\n\nCons:\nThe approach, while competitive in performance, does not seem to fix any significant issues with baseline methods. For example, task boundaries are still used, which limits applicability; in many scenarios which do have a continual learning problem there are no clear task boundaries, such as data distribution drift in both supervised and reinforcement learning.\nSince models used in the work are very different from SOTA models on those particular tasks, it is hard to determine from the paper how the proposed method influences these models. In particular, it is not clear whether these changes to the loss would still allow top performance on regular classification tasks, e.g. CIFAR-10 or MNIST even without sequential learning, or in multitask learning settings. \n\nSummary:\nAlthough the work is substantial and experiments are thorough, I have reservations about extrapolating from the results to settings which do have a continual learning problem. Although I am convinced results are slightly superior to baselines, and I appreciate the lengthy amount of work which went into proving that, the paper does not go sufficiently beyond previous work.\n", "Thank you for checking the figures, kindly, we would like to draw your attention to the neuron importance experiment we newly added in Table 3. Accounting for neuron importance played a crucial role in reducing interference and even when not using a LLL regularizer. ", "This paper deals with the problem of catastrophic forgetting in lifelong learning, which has recently attracted much attention from researchers. In particular, authors propose the regularized learning strategies where we are given a fixed network structure (without requiring additional memory increases in the event of new task arriving) in the sequential learning framework, without the access to datasets of previous tasks. Performance comparisons were performed experimentally against diverse regularization methods including ones based on representation, based on parameter itself, and the superiority of representation-based regularization techniques was verified experimentally. Based on this, authors propose a regularization scheme utilizing the correlation between hidden nodes called SNI and its local version based on Gaussian weighting. Both regularizers are even extended to consider the importance of hidden nodes. Through MNIST, CIFAR, and tiny Imagenet datasets, it has been experimentally demonstrated that the proposed regularization technique outperforms state-of-the-art in sequential learning.\n\nIt is easy to follow (and I enjoyed the way of showing their final method, starting from SNI to SLNI and importance weighting). 
Also it is interesting that authors obtained meaningful results on several datasets beating state-of-the-art methods based on very simple ideas.\n\nHowever, given Cogswell et al. (2015) or Xiong et al. (2016), it seems novelty is somewhat incremental (I could recognize that this work is different in the sense that it considers local/importance based weighting as well as penalizing correlation based on L1 norm). Moreover, there is a lack of reasoning about why representation based regularization is more effective for the life-long learning setting. Figure 1 is not that intuitive and it does not seem to clearly describe the reasons. \n\nMy biggest concern with the proposed regularization technique is the importance of neurons in equation (6). It is doubtful whether the importance of activation of neurons based on \"current data\" is sufficiently verified in sequential learning (in the experimental section, avg performance for importance weight sometimes appears to come with performance improvements but not always). It would be great if authors can show some actual overlaps of activations across tasks (not just simple histogram as in Figure 5). And isn't g_i(x_m) a scalar? Explain why we need the norm when you get alpha.\n\nIt would be nice to clarify what the task sequence looks like in Figure 2. It is hard to understand that task 5, which is the most recent learning task, has the lowest performance in all tasks.\n\n-----------------------------------------------------------------------------------------------------\n- On figure 4: I knew histograms are given in figure 4 (I said figure 5 mistakenly, but I meant figure 4). But showing overlap patterns across tasks (at different layers for instance) might be more informative. \n- On figure 2: It looks weird to me because last task has the lowest accuracy even for ReLU (sequential learning w/o regularization); tuning for task 5 will lead catastrophic forgetting for previous tasks, meaning acc for task 1 be the lowest?\n\n-----------------------------------------------------------------------------------------------------\n- My concerns about figures are solved; I want to thank authors for their efforts.", "Thank you so much, we will try to shorten and move the figures in a final version.", "To confirm the behavior of a network trained without an LLL method (MAS), we run the ReLU baseline without MAS on the sequence of 5 permuted mnist tasks and obtained the following accuracies at the end of the sequence:\nFinetuning (ReLU NoMAS) = [40.79, 49.16, 72.5, 86.56, 97.08]\nCompared to:\nReLU (with MAS) = [95.8, 93.66, 93.32, 89.95, 89.89]\nSLNID (Ours) = [96.46, 96.25, 95.86, 95.81, 94.77]\n\nAs the reviewer suggested, Finetuning (ReLU without MAS) has a better performance on the last task while forgetting severely the first task. However, as we mentioned earlier all our baselines in Figures 2 & 3 run with MAS as the LLL method.\n", "REVISION AFTER REBUTTAL\nWhile the revision does not address all of my concerns about clarity, it is much better. I still think that the introduction is overly long and the subsequent sections repeat information; if this were shortened there could be room for some of the figures that are currently in appendix. I appreciate the new figures; I think that it would be great if especially figure 10 were included in the main paper. 
\nI agree with the other two reviewers that the work is somewhat incremental, but the differences are well explained, the experimental results are interesting (particularly the differences of parameter vs representation-based sparsity, and the plots in appendix showing neuron importance over tasks), and the progression from SNI to SLNID is well-presented. I think overall that this paper is a good contribution and I recommend acceptance. I have updated my review to a 7. \n===============\n\"Activations\" \"Representation\" and \"Outputs\" are used somewhat interchangeably throughout the work; for anyone not familiar it might be worth mentioning something about this in the intro.\n \nProblem setting is similar to open set learning (classification); could be worth mentioning algorithms for this in the related work which attempt to set aside capacity for later tasks.\n\nResults are presented and discussed in the introduction, and overall the intro is a bit long, resulting in parts later being repetitive.\n\nWorth discussing sparsity vs. distributed representations in the intro, and how/where we want sparsity while still having a distributed representation.\n\nShould be made clear that this is inspired by one kind of inhibition, and there are many others (i.e. inhibition in the brain is not always about penalizing neurons which are active at the same time, as far as I know)\n\nChanges in verb tense throughout the paper make it hard to follow sometimes. Be consistent about explaining equations before or after presenting them, and make sure all terms in the equation are defined (e.g. SNI with a hat is used before definition). Improper or useless \"However\" or \"On the other hand\" to start a lot of sentences.\n\nFigure captions could use a lot more experimental insight and explanation - e.g. what am I supposed to take away from Figure 10 (in appendix B4), other than that the importance seems pretty sparse? It looks to me like there is a lot of overlap in which neurons are important for which tasks, which seems like the opposite of what the regularizer was trying to achieve. This is a somewhat important point to me; I think this is interesting and I'm glad you show it, but it seems to contradict the aim of the regularizer.\n\nHow does multi-task joint training differ from \"normal\" classification? The accuracies especially for CIFAR seem very low.\n\nQuality: 7/10 interesting and thoughtful proposed regularizer and experiments; I would be happy to increase this rating if the insights from experiments, especially in the appendix, are a bit better explained\nClarity: 6/10 things are mostly clearly explained although frequently repetitive, making them seem more confusing than they are. If the paper is reorganized and the writing cleaned up I would be happy to increase my rating because I think the work is good. 
\nOriginality: 8/10 to my knowledge the proposed regularizer is novel, and I think identifying the approach of \"selfless\" sequential learning is valuable (although I don't like the name)\nSignificance: 7/10 I am biased because I'm interested in LLL, but I think these problems should receive more attention.\n\nPros:\n - proposed regularizer is well-explained and seems to work well, ablation study is helpful\n\nCons:\n - the intro section is almost completely repetitive of section 3 and could be significantly shortened, and make more room for some of the experimental results to be moved from the appendix to main text\n - some wording choices and wordiness make some sentences unclear, and overall the organization and writing could use some work\n\nSpecific comments / nits: (in reading order)\n1. I think the name \"selfless sequential learning\" is a bit misleading and sounds like something to do with multiagent cooperative RL; I think \"forethinking\" or something like that that is an actual word would be better, but I can't think of a good word... maybe frugal? \n2. Mention continual/lifelong learning in the abstract\n3. \"penalize changes\" maybe \"reduce changes\" would be better?\n4. \"in analogy to parameter importance\" cite and explain parameter importance\n5. \"advocate to focus on selfless SL\" focus what? For everyone doing continual learning to focus on methods which achieve that through leaving capacity for later tasks? This seems like one potentially good approach, but I can imagine other good ones (e.g. having a task model)\n6. LLL for lifelong learning is defined near the end of the intro, should be at the beginning when first mentioned\n7. \"lies at the heart of lifelong learning\" I would say it is an \"approach to lifelong learning\"\n8. \"fixed model capacity\" worth being specific that you mean (I assume) fixed architecture and number of parameters\n9. \"those parameters by new tasks\" cite this at the end of the sentence, otherwise it is unclear what explanation goes with which citation\n10. \"hard attention masks, and stored in an embedding\" unclear what is stored in the embedding. It would be more helpful to explain how this method relates to yours rather than just describing what they do.\n11. I find the hat notation unclear; I think it would be better just to have acronyms for each setting and write out the acronyms in the caption\n12. \"richer representation is needed and few active neurons can be tolerated\" should this be \"more active neurons\"?\n13. Comparison with state of the art section is repetitive of the results sections", "On figure 4: I knew histograms are given in figure 4 (I said figure 5 mistakenly, but I meant figure 4). But showing overlap patterns across tasks (at different layers for instance) might be more informative. \n\nIn Figures 8, 9 & 10, we have shown the important neurons in the first layer after learning each task, color coded with the task id. First task important neurons are in blue, second task important neurons in orange and 3rd task important neurons in green. Figures 11, 12 & 13 show the important neurons ordered with respect to their importance estimated at the first task. It can be seen from these figures how neurons are re-used (overlapped) and others are newly activated for each new task. 
If the reviewer has other suggestions, we will be eager to add them.\n\n- On figure 2: It looks weird to me because last task has the lowest accuracy even for ReLU (sequential learning w/o regularization); tuning for task 5 will lead catastrophic forgetting for previous tasks, meaning acc for task 1 be the lowest?\n\n All the baselines and compared methods in Figures 2 and 3 use the importance weight regularizer (MAS), hence forgetting is minimized in all compared methods: accuracy in task 1 is preserved while sacrificing accuracy in task 5. ReLU (sequential learning w/o regularization): we mean no additional sparsity regularizer. We thank the reviewer for pointing this out, we have clarified it in the revised version. Note that our regularizer improves 4-8% over No-Reg, which already uses MAS as the LLL method.\nWe will be happy to clarify any other points.\n", "We thank AnonReviewer1 for their suggestions and comments.\nNote that we revised the paper, and renamed our full model to SLNID. Below are our comments to the main points:\n\n1) Task boundaries are still used, which limits applicability; in many scenarios which do have a continual learning problem there are no clear task boundaries, such as data distribution drift in both supervised and reinforcement learning.\n\n We agree with the reviewer on the importance of the mentioned setting where there are no clear task boundaries and distribution gradually drifts. Although this is orthogonal to the contribution of this work, we tested a setup where the data distribution drifts between tasks. When evaluating in this setting, we find, interestingly, that our proposed SLNID again works well in this setting compared to the LLL approach MAS (Aljundi et al., 2017), which benefits from hard task boundaries. Details can be found in the revised paper in Section 4.2 and Table 3. We believe this is an interesting setting to study further in future work.\n\n2) Since models used in the work are very different from SOTA models on those particular tasks, it is hard to determine from the paper how the proposed method influences these models. In particular, it is not clear whether these changes to the loss would still allow top performance on regular classification tasks, e.g. CIFAR-10 or MNIST even without sequential learning, or in multitask learning settings. \n Improving the state of the art results on non-sequential scenarios is not the aim of this proposed regularizer. Further, the studied setup of LLL, where data from previous or future tasks is not available during the training of a given task, is much harder and more challenging than joint training or multi task training where all data is available at training time. In Sec. 4.2, Table 2 we compare against and outperform state of the art LLL methods under the same setting and models used in those methods. \n", "We thank AnonReviewer2 for their constructive comments, below is our reply to the main points. Note that we revised the paper, and renamed our full model to SLNID.\n\n1) Reasoning about why representation based regularization is more effective for the life-long learning setting. \n\nPlease check our updated version.\n\n2) Importance of neurons in equation (6)\n\nThe importance of the neurons for a previous task is computed based on that previous task data right after training that task. This is in line with estimating the importance weight in LLL methods. 
While on permuted mnist, the neuron importance doesn't seem crucial to achieve the best performance, it improves the performance on Cifar, Tiny Imagenet, and the 8 Object recognition sequence. In fact, permuted mnist is a simple scenario we use to compare all the studied methods in a setting where the differences between tasks are easily identified. The full permutation requires the network to instantiate a new representation for the new task that associates new collections of pixels to digit patterns. In such a simple case, the importance of the neuron doesn't seem to be a crucial factor, while in more complicated sequences where tasks overlap and relatedness is much higher, the neuron importance term is a key component. In Sec. 2.4, Table 3 we again compare our regularizer with and without neuron importance, \nboth when evaluating the average performance using each task model and when using the last trained model. While both SLNI with and without neuron importance improve the individual models' accuracy (73.03 and 72.14 respectively), the performance at the end of the sequence (using the last trained model) significantly drops for SLNI without neuron importance (72.14 to 63.54) compared to SLNI with the neuron importance (73.03 to 70.75). This is a clear indication of the role of neuron importance in the sequential learning scenario in excluding previously important neurons from the penalty and hence avoiding interference between tasks.\n\n 3) It would be great if authors can show some actual overlaps of activations across tasks (not just simple histogram as in Figure 5).\n\nFigure 4, bottom, shows the histogram of the mean activation on the first task achieved by each method. Figures 8, 9 & 10 in the Appendix show the neuron importance after each task. It can be seen how new activations are initiated while reusing previous neurons. Also Figures 11, 12 & 13, newly added, show the importance of the neurons sorted by the importance computed at the first task. It can be clearly seen which neurons are re-used and which are getting activated for the new task.\n\n4) And isn't g_i(x_m) a scalar? Explain why we need the norm when you get alpha.\n\nIn the case of neurons in fully connected layers, g_i(x_m) is indeed a scalar. In the convolutional layers, the importance of a neuron is the norm of the gradient vector. While we only consider fully connected layers in this work, this was for the sake of generality. \nFurther, while estimating the importance, gradients are accumulated from different samples. We want to estimate how much of a change in the previous task could happen when changing this neuron's output. We are not interested in the sign of the change itself, hence we accumulate the absolute value of the gradients from different samples. \nFor the sake of clarity, we replaced the norm with the absolute value sign in the new version.\n\n5) It would be nice to clarify what the task sequence looks like in Figure 2. It is hard to understand that task 5, which is the most recent learning task, has the lowest performance in all tasks.\n\nIn Figure 2, the sequence is first task mnist and remaining tasks permuted mnist with different permutations. Training individual models results in similar accuracy in each task. In the sequential setting, the last task is the most recent task. The model has to learn this task without forgetting all the previous tasks. As such, little capacity is left for the very last task. This is a known phenomenon in Lifelong learning, explained in Section 2, second paragraph. 
For this reason our regularizer always achieves the best performance on the last task in the sequence as it aims at leaving capacity for later tasks. Also, as mentioned in Section 4.1, Experimental Setup, we have used a high value of (\\lambda_omega) that ensures the least forgetting, which allows us to test the effect on the later tasks' performance. For example, in the experiments in Section 4.6, the accuracy on the last task for Finetuning is 90.0% (as it forgets completely the previous tasks and only cares about the last task) while for MAS it is 68.2%. Our regularizer improves the accuracy on the last task to 77%, as more capacity is left for the last task. In the paper we only report the avg. acc at the end of the sequence due to space limits.\n", "We thank AnonReviewer3 for their constructive comments, below is our reply to the main points.\nNote that we revised the paper, and renamed our full model to SLNID.\n\n1) Changing the hat notation:\n\nFollowing the suggestion, we adapted the naming as follows and used it throughout the paper. Note that SLNID now corresponds to the complete version of our regularizer:\n- Sparse coding through Neural Inhibition (SNI)\n- Sparse coding through Local Neural Inhibition (SLNI)\n- Sparse coding through Local Neural Inhibition and Discounting (SLNID)\n\n2) Results are presented and discussed in the introduction, and overall the intro is a bit long, resulting in parts later being repetitive.\n\nPlease check our updated version.\n\n3) Worth discussing sparsity vs. distributed representations in the intro, and how/where we want sparsity while still having a distributed representation.\n\nPlease check our updated version.\n\n4) Figure captions could use a lot more experimental insight and explanation - e.g. Figure 10 (in appendix B4)\nWe have updated the figure captions accordingly.\n\nFrom Figures 8, 9 & 10 we can deduce two main points:\n- The important neurons are sparse; SLNID tolerates more active neurons than SNID.\n- With each new task, new neurons are getting used and become important (Figures 9 & 10).\nFigures 11, 12 & 13, newly added, where neurons are sorted w.r.t. their importance for the first task, show how new neurons are becoming important for the new tasks.\nPrevious important neurons are also reused for the new tasks. This is not against our regularizer. Our regularizer avoids inhibiting neurons from previous tasks by excluding them, so they can be used freely (Equation 7, section 3.3). The LLL regularizer (Equation 1) ensures that their connections are not being changed drastically and hence performance is preserved on previous tasks. So both achieving sparsity to leave space for future tasks and sharing important neurons whenever possible, allowing forward transfer, are actually goals of our regularizer.\n\n5) How does multi-task joint training differ from \"normal\" classification? The accuracies especially for CIFAR seem very low.\n\nThe shown performance of joint training represents the average accuracy achieved on each task by masking out classifier scores of the other tasks when computing the arg max. However, the training was done using a shared 100-dimensional classification layer. We use a small network with only 128 or 256 neurons in the hidden layer, training it for 50 epochs with the SGD optimizer and a learning rate of 0.01. No dropout was used, no batch normalization and no data augmentation. Our aim was to set a fair comparison between different regularizers without the interference of other mechanisms. 
We did not aim for state of the art results on learning a dataset jointly. \n" ]
[ 6, -1, 6, -1, -1, 7, -1, -1, -1, -1 ]
[ 5, -1, 4, -1, -1, 4, -1, -1, -1, -1 ]
[ "iclr_2019_Bkxbrn0cYX", "SJeKGqFth7", "iclr_2019_Bkxbrn0cYX", "rJeWYmDAnm", "Hkl7uQV9C7", "iclr_2019_Bkxbrn0cYX", "SJeKGqFth7", "rJeJwe0d3m", "SJeKGqFth7", "rJeWYmDAnm" ]
iclr_2019_BkzeUiRcY7
M^3RL: Mind-aware Multi-agent Management Reinforcement Learning
Most of the prior work on multi-agent reinforcement learning (MARL) achieves optimal collaboration by directly learning a policy for each agent to maximize a common reward. In this paper, we aim to address this from a different angle. In particular, we consider scenarios where there are self-interested agents (i.e., worker agents) which have their own minds (preferences, intentions, skills, etc.) and can not be dictated to perform tasks they do not want to do. For achieving optimal coordination among these agents, we train a super agent (i.e., the manager) to manage them by first inferring their minds based on both current and past observations and then initiating contracts to assign suitable tasks to workers and promise to reward them with corresponding bonuses so that they will agree to work together. The objective of the manager is to maximize the overall productivity as well as minimize payments made to the workers for ad-hoc worker teaming. To train the manager, we propose Mind-aware Multi-agent Management Reinforcement Learning (M^3RL), which consists of agent modeling and policy learning. We have evaluated our approach in two environments, Resource Collection and Crafting, to simulate multi-agent management problems with various task settings and multiple designs for the worker agents. The experimental results have validated the effectiveness of our approach in modeling worker agents' minds online, and in achieving optimal ad-hoc teaming with good generalization and fast adaptation.
accepted-poster-papers
The paper addresses a variant of multi-agent reinforcement learning that aligns well with real-world applications - it considers the case where agents may have individual, diverging preferences. The proposed approach trains a "manager" agent which coordinates the self-interested worker agents by assigning them appropriate tasks and rewarding successful task completion (through contract generation). The approach is empirically validated on two grid-world domains: resource collection and crafting. The reviewers point out that this formulation is closely related to the principal-agent problem known in the economics literature, and see a key contribution of the paper in bringing this type of problem into the deep RL space. The reviewers noted several potential weaknesses: They asked to clarify the relation to prior work, especially on the principal-agent work done in other areas, as well as connections to real world applications. In this context, they also noted that the significance of the contribution was unclear. Several modeling choices were questioned, including the choice of using rule-based agents for the empirical results presented in the main paper, and the need for using deep learning for contract generation. They asked the authors to provide additional details regarding scalability and sample complexity of the approach. The authors carefully addressed the reviewer concerns, and the reviewers have indicated that they are satisfied with the response and updates to the paper. The consensus is to accept the paper. The AC is particularly pleased to see that the authors plan to open source their code so that experiments can be replicated, and encourages them to do so in a timely manner. The AC also notes that the figures in the paper are very small, and often not readable in print - please increase figure and font sizes in the camera ready version to ensure the paper is legible when printed.
test
[ "Hye-Lo29hm", "SyggSiwZp7", "HylgEnCY0X", "B1xIv7jKCQ", "ryl9TkfK0m", "rJlQL4SGC7", "S1lrcUBMA7", "H1lU9Hrf0X", "HkgbGMxEnm" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "\nThis paper studies the problem of generating contracts by a principal to incentive agents to optimally accomplish multiagent tasks. The setup of the environment is that the agents have certain skills and preferences for activities, which the principal must learn to act optimally. The paper takes a combined approach of agent modeling to infer agent skills and preferences, and a deep reinforcement learning approach to generate contracts. The evaluation of the approach is fairly thorough.\n\nThe main novel contribution of the paper is to introduce the principal-agent problem to the deep multiagent reinforcement learning literature.\n\nMy concerns are:\n- The paper should perform a literature search on related work from operations research, including especially principal-agent problems, which are not currently surveyed, and perhaps also optimal scheduling problems.\n- How do the problems introduced either map onto real applications or map onto environments studied in existing literature (such as in operations research)?\n- More details should be given on the mind tracker module.\n- Is it necessary to use deep reinforcement learning for contract generation? If the agent modeling is good, the optimal contracts look like they are probably simple to compute directly in the environments studied.\n\nOverall, the paper is somewhat interesting and relatively technically sound, but the contribution seems marginal. The problems studied seem pulled out a hat, when they could be situated in specific existing literature.\n", "Summary:\n\nThis paper proposes a way to train a manager agent which would manage a bunch of worker agents to achieve a high-level goal. Each worker has its own set of skills and preferences and the manager tries to assign sub-tasks to these agents along with bonuses such that the agents can even perform tasks that are not preferred by them. Authors achieve this by training a manager which tracks the skills and preferences of the agents on the fly. Authors have done an extensive analysis of the proposed approach in two simple domains: resource collection and crafting.\n\nMajor comments:\n\nThis paper focuses on multi-agent settings with self-interested agents. The problem formulation and the solution are novel enough. Experiments are on toy domains with very few goals and sub-task dependencies. However, authors have done a good job in doing an extensive analysis of the proposed approach.\n\n1.\tCan you comment about the scalability of the proposed solution when the number of possible subtasks increases? When the sub-task dependency graph size increases?\n\n2.\tWhat is the reason for using rule-based agents in all the experiments? It would have been more useful if all the analysis are done with RL agents rather than rule-based agents. It would also make the paper stronger.\n\n3.\tAre the authors willing to release the code? Overall the model looks complicated and the appendix is not sufficient to reproduce the results in the paper. I would increase my rating if the authors are willing to release the code to reproduce all the results reported in the paper.\n\n\nMinor comments:\n\n1.\tPage 3, line 9: “typical” -> “typically”\n2.\tPage 3, “intention” section: “Based on the its reward ..” Check grammar.\n3.\tPage 5, last line: “the total quantitative is 10” check grammar.\n4.\tPage 8, conclusions, second line: “nad” -> “and”\n5.\tPage 8, conclusions, 4th line: “combing” -> “combine”\n", "I am ok with the rebuttal. 
Even though it would have been interesting to have neural net based agents, I understand authors' computational constraints.\n\nI am increasing my rating from 6 to 7.", "Thanks a lot for your rebuttal. We have gone over your rebuttal/revised submission.", "We thank all reviewers for their comments and suggestions. We have revised our submission and also plan to release our code. In the revision, we i) discussed the connection between our work and principal-agent problems including how our work differs from the traditional approaches in classical principal-agent problems, ii) added implementation details about the mind tracker module, and iii) fixed the typos.", "Thank you for your reviews. Here are our responses to your questions:\n\n1. Clarify how the skills of agents play a role in the problem setup\nWe clarify the definition of skills and how it influences the manager’s decision as follows. \n\ni) As defined in Section 3, an agent’s skill depends on its state transition probabilities and its policy. The state transition probabilities define if a resource can be collected by an agent (i.e., whether the “collect” action executed by this agent will have real effect), and it is equivalent to a binary value for each resource in Resource Collection. The agent’s skill also depends on its policy because it affects how fast an agent can achieve a goal. E.g., when the agent has a suboptimal policy, it may not be able to reach a goal within the time limit even though it actually can collect the resource if given more time.\n\nii) The skills are completely hidden from the manager. It can be inferred by the manager based on the performance history, and also on the estimated worker policies by IL. However, only checking whether a goal is reached is not sufficient to determine skills. Failing to reach a goal may be a result of several reasons -- it may be because i) the bonus in the contract is too low, ii) the contract terminates prematurely before the agent can reach the goal, or iii) the assigned task depends on another task which has not been finished yet. So the manager needs to infer agents’ skills, preferences, and the task dependency jointly through multiple trials.\n\n2. Is maximizing utility justified?\nMaximizing utility is actually the setup in similar problems in economics. Just like those problems (e.g., mechanism design), this paper focuses on scenarios where agents won’t truthfully or clearly reveal its skills and preferences to the manager, and do not always behave optimally. As we stated in the paper, maximizing utility is more realistic, and typically the span of the decision making process of the manager is much shorter than the time needed for improving worker agents. Let’s consider a simple scenario. An agent is unable to collect a certain kind of resource. By maximizing its utility, it may still accept the contract and go to that resource. Once a resource is occupied by this agent, other agents can no longer collect it according to our setting. This means that the resource will never be really collected.\n\nAs an empirical evidence, you may compare the S2 and S3 settings with S1 in Resource Collection. In S2 and S3, workers may prefer a task that it can not perform, which should never happen in the case of maximizing return. As a result (shown in Figure 4b and Figure 4c), the training difficult significantly increases.\n\n3. Are there alternate ways to overcome maintaining the UCB explicitly, especially for the number of time-steps? \nYes, there are ways to overcome this. 
First, we can define small time intervals instead of maintaining statistics for each step (i.e., combining statistics in every dT consecutive steps will reduce the complexity to 1 / dT of the original size). Note that this has been done in results shown in Appendix C.1, where dT also means that for every dT steps, the manager can only change the contracts once. Second, we may define a maximum number of steps to be considered in the performance history, which can be determined by the upper bound of the execution time for a subtask, and can be smaller than the step limit of the whole episode.\n\n4. What are the units for rewards in the plots?\nIt is the average per episode. The reward is defined as in Section 5.1.1 and Section 5.1.2 without any rescaling. We have added this in the caption.\n\n5. Typos\nThank you for pointing out these typos. We will fix them in the next revision.\n", "Thank you for your reviews and comments. We address your questions as follows.\n\n1. Scalability of the proposed solution\nFrom our current results, you may see that our approach has a decent scalability -- even though we doubled the subtasks and also introduced additional dependency in Crafting compared to Resource Collection, it does not need much more episodes for converging to optimal policies, where our agent-wise exploration plays an important role. Generally speaking, deploying more present workers coupled with our agent-wise exploration should significantly improve the learning efficiency and overcome the challenges introduced from more substasks or a larger dependency graph. In addition, the computational complexity is linear in terms of the number of agents, so our approach is also scalable when there are more agents.\n\n2. What is the reason for using rule-based agents in all the experiments?\nWe have actually used RL agents as well (Appendix C.3), and it showed that our approach also works when workers are RL agents. In the main results, we focus on rule-based agents because it is computationally demanding to train a large population of RL agents, and our focus was not about the worker policies but rather how the manager assesses the workers’ mental states and encourages an optimal collaboration accordingly. In this paper, using a cheap rule-based implementation with randomness has demonstrated the effect of different components of our approach. \n\n3. Are the authors willing to release the code?\nYes, we do plan to open source our implementation. Specifically, the game environment and the worker agents were implemented in Python and it runs at a speed of more than 300 steps per second. We used PyTorch as the framework for implementing all the network modules. Typically it took < 10 hours to get a converged result by our approach on a single Nvidia Tesla V100 GPU.\n\n4. Typos\nThanks for pointing out these typos. We will fix them in the next revision.", "Thank you for your comments and suggestions. We respond to your questions and concerns as follows.\n\n1. Connection with principal-agent problems.\nThank you for pointing this out. We really appreciate it. The problem we address is indeed closely connected to principal-agent problems, or moral hazard problems in economics, which considers whether the agent makes the best choice for what the principal delegates (e.g., a plumber might make more money by suggesting an overhaul rather than a short-term fix). 
In this setting, there are a lot of issues to be modeled, e.g., information asymmetry between principals and agents, how to setup incentive cost, how to infer agents’ types and how to monitor their behaviors, etc. Traditional approaches [1] in economics build mathematical models to address these issues separately, leading to complicated models with many tunable parameters. In comparison, our paper provides a practical end-to-end computational framework to address this problem in a data-driven way, once the agents’ utility function is written down as a combination of principal’s request and its own preference (Eqn. 1). Moreover, this framework is adaptive to changes of agents’ preferences and capabilities, which very few papers in economics have addressed. \n\nBecause of the connection to principal-agent problems and the data-driven nature of the proposed method, there could be a broad number of practical applications.\n\nWe will incorporate a more thorough literature reviews in the next revision. \n\n[1] The theory of incentives: the principal-agent model, Jean-Jacques Laffont, 2001\n\n2. More details should be given on the mind tracker module.\nWe will explain more implementation details in the appendix in the next revision. We will also release the code.\n\n3. Is it necessary to use deep reinforcement learning for contract generation?\nAs stated in the introduction, one of the main points of this work is about incomplete information. I.e., we do not know the true agent models and their mental states, and also do not assume that the task dependency is known. In real world problems, we indeed can not assume that a manager knows the exact nature of other agents. So we want to train a manager that can quickly model worker agents through observations and simultaneously generate optimal contracts. In contrast, traditional methods do not consider task dependency, and usually assume agent types are either known or follow a given distribution. Also, deep models are flexible enough to handle complicated interactions between agents and changes of settings. Thus, deep RL is a more suitable approach than traditional methods under the incomplete information setting. \n", "This paper studies the problem of coordinating many strategic agents with private valuation to perform a series of common goals. The algorithm designer is a manager who can assign goals to various agents but cannot see their valuation or control them explicitly. The manager has a utility function for various goals and wants to maximize the total revenue. The abstract problem is well-motivated and significant and is an entire branch of study called algorithmic mechanism design. However often many assumptions have to be made to make the problem mathematically tractable. In this paper, the authors take an empirical approach by designing an RL framework that efficiently maximizes rewards across many episodes. Overall I find the problem interesting, well-motivated. The paper is well-written and contains significant experiments to support its point. However, I do not have the necessary background in the related literature to assess the significance of the methods proposed compared to prior work and thus would refrain from making a judgment on the novelty of this paper in terms of methodology. Here are some of my comments/questions to the author on this paper.\n\n\n(1) I want to clarify how the skills of the agents play a role in the problem setup. Does it show up in the expression for the manager's reward? 
In particular, does it affect the Indicator for whether a goal is completed Eq. (2) via a process that need not be explicitly modeled but can be observed via a feedback of whether or not the goal is completed? So in the case of resource collection example, the skill set is a binary value for each resource, whether it can be collected or not? \n\n(2) Related to the first point, the motivation for modeling the agents as maximizing their utility is the assumption that agents do not know their skills. I am wondering, is this really justified? Over the course of episodes, can the agents learn their skills based on the relationship between their intention and the goals they achieve? In the resource collection example, when they reach a resource and are not able to collect it, they understand that they do not have the corresponding skill. Is there a way to extrapolate the results from this paper to such a setting? \n\n(3) I am slightly concerned about the sample complexity of keeping track of the probability of worker i finishing goal g within t steps with a bonus b. This scales linearly in parameters which usually would be large (such as the number of time-steps). Are there alternate ways to overcome maintaining the UCB explicitly, especially for the number of time-steps? \n\nSome minor comments on the presentation.\n\n(1) What are the units for rewards in the plots? Is it the average per episode reward? It would be good to mention this in the caption.\n\n(2) There are a few typos in the paper. Some I could catch was,\n\n- Last line in Page 5: \"quantitative\" -> \"quantity\"\n- Page 8: skills nad preferences -> skills and preferences\n- Page 8: For which we combining -> for which we combine" ]
[ 6, 7, -1, -1, -1, -1, -1, -1, 6 ]
[ 3, 4, -1, -1, -1, -1, -1, -1, 1 ]
[ "iclr_2019_BkzeUiRcY7", "iclr_2019_BkzeUiRcY7", "S1lrcUBMA7", "ryl9TkfK0m", "iclr_2019_BkzeUiRcY7", "HkgbGMxEnm", "SyggSiwZp7", "Hye-Lo29hm", "iclr_2019_BkzeUiRcY7" ]
iclr_2019_ByGuynAct7
The Deep Weight Prior
Bayesian inference is known to provide a general framework for incorporating prior knowledge or specific properties into machine learning models via carefully choosing a prior distribution. In this work, we propose a new type of prior distributions for convolutional neural networks, deep weight prior (DWP), that exploit generative models to encourage a specific structure of trained convolutional filters e.g., spatial correlations of weights. We define DWP in the form of an implicit distribution and propose a method for variational inference with such type of implicit priors. In experiments, we show that DWP improves the performance of Bayesian neural networks when training data are limited, and initialization of weights with samples from DWP accelerates training of conventional convolutional neural networks.
accepted-poster-papers
This paper proposes factorized prior distributions for CNN weights by using explicit and implicit parameterization for the prior. The paper suggests a few tractable methods to learn the prior and the model jointly. The paper, overall, is interesting. The reviewers have had some disagreement regarding the effectiveness of the method. The factorized prior may not be the most informative prior, and using extra machinery to estimate it might deteriorate the performance. On the other hand, estimating a more informative prior might be difficult. It is extremely important to discuss this trade-off in the paper. I strongly recommend that the authors discuss the pros and cons of using priors that are weakly informative vs strongly informative. The idea of using a hierarchical model has been around, e.g., see the paper on "Hierarchical variational models" and more recently "Semi-Implicit Variational Inference". Please include a related-work discussion of such existing work. Please discuss why your proposed method is better than these existing methods. Conditioned on the two discussions added to the paper, we can accept it.
train
[ "S1l_qeRt3X", "HkxI42NoCX", "HJxe-nQiR7", "HklzhvZs07", "SketciAt0X", "r1gW7uAYRQ", "H1e5OxjKpQ", "SkxoNliK6m", "rkl9WeiYa7", "HkebAXKun7", "rylsLCoBnX" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper considers modeling convolutional neural network by a Bayes method. The prior for the weights is considered in which the weights from various layers, input and output channels are assumed to be independent. A varational method is considered to approximate the posterior distribution of the weights of CNN. It looks to me that the prior distribution is a fairly standard product which may not perfectly suitable for CNN. Also the validity of the proposed variational method needs further evaluation. Below I summarize my concerns more technically.\n\n1. CNN essentially has a tree structure, i.e., each layer can be viewed as the parent of the next layer. So the consecutive layers should have a sort of dependence. Also, the weights based on the same input channel should all inherit features of that channel. Based on these considerations, is it really reasonable to assume that the random weights are independent?\nI agree that independence assumption makes the model and computation easier, but the prior itself should reflect the possible dependence structure of the channels.\n\n2. The KL divergence might not be tractable and so the proposed variational method replaces it with an upper bound. This method highly depends on the assumption that the upper bound of the KL divergence is accurate. Otherwise it is hard to tell that the method really approximates the authentic variational method very well. It would be great if the accuracy of the upper bound can be further evaluated (theoretically and numerically).\n\n\n\n\n\n", "Sure, intra-layer dependencies can be done---as the deep weight prior does (well, intra-kernel dependencies in this case). Extending these priors across layers is the hard part, which this reference still doesn't do.", "The following paper introduces another prior, which allows dependence across the neurons. \n\n[1] Shengyang Sun, Changyou Chen, Lawrence Carin. \"Learning Structured Weight Uncertainty in Bayesian Neural Networks.\" Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, PMLR 54:1283-1292, 2017. \n\n\n\n\n", "I think that Reviewer #3 is being too critical of the prior's lack of structure, and I don't agree with the point that \"this is the nature of Bayesian deep network[s].\" The automatic relevance determination (ARD) prior [1] is the only structured one that I know of for Bayesian NNs, and its use is somewhat rare since performing inference for the scale hyperprior is challenging. Just about all papers on Bayesian NNs use fully factorized priors, and specifying inter-layer priors is a completely open problem, as far as I'm aware.\n\n[1] MacKay, David JC. \"Bayesian non-linear modeling for the prediction competition.\" Maximum Entropy and Bayesian Methods. 1996. 221-234.", "Thanks for your response. I still believe that a prior demonstrating dependence across layers and nodes is very important as this is the nature of Bayesian deep network. Also it is important to comment on some relating results on the upper bound in your approximation.", "Thanks for addressing my comments! My score of 8 remains unchanged. Good luck with the submission.", "We would like to thank you for the thoughtful review and questions about prior factorization and tightness of L^{aux}, we will address raised concerns below:\n\n1) Is it really reasonable to assume that the random weights are independent? 
\n\nBefore our work, priors for convolutional layers in Bayesian Deep Learning had a fully-factorized structure, namely, all weights were treated independently. We use more general factorization, now at least there are spatial correlations between the weights in a kernel, but not yet between kernels and between layers. The method allows us to use more flexible priors and can be potentially generalized to more complex dependencies. We discuss the factorization in (Section 6) and consider this direction for the future work. \n\n2) This method highly depends on the assumption that the upper bound of the KL divergence is accurate. \n\nIndeed, our method depends on the accuracy of the upper bound on KL-divergence or equivalently a gap between an intractable variational lower bound L and a proposed variational lower bound L^{aux}. As we show in (Appendix A, eq. 13), maximization of the L^{aux} minimizes the gap L - L^{aux} = E_q(w) [KL(r(z | w) || p(z | w))] by adjusting parameters of reversed model r(z | w), in other words, the gap reduces as reversed model r(z | w) gets more accurate. The bound is tight if and only if the auxiliary distribution coincides with the exact posterior distribution of the implicit weight model, i.e., r(z | w) = p(z | w). A huge piece of literature on ELBOs relies on this type of argument. \n\nAt the same time, there already exists a number of works on applications and analysis of tightness of variational bounds (for instance [1, 2]). Therefore, we did not address the underlined issue directly. One of the most straightforward approaches to providing more accurate variational bounds is to employ importance weighted bounds introduced in [3], which allow to trade off sample complexity and tightness of variational bounds. We have added the estimates of approximation gap using importance weighted bound to Appendix F, however, these figures should be treated with caution, as IWAE still gives us a lower bound on L.\n\n[1] On the Quantitative Analysis of Decoder-Based Generative Models, https://arxiv.org/abs/1611.04273\n[2] Tighter Variational Bounds are Not Necessarily Better, https://arxiv.org/abs/1802.04537\n[3] Importance Weighted Autoencoders, https://arxiv.org/abs/1509.00519", "We would like to thank you for the thoughtful review and useful points related to parametric priors and literature on implicit priors, we will address your questions below:\n\n1) I would have liked to have seen an experiment using a parametric prior (eg Gaussian) that shows what gains the implicit prior provides. Or is it simply a matter of memory efficiency? \n\nWe had also evaluated a multivariate Gaussian prior, that shares parameters across all kernels within the specific layer (Gaussian prior over kernels). This prior, however, leads to marginally worse performance in both the likelihood of the generative model and the final accuracy comparing to the VAE-based approximation. We will append these results in the Appendix.\n\n2) While it doesn’t say so explicitly, the paper seems to imply it is the first to use implicit priors. Some previous work that uses some form of implicit prior includes\n\nYou are right, a number of works have already used implicit priors. We clarified explicitly that the paper is not proposing to use implicit priors but propose a new inference technique that is compatible with Stochastic Gradient Variational Lower Bound. 
These papers are indeed relevant and we will definitely cite them.", "We would like to thank you for the thoughtful review with interesting suggestions, we will address your questions and concerns below:\n\n1) A lack of detail in Section 4\n\nWe had significantly improved Section 4. Subsection 4.1 has been rewritten and moved to Section 3, as it has more to do with the method rather than experiments. The description of experiments also has been improved. \n\n2) I did wonder whether there was any link between the suggested priors and the idea of modelling the current and related data sets used in constructing the prior jointly, with data set specific parameters given an exchangeable prior? … I just wondered if there is some conceptual link with the current method being an approximation of that approach in some sense. \n\nThe suggestion indeed seems to be a generalization of the current approach, that however may be closer to empirical Bayes, since the prior distribution p(W) can be adopted using the current dataset. In the paper, we do not consider any explicit model of a dataset e.g., dataset-embeddings, but it would be an interesting direction for research.\n\n3) In Section 4.1, it seems that for the trained networks on the source datasets, point estimates of the filter weights are treated as data for learning the variational autoencoder - is that correct? \n\nYes, this is correct.\n\n4) Could you model dataset heterogeneity here as well? \n\nIf we understand the question correctly, given the current model, the answer is no. Currently, we do not explicitly use kernels heterogeneity e.g., (a) the estimated variance of each training kernel, or (b) the problem from which each training kernel comes from, but using these additional factors could be an interesting topic of investigation.\n\n3) Presumably the p_l(z) density is N(0,I)?\n\nYes, this is correct, we described the form of the prior p(z) in VAE section, but it is worth to add it explicitly at the description of our model. \n\n3) In Section 4.2, you say that the number of filters is proportional to the scale parameter k and that you vary k. What scale parameter do you mean? \n\nWe scale a number of filters on every convolutional layer by k (in our experiments k can be equal to 1/16, 1/8, 1/4, 1/2 or 1). This allows us to track how the size of the model influence our method.", "Summary:\n\nThis paper proposes the ‘deep weight prior’: the idea is to elicit a prior on an auxiliary dataset and then use that prior over the CNN filters to jump start inference for a data set of interest. Both explicit and implicit priors are considered, with the latter having the benefit of increased flexibility but having the drawback of a lack of a parametric form to plug in to the ELBO. The authors address this last point by extending the ELBO appropriately. Experiments are performed testing the prior’s ability to capture trained filters (Figure 1), provide a good initialization (Figure 2), improve sample efficiency (Figure 3), improve training speed (Figure 4). \n\nPros:\n\nI like this paper: it is a intuitive idea, and the experiments explore exactly what one would hope to gain from the prior (i.e. better initialization, improved sample efficiency). I find the paper clearly written and to have a logical flow. 
Furthermore, I think eliciting priors---while so crucial in more traditional Bayesian modeling---has been mostly overlooked by the Bayesian ML community, and this paper clearly shows that there are gains to be had from a fairly straightforward procedure. \n\n\nCons:\n\nThe only potential issue with the paper is the use of the implicit prior, as it complicates variational inference, requiring the extension to the ELBO described in Section 3.2. As far as I can tell, all experiments use the implicit priors. I would have liked to have seen an experiment using a parametric prior (eg Gaussian) that shows what gains the implicit prior provides. Or is it simply a matter of memory efficiency? \n\n\nOther comments:\n\n-- Nice first sentence in the introduction! I like how it’s a general statement but immediately focuses the reader’s attention to the paper’s topic.\n\n-- While it doesn’t say so explicitly, the paper seems to imply it is the first to use implicit priors. Some previous work that uses some form of implicit prior includes:\n\nRuns a chain to refine the prior: Alex Lamb et al. \"GibbsNet: Iterative Adversarial Inference for Deep Graphical Models.\" Advances in Neural Information Processing Systems. 2017.\n\nOptimizes a NN implicit prior based on an invariance objective: Eric Nalisnick and Padhraic Smyth. \"Learning priors for invariance.\" International Conference on Artificial Intelligence and Statistics. 2018.\n\nDefines implicit priors over functions through samplers: Chao Ma, Yingzhen Li, and José Miguel Hernández-Lobato. \"Variational Implicit Processes.\" arXiv preprint arXiv:1806.02390 (2018).\n\n\nEvaluation: I recommend this paper for acceptance. It is a sensible idea with pointed experimental validation.\n", "This paper considers learning informative priors for convolutional neural network models based on fits to data sets from similar problem domains. For trained networks on related datasets the authors use autoencoders to obtain an expressive prior on the filter weights, with independence assumed between different layers. The resulting prior is generative and its density has no closed form expression, and a novel variational method for dealing with this is described. Some empirical comparisons of the deep weight prior with alternative priors is considered, as well as a comparison of deep weight samples for initialization with alternative initialization schemes. \n\nThis is an interesting paper. It is mostly clearly written, but there is a lack of detail in Section 4 that makes it hard for me, at least, to understand exactly what was done there. I think the originality level of the paper is high. The issue of informative priors in these complex models seems wide open and the authors provide an interesting approach both conceptually and computationally. I did wonder whether there was any link between the suggested priors and the idea of modelling the current and related data sets used in constructing the prior jointly, with data set specific parameters given an exchangeable prior? This would be a standard hierarchical modelling approach. Such an approach would not be computationally attractive, I just wondered if there is some conceptual link with the current method being an approximation of that approach in some sense. In Section 4.1, it seems that for the trained networks on the source datasets, point estimates of the filter weights are treated as data for learning the variational autoencoder - is that correct? Could you model dataset heterogeneity here as well? 
Presumably the p_l(z) density is N(0,I)? Details of the inference and reconstruction networks are sketchy. In Section 4.2, you say that the number of filters is proportional to the scale parameter k and that you vary k. What scale parameter do you mean? \n\n\n" ]
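The author response in this record points to importance-weighted bounds (reference [3] there) as a way to estimate the gap between the intractable bound L and the proposed L^{aux}. The following toy sketch is not the paper's code; it uses an invented conjugate-Gaussian model, z ~ N(0,1) and x|z ~ N(z,1), so the true log-marginal is known in closed form, and it shows how the importance-weighted bound tightens toward that value as the number of samples K grows (K=1 recovers the plain ELBO).

```python
import numpy as np
from scipy.special import logsumexp
from scipy.stats import norm

rng = np.random.default_rng(0)

# Toy conjugate model so the true log-marginal is available in closed form:
# z ~ N(0,1), x|z ~ N(z,1)  =>  x ~ N(0,2) and p(z|x) = N(x/2, 1/2).
x = 1.5
true_log_px = norm.logpdf(x, loc=0.0, scale=np.sqrt(2.0))

# Deliberately mismatched approximate posterior q(z|x) = N(x/2, 1).
q_mu, q_sigma = x / 2.0, 1.0

def iwae_bound(K, n_batches=2000):
    vals = []
    for _ in range(n_batches):
        z = rng.normal(q_mu, q_sigma, size=K)
        log_w = (norm.logpdf(x, loc=z, scale=1.0)            # log p(x|z)
                 + norm.logpdf(z, loc=0.0, scale=1.0)        # + log p(z)
                 - norm.logpdf(z, loc=q_mu, scale=q_sigma))  # - log q(z|x)
        vals.append(logsumexp(log_w) - np.log(K))
    return np.mean(vals)

for K in (1, 5, 50):  # K=1 is the plain ELBO; larger K tightens the bound
    print(K, iwae_bound(K), "true:", true_log_px)
```

As the response cautions, such importance-weighted estimates are still lower bounds, so they bound the gap from one side only.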
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, 8, 7 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2019_ByGuynAct7", "HJxe-nQiR7", "HklzhvZs07", "SketciAt0X", "H1e5OxjKpQ", "SkxoNliK6m", "S1l_qeRt3X", "HkebAXKun7", "rylsLCoBnX", "iclr_2019_ByGuynAct7", "iclr_2019_ByGuynAct7" ]
iclr_2019_ByME42AqK7
Efficient Multi-Objective Neural Architecture Search via Lamarckian Evolution
Architecture search aims at automatically finding neural architectures that are competitive with architectures designed by human experts. While recent approaches have achieved state-of-the-art predictive performance for image recognition, they are problematic under resource constraints for two reasons: (1) the neural architectures found are solely optimized for high predictive performance, without penalizing excessive resource consumption; (2)most architecture search methods require vast computational resources. We address the first shortcoming by proposing LEMONADE, an evolutionary algorithm for multi-objective architecture search that allows approximating the Pareto-front of architectures under multiple objectives, such as predictive performance and number of parameters, in a single run of the method. We address the second shortcoming by proposing a Lamarckian inheritance mechanism for LEMONADE which generates children networks that are warmstarted with the predictive performance of their trained parents. This is accomplished by using (approximate) network morphism operators for generating children. The combination of these two contributions allows finding models that are on par or even outperform different-sized NASNets, MobileNets, MobileNets V2 and Wide Residual Networks on CIFAR-10 and ImageNet64x64 within only one week on eight GPUs, which is about 20-40x less compute power than previous architecture search methods that yield state-of-the-art performance.
accepted-poster-papers
The paper proposes an evolutionary architecture search method which uses weight inheritance through network morphisms to avoid training candidate models from scratch. The method can optimise multiple objectives (e.g. accuracy and inference time), which is relevant for practical applications, and the results are promising and competitive with the state of the art. All reviewers are generally positive about the paper. Reviewers’ feedback on improving presentation and adding experiments with a larger number of objectives has been addressed in the new revision. I strongly encourage the authors to add experiments on the full ImageNet dataset (not just 64x64) and/or language modelling -- the two benchmarks widely used in the neural architecture search field.
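The multi-objective search discussed in this record hinges on non-dominated ("Pareto") filtering when the population is updated, as the author responses below spell out. As a hedged illustration, not the paper's implementation, and with invented function and variable names, here is a minimal O(n^2) non-dominated filter over objectives that are minimized.

```python
import numpy as np

# Minimal sketch of the non-dominated ("Pareto front") filter that a
# multi-objective search of this kind relies on when updating its population.
# Objectives are to be minimized; this O(n^2) version is illustrative only.
def pareto_front(objectives):
    """objectives: (n_models, n_objectives) array; returns a boolean keep-mask."""
    F = np.asarray(objectives, dtype=float)
    n = F.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            # j dominates i: no worse in every objective, strictly better in one
            if i != j and np.all(F[j] <= F[i]) and np.any(F[j] < F[i]):
                keep[i] = False
                break
    return keep

# e.g. models scored by (validation error, log #params):
scores = np.array([[0.05, 7.1], [0.04, 7.5], [0.06, 6.2], [0.05, 7.3]])
print(pareto_front(scores))  # [ True  True  True False]
```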
train
[ "BygMkWst37", "HkgjDn7V07", "ryxN-nQVRX", "B1x65oXNRX", "BkgSVo7VRm", "SJgMKr5h3X", "rJeAPV55hQ" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "\n- Summary\nThis paper proposes a multi-objective evolutionary algorithm for the neural architecture search. Specifically, this paper employs a Lamarckian inheritance mechanism based on network morphism operations for speeding up the architecture search. The proposed method is evaluated on CIFAR-10 and ImageNet (64*64) datasets and compared with recent neural architecture search methods. In this paper, the proposed method aims at solving the multi-objective problem: validation error rate as a first objective and the number of parameters in a network as a second objective.\n\n- Pros\n - The proposed method does not require to be initialized with well-performing architectures.\n - This paper proposes the approximate network morphisms to reduce the capacity of a network (e.g., removing a layer), which is reasonable property to control the size of a network for multi-objective problems.\n\n- Cons\n - Judging from Table 1, the proposed method does not seem to provide a large contribution. For example, while the proposed method introduced the regularization about the number of parameters to the optimization, NASNet V2 and ENAS outperform the proposed method in terms of the accuracy and the number of parameters.\n - It would be better to provide the details of the procedure of the proposed method (e.g., Algorithm 1 and each processing of Algorithm 1) in the paper, not in the Appendix.\n - In the case of the search space II, how many GPU days does the proposed method require? \n - About line 10 in Algorithm 1, how does the proposed method update the population P? Please elaborate on this procedure.\n", "Dear reviewers,\n\nthanks again for your valuable feedback. We just updated our paper. We mainly made two modifications, based on your feedback:\n1) We reorganized the paper according to your suggestions; some parts of the main paper were moved to the appendix, some parts of the appendix were moved to the main paper.\n2)As you were asking whether LEMONADE is applicable to more than 2 objectives, we ran an experiment with 5 objectives, namely 1) performance on Cifar 10 (expensive objective), 2) performance on Cifar 100 (expensive) , 3) number of parameters (cheap), 4) number of multiply-add operations (cheap), 5) inference time (cheap). We refer to Appendix 3, “LEMONADE with 5 objectives”, for details and results, but in a nutshell the results are very positive and qualitatively resemble those for two objectives. While we put this experiment into the appendix for now to not change the main paper too much compared to the submitted version, if you agree we would also be very happy to include this experiment in the main paper.\n\nWe hope the updated version and our answers to your reviews have cleared out all major concerns and we kindly ask you to update your rating if we clarified your concerns.\n\n", "Dear AnonReviewer2,\nthank you for your constructive feedback. Below we address your concerns and questions.\n\n“Judging from Table 1, the proposed method does not seem to provide a large contribution. For example, while the proposed method introduced the regularization about the number of parameters to the optimization, NASNet V2 and ENAS outperform the proposed method in terms of the accuracy and the number of parameters.“\n→ The authors of NASNet only provide results for two regimes of parameters (3.3M and 27M) as they do not perform multi-objective optimization but rather just vary two parameters for building NASNet models (number of cells stacked, number of filters). 
Their method might be optimized to yield good results in these regimes and, admittedly, LEMONADE does not outperform NASNet for models with ~4M parameters. However, from Figure 3 and Table 2 one can see that only varying these two parameters for NASNet models is not necessarily sufficient to generate good models across all parameter regimes. E.g., LEMONADE clearly outperforms NASNet for very small models (50k params, 200k params - Table 2). We also refer to Appendix 3 (“LEMONADE with 5 objectives”), Figure 6, in the updated version of our paper, where one can see that while NASNet has quite strong performance in terms of error, number of parameters and number of multiply-add operations, it performs poorly in terms of inference time. Hence, there is a benefit in doing multi-objective optimization if one is actually interested in multiple objectives and diverse models rather than a single model. This is the main contribution of our paper and different from, e.g., the NASNet paper. The same likely also applies to ENAS (as they use the same search space and conduct very similar experiments). We also would like to highlight two things: 1) NASNet requires 40x the computational resources of LEMONADE, so even if NASNet performs better for ~4M parameter models, LEMONADE achieves competitive performance in significantly less time. 2) Table 1 shows results for models trained with different training pipelines and hyperparameters, and hence it is hard to say that architecture X performs better than architecture Y, since differences could simply be due to e.g. different learning rates, batch sizes, etc. In contrast, all other results in the paper (e.g., Figure 3 and Table 2) provide comparisons with exactly the same training pipeline and hyperparameters.\n\n“It would be better to provide the details of the procedure of the proposed method (e.g., Algorithm 1 and each processing of Algorithm 1) in the paper, not in the Appendix.”\n-> Thanks, we agree; we re-organized our paper accordingly.\n\n\n“- In the case of the search space II, how many GPU days does the proposed method require?”\n-> We also ran these experiments for 7*8 GPU days; however, the method converged after roughly 3*8 GPU days (meaning that there were no significant differences afterwards).\n\n“About line 10 in Algorithm 1, how does the proposed method update the population P? Please elaborate on this procedure.”\n-> The population is updated to be all non-dominated points from the current population and the generated children, i.e. the Pareto frontier based on all current models. We clarified this in Algorithm 1. Thanks for pointing us towards this.\n\n\nWe hope this clarifies your questions. Thanks again for the review!\n", "Dear AnonReviewer1,\nthank you for your positive and constructive feedback. Below we address your concerns and questions.\n\n“What value of $\\epsilon$ in Eqn (1) is used? [...] how can they guarantee the \\epsilon-ANM condition?”\n→ Indeed, one cannot guarantee the \\epsilon-ANM condition for an arbitrary epsilon. However, in our application one does not need to explicitly select $\\epsilon$ at all. We simply apply an approximate network morphism operator. Case 1, epsilon is small: the output is a network that is “smaller” than its parent and has a similar error, so the children will likely be non-dominated and it will be part of the Pareto front in the next generation. 
Case 2, epsilon is large (hence likely also the error): the children will likely be dominated by some other network and it will be discarded when the Pareto front is updated. Thus, in both cases, the specific epsilon doesn’t matter. The step of LEMONADE, where the Pareto front is updated, will automatically decide whether the morphing was successful or not based on the (non-)domination criterion. We updated (shortened) the section on approximate network morphism to not put a too strong emphasis on this. Hopefully it is now less confusing.\n\n\n“[...] the method as currently presented does not show possible generalization beyond these two objectives, which is a weakness of the paper.”\n-> We respectfully disagree. In principle, the proposed method is - as is - applicable to arbitrary objectives and arbitrary many objectives. It is neither restricted to these specific objectives nor to n=2 objectives. To demonstrate this, we carried out a new experiment with exactly the same method on 5 objectives (2 expensive ones, 3 cheap ones). We refer to the additional experiment, Appendix 3 (“LEMONADE with 5 objectives”), in the updated version of our paper.\n\n“How would LEMONADE handle situations when there are more than one $f_{cheap}$, especially when different $f_{cheap}$ may have different value ranges? Eqn (8) and Eqn (9) does not seem to handle these cases.”\n-> Both equations are not restricted to 1D inputs. (Kernel) density estimators can, in general, be applied to arbitrary dimensions and most packages allow multi-dimensional inputs by default (e.g. KDE in scipy or scikit-learn). Of course, density estimation becomes problematic with increasing number of dimensions, but we believe 4-6 objectives is a realistic dimensionality for NAS applications, and scaling to significantly more objectives will typically not be necessary. \n\nNote that the output of a KDE is always 1D, independent of the input. Also, most packages provide methods for, e.g., automatic bandwidth computation (per input dimension) to handle different value ranges. To make the input and output spaces in equations 8,9 (equations 1,2 in the updated version) clearer, we provide them in detail here:\nf_cheap: <some neural network space> → R^n, where n is the number of cheap objectives\np_kde: R^n → R\np_p: <some neural network space> → R\n\n“Same question with $f_{exp}$.”\n→ The expensive objectives are only involved in the last two steps of LEMONADE (evaluate $f_{exp}$ on the subset of children, update the Pareto frontier). These steps can be applied to more than one expensive objective. E.g. instead of training the children only on CIFAR-10, we can also train them on some other data set as well (and in our new experiment with 5 objectives we indeed also train them on CIFAR-100 as a second expensive objective). Of course, the runtime of the method will increase linearly in the number of expensive objectives. 
\n\nSo, to summarize regarding having only 2 objectives:\n1) Our method can in principle handle more than 2 objectives (both cheap and expensive); there is no general restriction to n=2 objectives.\n2) From an implementation point of view, common packages for computing density estimators automatically deal with multi-dimensional inputs and different ranges, hence LEMONADE can be run - as is - with multi-dimensional objectives without any further user interaction or modifications.\n3) To confirm these statements, we ran an additional experiment with 5 objectives - 2 expensive ones (performance on Cifar-10, performance on Cifar-100) and 3 cheap ones (number of parameters, number of multiply-add operations, inference time). We refer to Appendix 3, “LEMONADE with 5 objectives”, in the updated version of our paper for details and results. \n\n\nWe hope this clarifies your questions. Thanks again for the review!\n\n", "Dear AnonReviewer3,\nthank you for your positive review and constructive feedback!\n\nWe agree that the structure of the paper was not optimal and reorganized it along the lines you suggested (thanks for the suggestion!). Below we address specific questions.\n\n“I am a bit unclear about how comparisons are made to other methods that do not optimize for small numbers of parameters? Do you compare against the lowest error network found by LEMONADE? The closest match in # of parameters?”\n-> The latter: we compared with the models with the closest match in # of parameters.\n\n“Why is the second objective log(#params) instead of just #params when the introduction mentions explicitly that tuning the scales between different objectives is not needed in LEMONADE?”\n-> We stated that defining a trade-off between objectives is not necessary (in case you are referring to this statement), which would, e.g., be necessary when one would scalarize objectives by using a weighted sum. Rescaling an objective, however, is different, as it is independent of other objectives: it only depends on that specific objective and which scale is important to the user and the application. For the number of parameters, the log scale is natural to cover a large range of sizes: think of a plot of size vs. performance; in order to see anything for small sizes one would typically put the size on a log scale (and we indeed did, see, e.g., Figures 3 and 4). Therefore, it is most natural to also put the number of parameters on a log scale for LEMONADE.\n\n“It seems like LEMONADE would scale poorly to more than 2 objectives, since it effectively requires approximating an #objectives-1 dimensional surface with the population of parents. How could scaling be handled?”\n-> We think having 4-6 objectives is a realistic dimensionality for NAS applications, and scaling to significantly more objectives (which would indeed be problematic for our method, but also for multi-objective optimization in general) is typically not necessary. To demonstrate this, we conducted a new experiment with 5 objectives (performance on Cifar-10, performance on Cifar-100, number of parameters, number of multiply-add operations, inference time) to show that LEMONADE can handle these realistic scenarios natively. 
We refer to the updated version of our paper for the results (Appendix 3, “LEMONADE with 5 objectives”), but in a nutshell the results are very positive and qualitatively resemble those for two objectives.\nWhile we put this experiment into the appendix for now, so as not to change the main paper too much compared to the submitted version, if the reviewers agree we would also be very happy to include this experiment in the main paper.\n\nWe hope this clarifies your questions. Thanks again for the review!\n", "This paper proposes LEMONADE, a random search procedure for neural network architectures (specifically neural networks, not general hyperparameter optimization) that handles multiple objectives. Notably, this method is significantly more efficient than previous works on neural architecture search.\n\nThe emphasis in this paper is very strange. It devotes a lot of space to things that are not important, while glossing over the details of its own core contribution. For example, Section 3 spends nearly a full page building up to a definition of an epsilon-approximate network morphism, but this definition is never used. I don't feel like my understanding of the paper would have suffered if all of Section 3 had been replaced by its final paragraph. Meanwhile the actual method used in the paper is hidden in Appendices A.1.1-A.2. Some of the experiments (eg. comparisons involving ShakeShake and ScheduledDropPath, Section 5.2) could also be moved to the appendix in order to make room for a description of LEMONADE in the main paper.\n\nThat said, those complaints are just about presentation and not about the method, which seems quite good once you take the time to dig it out of the appendix.\n\nI am a bit unclear about how comparisons are made to other methods that do not optimize for small numbers of parameters? Do you compare against the lowest error network found by LEMONADE? The closest match in # of parameters?\n\nWhy is the second objective log(#params) instead of just #params when the introduction mentions explicitly that tuning the scales between different objectives is not needed in LEMONADE?\n\nIt seems like LEMONADE would scale poorly to more than 2 objectives, since it effectively requires approximating an #objectives-1 dimensional surface with the population of parents. How could scaling be handled?\n", "Summary:\nThe paper proposes LEMONADE, an evolutionary-based algorithm that searches for neural network architectures under multiple constraints. I will say it first that experiments in the paper only actually address two constraints, namely: log(#params) and (accuracy on CIFAR-10), and the method as currently presented does not show possible generalization beyond these two objectives, which is a weakness of the paper.\n\nAnyhow, for the sake of summary, let's say the method can actually address multiple, i.e. more than 2, objectives. The method works as follows.\n\n1. Start with an architecture.\n\n2. Apply network morphisms, i.e. operators that change a network's architecture but also select some weights that do not strongly alter the function that the network represents. Which operations to apply are sampled according to log(#params). Details are in the paper.\n\n3. 
From those sampled networks, the good ones are kept, and the evolutionary process is repeated.\n\nThe authors propose to use operations such as “Net2WiderNet” and “Net2DeeperNet” from Chen et al (2015), which enlarge the network but also choose a set of appropriate weights that do not alter the function represented by the network. The authors also propose operations that reduce the network’s size, whilst only slightly change the function that the network represented.\n\nExperiments in the paper show that LEMONADE finds architecture that are Pareto-optimal compared to existing model. While this seems like a big claim, in the context of this paper, this claim means that the networks found by LEMONADE are not both slower and more wrong than existing networks, hand-crafted or automatically designed.\n\nStrengths:\n1. The method solves a real and important problem: efficiently search for neural networks that satisfy multiple properties.\n\n2. Pareto optimality is a good indicator of whether a proposed algorithm works on this domain, and the experiments in the paper demonstrate that this is the case.\n\nWeaknesses:\n1. How would LEMONADE handle situations when there are more than one $f_{cheap}$, especially when different $f_{cheap}$ may have different value ranges? Eqn (8) and Eqn (9) does not seem to handle these cases.\n\n2. Same question with $f_{exp}$. In the paper the only $f_{exp}$ refers to the networks’ accuracy on CIFAR-10. What happens if there are multiple objectives, such as (accuracy on CIFAR-10, accuracy on ImageNet) or (accuracy on CIFAR-10, accuracy on Flowers, image segmentation on VOC), etc.\n\nI thus think the “Multi-Objective” is a bit overclaimed, and I strongly recommend that the authors adjust their claim to be more specific to what their method is doing.\n\n3. What value of $\\epsilon$ in Eqn (1) is used? Frankly, I think that if the authors train their newly generated children networks using some gradient descent methods (SGD, Momentum, Adam, etc.), then how can they guarantee the \\epsilon-ANM condition? Can you clarify and/or change the presentation regarding to this part?\n" ]
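The author responses above describe evaluating the cheap objectives f_cheap for the population, fitting a kernel density estimator over them (they mention the KDE implementations in scipy and scikit-learn), and preferring parents from sparsely populated regions of objective space. The sketch below illustrates that inverse-density weighting with scipy's gaussian_kde on synthetic data; the data, sizes, and weighting are invented for the example, and the paper's exact sampling distribution (its Eqs. 1-2 / 8-9) may differ.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Hedged sketch of density-based parent selection: fit a KDE on the
# population's cheap objective values (any dimensionality) and prefer
# parents from sparsely populated regions. The paper's exact normalization
# may differ from this simple inverse-density weighting.
cheap = rng.lognormal(mean=0.0, sigma=1.0, size=(3, 40))  # 3 cheap objectives, 40 models

kde = gaussian_kde(np.log(cheap))           # log scale, cf. log(#params) above
density = kde(np.log(cheap))                # p_kde evaluated at each model
weights = 1.0 / np.maximum(density, 1e-12)  # favor low-density (sparse) models
weights /= weights.sum()

parents = rng.choice(cheap.shape[1], size=8, replace=True, p=weights)
print(parents)  # indices of the selected parent models
```

Because the KDE input is simply an n-dimensional array, the same few lines work unchanged for 2, 3, or 5 cheap objectives, which is the point the authors make about multi-dimensional inputs being handled by default.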
[ 6, -1, -1, -1, -1, 6, 6 ]
[ 3, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2019_ByME42AqK7", "iclr_2019_ByME42AqK7", "BygMkWst37", "rJeAPV55hQ", "SJgMKr5h3X", "iclr_2019_ByME42AqK7", "iclr_2019_ByME42AqK7" ]
iclr_2019_ByMHvs0cFQ
Quaternion Recurrent Neural Networks
Recurrent neural networks (RNNs) are powerful architectures to model sequential data, due to their capability to learn short and long-term dependencies between the basic elements of a sequence. Nonetheless, popular tasks such as speech or images recognition, involve multi-dimensional input features that are characterized by strong internal dependencies between the dimensions of the input vector. We propose a novel quaternion recurrent neural network (QRNN), alongside with a quaternion long-short term memory neural network (QLSTM), that take into account both the external relations and these internal structural dependencies with the quaternion algebra. Similarly to capsules, quaternions allow the QRNN to code internal dependencies by composing and processing multidimensional features as single entities, while the recurrent operation reveals correlations between the elements composing the sequence. We show that both QRNN and QLSTM achieve better performances than RNN and LSTM in a realistic application of automatic speech recognition. Finally, we show that QRNN and QLSTM reduce by a maximum factor of 3.3x the number of free parameters needed, compared to real-valued RNNs and LSTMs to reach better results, leading to a more compact representation of the relevant information.
accepted-poster-papers
The authors derive and experiment with quaternion-based recurrent neural networks and demonstrate their effectiveness on speech recognition tasks (TIMIT and WSJ), showing that the proposed models can achieve the same accuracy with fewer parameters than conventional models. The reviewers were unanimous in recommending that the paper be accepted.
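The abstract above describes composing multidimensional features as single quaternion entities, and the author responses further down in this record give the concrete layout: a size-N real vector becomes N/4 quaternions of the form X[n] + X[n+N/4]i + X[n+2N/4]j + X[n+3N/4]k. The numpy sketch below is illustrative only and not the authors' code; it implements the standard Hamilton product and that splitting convention.

```python
import numpy as np

# Illustrative sketch (not the authors' code): the Hamilton product that a
# quaternion layer applies, plus the component layout described in the
# author responses, where a size-N real vector is split into N/4 quaternions.
def hamilton(q, p):
    a1, b1, c1, d1 = q
    a2, b2, c2, d2 = p
    return np.array([
        a1*a2 - b1*b2 - c1*c2 - d1*d2,
        a1*b2 + b1*a2 + c1*d2 - d1*c2,
        a1*c2 - b1*d2 + c1*a2 + d1*b2,
        a1*d2 + b1*c2 - c1*b2 + d1*a2,
    ])

def split_into_quaternions(x):
    """x: real vector of size N (N divisible by 4) -> (N/4, 4) quaternions."""
    r, i, j, k = np.split(np.asarray(x, dtype=float), 4)
    return np.stack([r, i, j, k], axis=1)

x = np.arange(8.0)             # N = 8 -> two quaternions
Q = split_into_quaternions(x)  # Q[0] = 0 + 2i + 4j + 6k, Q[1] = 1 + 3i + 5j + 7k
print(hamilton(Q[0], Q[1]))
```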
train
[ "SkxejkS5hX", "HJgC3kCNh7", "SkxSXlFmT7", "ryxPHgY7p7", "H1evr1F76m", "HJlaJJFQTm", "SJg4KAOmTQ", "B1ebqu4y6X" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "Quality: sufficient though there are issues. Work done in automatic speech recognition on numerous variants of recurrent models, such as interleaved TDNN and LSTM (Peddinti 2017), is completely ignored [addressed in the revision]. The description of derivatives needs to mention the linear relationship between input features and derivatives (see trajectory HMMs by Zen and Tokuda) [addressed in the revision]. TIMIT is a very simple task [addressed by adding WSJ experiments]. Derivations in the appendices could be connected better [addressed in the revision]. \n \nClarity: sufficient. It would be good to see some discussion of 1) split activations and other possible options [short comment added in the revision] if any 2) expressions of derivatives and their connection to standard RNN derivatives [short comment added in the revision], 3) computational complexity [addressed in the revision]. \n\nOriginality: sufficient. This paper describes the extension of quaternion feed-forward neural networks to recurrent neural networks and a parameter initialisation method in the quaternial domain.\n\nSignificance: sufficient. \n\nPros: Audience interested in quaternial neural networks would benefit from this publication. Experimental results even if limited suggest that quaternial representation may offer a significant reduction in the number of model parameters at no loss in performance. \n\nCons: The choice of derivatives to yield quaternions as there are other more interesting views to contemplate both in speech and other fields. A simple task makes it hard to judge how the quaternion extension would scale. \n\nOther:\n\nThe format of references, the use of a number in parentheses, is unusual and distractive. [fixed in the revision] \nPlease at least name all the terms used in the main paper body even if they are defined later in the appendix (e.g. h_{t}^{*} in equation 10). [fixed in the revision]\nDo both W_{hh} and b_{h} contain the same \\delta_{hh}^{t} term in their update equation 11? [fixed in the revision]\nPage 7 by mistake mentions 18.2% which cannot be found in the Table 1. [fixed in the revision]\nPage 12 \"is equals to\" [remains in the revision]\n", "After the discussion with authors, I am happy to recommend acceptance.\n————————————————————\n\n1.\tIn “Consequently, for each input vector of size N, output vector of size M, dimensions are split into four parts: the first one equals to r, the second is xi, the third one equals to yj, and the last one to zk to compose a quaternion Q = r1 + xi + yj + zk”, are you splitting dimension M or M\\times N? And if you split M \\times N (I believe that’s what you are doing), in which order you are splitting (row major right?) Please explain.\n2.\tI did not understand why authors didn’t go in the negative direction of the gradient in Eq. (10-11)?\n3.\tIn section 3.4, authors mentioned “Moreover, an hyper-complex parameter cannot be simply initialized randomly and component-wise, due to the interactions between components.” which I strongly agree. But in Eq. (7) and (9) why the update rules and activation function are applied component wise?\n4.\tI really like the elegance in the parameter initialization. Couple of minor things here: (1) It’s better to mention in Eq. (16) why E(|W|) is 0 because of symmetry. (2) Reference should be 6.1 instead of 5.1.\n5.\tAnother reasonable baseline will be using a complex network like (https://openreview.net/forum?id=H1T2hmZAb) and use the first two terms in Eq. (19) for representation. 
This will also possibly justify the usefulness of using higher order partials. \n6.\tThe authors mentioned multiple times about the achieved state-of-the-art results without giving any citation. As a reader not well versed in the acoustic domain, it will be nice to see some references to cross-validate the claim made.\n\n\n\nGeneral Comments:\n1.\tI understand the necessity of defining RNN/ LSTM model in the space of quaternions. But unit quaternions can be identified with other spaces where convolution is defined recently, e.g., with S^3 (https://arxiv.org/abs/1809.06211). I can see that this paper is contemporary, but at least can authors comment on the applicability of this general method in their case? Given that in NIPS’18 the following paper talked about RNN model on non-Euclidean spaces (https://arxiv.org/pdf/1805.11204.pdf), one can extend these ideas to develop an RNN model in the space of quaternions. Authors should look into it rigorously as future directions? But at least please comments on the applicability.\n2.\tThe experimental results section is somewhat weak, the overall claim of using fewer parameters and achieving comparable results is only validated on TIMIT data. More experimentation is necessary. \n3.\tIn terms of technical novelty, though quaternion algebra is well-known, I like the parameter initialization algorithm. I can see the merit of this in ML/ vision community. \n\nPros: \n1. Nice well grounded methodological development on well-known algebra. (simple but elegant, so that's good).\n2. Nicely written and all the maths check out (that's good).\n3. Experimental result on TIMIT dataset shows usefulness in terms of using fewer parameters (but still can achieve SOA results).\n\nCons:\n1. See my comments above. I expect the authors to rebut/ address the aforementioned comments. Overall though simple but nice (and necessary) development of RNN/ LSTM framework in the space of quaternions. \n2. Lacks extensive experimental validation.\n\nMy reason for my rating is mainly because of (1) lack of experimental validation. (2) being aware of the recent development of general RNN model on non-Euclidean spaces, I want some comments in this direction (see detailed comment and reference above).\n", "——— General comments\n\n1. This is a critical point, indeed. The question « Complex and quaternion valued networks are ok, but what if we want to go in higher dimensions ? » is a common question. We are aware that many works exist in higher dimensions (Octonions, sedenions ..) or even on « generic » models that could apply to any dimensionality (like Clifford algebra based neural networks, or ManifoldNets). Therefore, we acknowledge the fact that it should be possible to reduce the dimension of such different spaces to 4, but then, we would end up with neural networks that will behave as quaternion-valued neural networks. Quaternion-valued neural networks are a special case of such high-dimensional algebras, and are thus suitable to perfectly solve specific problematics that could be very useful for many domain areas, such as image processing, human pose estimation, 3D transformations,…,. It is therefore important to first clearly define a quaternion-valued neural network as a specific neural network using a particular algebra. However, it is clear that there is a need for higher dimensional neural networks (such as ManifoldNets), and there is plenty of rooms for investigations. \n\n2. We agree that TIMIT is too small. 
Therefore, and as described in the answer to the questions of reviewer 2, we added results on a larger speech recognition task (Wall Street Journal) in the supplementary materials. The reported results confirm those observed on the TIMIT task, with fewer parameters and slightly lower Word Error Rates. \n\n3. We thank reviewer 3 for this encouraging statement.\n\nWe truly hope that we answered all your questions and remarks, and we are still open to any discussion on this work. ", "We would like to first thank reviewer 3 for the detailed and useful feedback. We start by addressing each of the initial points raised:\n\n1. Let us take the example of an input vector X of size N=256 and an output vector O of size M=512. During computations, both X and O are one-dimensional real-valued vectors. Numbers contained in X[0,…,63] are real components, while X[64,…,127] belong to the component i. Therefore the first quaternion is X[0] + X[64]i + X[128]j + X[192]k, and the same considerations apply to the output vector O. At the end, we have N/4 input quaternions and M/4 output quaternions. \n\n2. The update phase of the NN parameters with respect to the gradient direction actually depends on the task. We corrected it to go in the negative direction of the gradient; we thank the reviewer for suggesting this correction. \n\n3. The authors agree with the fact that the split activation functions do not seem to perfectly suit quaternion networks. Therefore we added a statement in the paper to motivate the use of split activation functions. Nonetheless, these quaternion activation functions have been found to be more stable (purely quaternion functions have singularities) and easier to compute, making them interesting for QRNNs. We plan in the near future to investigate QNNs that use quaternions from the input to the output (pure quaternion activations, full rotations), but we believe that these networks might be harder to train due to singularities deriving from the use of quaternion algebra. Furthermore, the BPTT for quaternions is defined based on the initial work on back propagation in the quaternion domain proposed by P. Arena. In the latter, the loss (Eq. 9) has to be calculated with respect to each component of the quaternion. Indeed, we have to evaluate how much each component of a given quaternion parameter affects the whole loss. Then, by applying the chain rule, we end up with a component-wise product in Eq. 13 due to the split activations, which is simpler and far less computationally intensive than calculating the derivative of a quaternion-valued function. \n\n4. Fixed.\n\n5. We thought a lot about baselines during the experiments. The main issue is that it is not possible to compare complex-valued NNs (CVNNs) to QNNs in a fair setting. Indeed, in the case proposed by reviewer 2, CVNNs will clearly have less information and will give worse results. Then we could use CVNNs with magnitude and phase directly from the signal, but the input space would be different compared to QNNs, and we won’t have comparable results. Many papers use well-engineered features, more complex structures (attention mechanisms, gates, etc.), or even training regularization (batch normalization), and it won’t be fair to compare our « vanilla » QRNN and QLSTM to such models. For these reasons, we have decided to add real-valued RNN and LSTM baselines that are exactly the same as the quaternion-valued ones, to obtain fair comparisons. 
However, it is clear that it could be interesting to build and investigate a complex and state-of-the-art quaternion-valued model, but we have to first introduce the basics of QRNN/QLSTM based models.\n\n6. The authors agree with this remark. As suggested by reviewer 3, we added more citations and reported results from the literature in the paper (in terms of PER) to help the reader better compare and evaluate the observed results. ", "——— Pros and Cons\n\nAs stated above, we added WSJ experiments to validate the results observed with the small TIMIT dataset. Reviewer 1’s statement about actual quaternion acoustic features is definitely true, and we propose in the conclusion to investigate novel multi-view features that could be better adapted. \n\nWe truly hope that we answered all the remarks and questions of reviewer 1, and we are available for any further discussion. \n", "We would like to first thank reviewer 1 for the useful feedback. In the following, we address typos and general comments.\n\n——— Typos and general comments\n\nThe format of references has been modified to match the standard ICLR format (Name et al., year).\n\nThe authors agree with the fact that the notation has to be better explained to make the paper clearer for the reader. Therefore, we have added a sentence to clarify h_{t}^{*}. \n\nWe would like to thank reviewer 1 for highlighting that b_{h} was given the wrong delta during the backpropagation. The right equation for b_{h} has been added to the paper as well as to the supplementary material. \n\nOther typos have been corrected.\n\n——— Quality\n\nAs suggested by reviewer 1, we added the missing references to prior works on ASR systems in the introduction, and the linear relation between input features and derivatives.\n\nThe authors agree with the fact that TIMIT is a very simple task, but this framework allows us to evaluate the relevance of RNNs in terms of performance and the number of parameters required. Therefore, in the revised version of the paper, we have added experiments on the Wall Street Journal (WSJ) speech recognition task (in the supplementary materials) based on both the 14-hour and the 81-hour training data-sets. Experiments are conducted with the same configurations as the models that obtained the best results during the speech recognition experiments on the TIMIT data-set. As expected, QLSTMs scale well (as real-valued LSTMs do) to larger data-sets, and the performances observed during these experiments support the fact that QLSTMs perform better (w.r.t. WER), and with fewer parameters. \n\n——— Clarity\n\nQuaternion NNs suffer from still being little employed. Therefore, we understand that many concepts such as the « split activation functions » raise legitimate questions. We added some words in the paper to motivate the use of split activation functions. Nevertheless, and as we have indicated through new citations, the split activation functions have already been well investigated, and we could only paraphrase what the original authors demonstrated. However, this point raised by Reviewer 1 has to be investigated in a dedicated work, to allow the reader to easily follow a study that compares different activation functions as well as different function methods (split or not). \n\nThe authors agree with reviewer 1 and, therefore, we have added a paragraph to clarify the decomposition of derivatives (below Eq. 9) in the paper and in the appendix (below Eq. 40). 
Indeed, the derivatives of different elements/views of the same feature under a real-valued BPTT process do not allow the RNN based model to learn how to compute the whole dynamic of the error (dE/dW), due to the fact that the dynamics of each element composing the features are not merged/mixed to compose the derivative of the whole observed error. This process of merging partial derivatives from each element (r, i, j, k for quaternions and 4 different input features for real numbers) is managed by the weight matrices and hidden states in the context of real-valued RNN based models during the learning process. The authors agree that these intuitions have to be supported by solid experiments and model analyses. Therefore, we also plan to investigate the internal dynamics (through partial derivatives that contribute to the total dynamic) of QRNNs compared to real-valued RNNs to better understand the benefits of the QBPTT (in addition to better results and fewer parameters). \n\nComputational complexity is also a very good point, but hard to fairly answer in the current state of QNNs. Nonetheless, and as requested by reviewer 1, we added a paragraph to the paper (Appendix 6.1.2) to acknowledge the fact that computational complexity can be a problem. From a pure computational complexity perspective: QLSTM = LSTM = O(n^2). Indeed, due to the real-valued representation of quaternions, QLSTMs perform the same matrix operations, but with 4 times bigger matrices. From a computational time perspective, a simple forward propagation between two quaternion neurons involves 28 computations. Therefore QNNs are slower to train (2 to 3 times slower, depending on the model) due to this much higher number of operations. Nonetheless, we also know that such computations are matrix products. We believe that proper GPU engineering (a cuDNN kernel) of the Hamilton product could drastically reduce the computation time by doing these 28 computations in parallel, implying a more efficient usage of the available resources. Furthermore, with a proper cuDNN kernel, one will obtain a better memory/computation ratio. Indeed, QNNs do more computations, but with fewer parameters. This point will be detailed in a proper section in the appendix of the final version of the paper.\n", "The authors thank the reviewer for the positive and constructive feedback. We appreciate that the reviewer finds that our paper on QRNNs is clearly explained, viable and thoroughly evaluated.\n\nIn this work, we decided to show that even with traditional acoustic features (Mel-filter-bank + derivatives), we could motivate and introduce quaternion-valued recurrent neural networks. Nonetheless, as underlined by reviewer 2, a future work will be to investigate proper quaternion acoustic features (or even features from other domains). Indeed, current features are mostly engineered for a real-valued representation, and there is plenty of room to explore quaternion-valued features (such as complex-valued features in the case of speech recognition with complex neural networks, or the quaternion Fourier transform). \n", "The paper takes a good step toward developing more structured representations by exploring the use of quaternions in recurrent neural networks. The idea is motivated by the observation that in many cases there are local relationships among elements of a vector that should be explicitly represented. 
This is also the idea behind capsules - to have each \"unit\" output a vector of parameters to be operated upon rather than a single number. Here the authors show that by incorporating quaternions into the representations used by RNNs or LSTMs, one achieves better performance at speech recognition tasks using fewer parameters.\n\nThe quaternionic representation of the spectrogram chosen here seems a bit arbitrary. Why are these the attributes to be packaged together? It's not obvious. Shouldn't this be learned?\n\n" ]
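An editorial aside to make two computational points from the quaternion exchange above concrete: the r|i|j|k component layout described in point 1 of the response to reviewer 3, and the 28 scalar computations per Hamilton product quoted in the complexity reply. The sketch below is ours, not taken from the paper's code; the function names and toy sizes are illustrative assumptions.

```python
import numpy as np

def split_into_quaternions(x):
    # Read a real vector of size N as N/4 quaternions: the four consecutive
    # quarters of x hold the r, i, j, k parts, so the first quaternion of a
    # size-256 vector is x[0] + x[64]i + x[128]j + x[192]k (as in the response).
    n = x.shape[-1] // 4
    return x[:n], x[n:2*n], x[2*n:3*n], x[3*n:]

def hamilton_product(p, q):
    # Hamilton product of two quaternions given as (r, i, j, k) tuples.
    # Cost: 16 multiplications + 12 additions = 28 scalar computations,
    # the figure quoted in the complexity discussion above.
    r1, x1, y1, z1 = p
    r2, x2, y2, z2 = q
    return (r1*r2 - x1*x2 - y1*y2 - z1*z2,
            r1*x2 + x1*r2 + y1*z2 - z1*y2,
            r1*y2 - x1*z2 + y1*r2 + z1*x2,
            r1*z2 + x1*y2 - y1*x2 + z1*r2)

x = np.random.randn(256)            # N = 256 -> 64 quaternions
r, i, j, k = split_into_quaternions(x)
q0 = (r[0], i[0], j[0], k[0])       # x[0] + x[64]i + x[128]j + x[192]k
print(hamilton_product(q0, q0))
```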
[ 7, 7, -1, -1, -1, -1, -1, 8 ]
[ 5, 5, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2019_ByMHvs0cFQ", "iclr_2019_ByMHvs0cFQ", "HJgC3kCNh7", "HJgC3kCNh7", "HJlaJJFQTm", "SkxejkS5hX", "B1ebqu4y6X", "iclr_2019_ByMHvs0cFQ" ]
iclr_2019_ByMVTsR5KQ
Adversarial Audio Synthesis
Audio signals are sampled at high temporal resolutions, and learning to synthesize audio requires capturing structure across a range of timescales. Generative adversarial networks (GANs) have seen wide success at generating images that are both locally and globally coherent, but they have seen little application to audio generation. In this paper we introduce WaveGAN, a first attempt at applying GANs to unsupervised synthesis of raw-waveform audio. WaveGAN is capable of synthesizing one second slices of audio waveforms with global coherence, suitable for sound effect generation. Our experiments demonstrate that—without labels—WaveGAN learns to produce intelligible words when trained on a small-vocabulary speech dataset, and can also synthesize audio from other domains such as drums, bird vocalizations, and piano. We compare WaveGAN to a method which applies GANs designed for image generation on image-like audio feature representations, finding both approaches to be promising.
accepted-poster-papers
This paper proposes a GAN model to synthesize raw-waveform audio by adapting the popular DC-GAN architecture to handle audio signals. Experimental results are reported on several datasets, including speech and instruments. Unfortunately this paper received two low-quality reviews, with little signal. The only substantial review was mildly positive, highlighting the clarity, accessibility and reproducibility of the work, and expressing concerns about the relative lack of novelty. The AC shares this assessment. The paper claims to be the first successful GAN application operating directly on wave-forms. Whereas this is certainly an important contribution, it is less clear to the AC whether this contribution belongs to a venue such as ICLR, as opposed to ICASSP or Ismir. This is a borderline paper, and the decision is ultimately relative to other submissions with similar scores. In this context, given the mainstream popularity of GANs for image modeling, the AC feels this paper can help spark significant further research in adversarial training for audio modeling, and therefore recommends acceptance. I also encourage the authors to address the issues raised by R1.
train
[ "r1lL9-zDTX", "S1lL5vpRhm", "ByxwkkouTX", "r1x7zyjOam", "r1gFfA7GaX", "SJeGDaQMaQ", "Skgki1VfT7", "BJxnO07zTQ", "HkeNkp5xTQ", "HJgpLMhn37" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thanks for your clarifications. Here are my thoughts on modifying the score.\n\n“““ the algorithmic contribution is limited. ”””\n\nIf as said in the response, the concrete methodological contributions are phase shuffle (Section 3.3) and the learned post processing filters (Appendix B). At first reading of the paper, these contributions are not clear. These statements are not presented until Section 3.3 and Section 5 (EXPERIMENTAL PROTOCOL). In the Abstract and Introduction, it is said that the barrier to success application of GANs to audio generation is the non-invertible spectral representation. \n\nIf WaveGAN is what the paper introduces to overcome the non-invertible issue, it is confusing to see that SpecGAN outperforms WaveGAN by a large margin in Inception score. And I think, 58% accuracy for WaveGAN vs 66% for SpecGAN cannot be said to be similar. The non-invertible issue is not a issue.\n\nPhase shuffle increases Inception scores substantially (4.12->4.67) in WaveGAN, but deteriorate Inception score in SpecGAN. And there is no discussion about this.\n\nIt is appreciated that the paper presents a nice effort to apply GANs to audio generation. But the presentation should be improved to make clearer the concrete methodological contributions and to present more consistent results.\n\n“““ Qualitative ratings are poor. ”””\n\nAround 60% accuracy for generated data is not a strong evidence that WaveGAN/SpecGAN as presented in this paper is promising. Thinking about generating digit images 0-9 by training GANs in MNIST. The labeling accuracy for generated images would be much higher than 60%. GAN based audio synthesis is interesting and should be promising. But the results shown in this paper does not fully validate this.\n\n\n=========== comments after reading response ===========\n\nThe reviewer would like to thank the authors for their response, which clarifies some unclear issues. However, the response does not address my main concern about the algorithmic contribution of the proposed method. \n\n> While we outlined additional methodological contributions (e.g. phase shuffle) in response to your initial review, our *primary* contribution is still a GAN that operates on raw audio waveforms. Before this paper, the ability of GANs to generate one dimensional time series data had not been demonstrated.\n\nThis seems to be a overstate. Pascual et al. (2017) (SEGAN) has already shown the ability of GANs to conditionally generate one dimensional time series data. Instead of simply saying that \"Pascual et al. (2017) apply GANs to raw audio speech enhancement.\", it would be better to provide more relevant comparisons, inform the readers that the difference between Pascual et al. (2017) and this paper is conditional generation vs unconditional generation, and clarifies the difficulty in unconditional generation.\n\nThe paper consists of interesting efforts and contributions. I would like to suggest the authors to move the contributions of phase shuffle (Section 3.3) and the learned post processing filters (Appendix B) to the foreground. This presentation problem make me hold the scoring.", "This paper proposes WaveGAN for unsupervised synthesis of raw-wave-form audio and SpecGAN that based on spectrogram. Experimental results look promising.\n\nI still believe the goal should be developing a text-to-speech synthesizer, at least one aspect.", "Thank you for elaborating. We still do not understand what specifically caused you to *change* your initial score of 6 to a 5. 
Respectfully, these criticisms appear to be post-hoc justification for your updated score. Nevertheless, we will address your concerns below, and have updated the paper with minor revisions based on your feedback:\n\n** Clarifying contributions **\n\n“““If as said in the response, the concrete methodological contributions are phase shuffle (Section 3.3) and the learned post processing filters (Appendix B). At first reading of the paper, these contributions are not clear. These statements are not presented until Section 3.3 and Section 5 (EXPERIMENTAL PROTOCOL). In the Abstract and Introduction, it is said that the barrier to success application of GANs to audio generation is the non-invertible spectral representation. ”””\n\nWhile we outlined additional methodological contributions (e.g. phase shuffle) in response to your initial review, our *primary* contribution is still a GAN that operates on raw audio waveforms. Before this paper, the ability of GANs to generate one dimensional time series data had not been demonstrated.\n\n** WaveGAN vs SpecGAN **\n\n“““The non-invertible issue is not a issue.”””\n\nSimply put, the non-invertibility of SpecGAN *is* an issue. If you naively apply image GANs to audio generation (by operating on spectrograms i.e. SpecGAN), the non-invertibility of the spectrograms is a major barrier to downstream usability because the resultant audio quality is atrocious (see links below). While humans are able to label digits generated by SpecGAN with higher accuracy than those generated by WaveGAN, the human-assessed subjective sound quality of SpecGAN is worse (and a simple listening test confirms this).\n\nBy operating directly on waveforms, our WaveGAN method achieves higher audio quality, is simpler to implement, and is a first for generative modeling of audio using GANs.\n\nWaveGAN (recognizable and better audio quality): http://iclr-wavegan-results.s3-website-us-east-1.amazonaws.com/wavegan_sc09.wav\nSpecGAN with *approximate* inversion (recognizable but poor audio quality): http://iclr-wavegan-results.s3-website-us-east-1.amazonaws.com/specgan_sc09.wav\n\n** Presentation of qualitative ratings **\n\n“““Around 60% accuracy for generated data is not a strong evidence that WaveGAN/SpecGAN as presented in this paper is promising. Thinking about generating digit images 0-9 by training GANs in MNIST. The labeling accuracy for generated images would be much higher than 60%.”””\n\nIt is unfair to compare our results to the hypothetical human labeling performance of digits generated by a GAN trained on MNIST. While MNIST may have the same number of semantics modes (10) as our SC09 digit dataset, these datasets are quite different in terms of dimensionality. Images in MNIST can be seen as vectors in 784-dimensional (28x28) space, whereas waveforms in SC09 are vectors in 16000-dimensional space. Higher dimensionality does not necessarily equate to greater difficulty to generative modeling, but it certainly should discourage direct comparison.\n\nWe argue that our results are indeed “promising”. We developed and compared multiple methods for generating audio waveforms with GANs, a first for the field. Our results are analogous to early papers in image generation with GANs (e.g. 
DCGAN from ICLR 2016), and such results laid groundwork for remarkable breakthroughs in high-resolution image synthesis.\n\n** Clarifying inception scores **\n\n“““If WaveGAN is what the paper introduces to overcome the non-invertible issue, it is confusing to see that SpecGAN outperforms WaveGAN by a large margin in Inception score. And I think, 58% accuracy for WaveGAN vs 66% for SpecGAN cannot be said to be similar. The non-invertible issue is not a issue.”””\n\nWhile many GAN papers use inception score as a primary evaluation metric, we state that our intention is to use human evaluations as our primary metric: “While inception score is a useful metric for hyperparameter validation, our ultimate goal is to produce examples that are intelligible to humans. To this end, we measure the ability of human annotators...”\n\nIn our updated manuscript, we additionally hypothesize a reason behind the discrepancy between inception scores and subjective quality assessments: “this discrepancy [between inception score and human assessments of quality] can likely be attributed to the fact that inception scores are computed on spectrograms while subjective quality assessments are made by humans listening to waveforms.”\n\nFurthermore, while the focus of our paper is on WaveGAN as it is a non-trivial application of GANs to audio generation, we acknowledge that spectrogram-based methods also achieve reasonable results for our task despite audio quality issues: “We see promise in both waveform and spectrogram audio generation with GANs; our study does not suggest a decisive winner.”", "“““Phase shuffle increases Inception scores substantially (4.12->4.67) in WaveGAN, but deteriorate Inception score in SpecGAN. And there is no discussion about this.”””\n\nApplied to spectrograms, phase shuffle is a radically different operation than it is for waveform because spectrograms have more compact temporal axes, and we fill in jittered samples with padding. This means that, in the worst case, a SpecGAN discriminator with minimum phase shuffle (n=1) may be observing 468ms (nearly half the example) of padded waveform. On the other hand, a WaveGAN discriminator with n=1 observes a worst case of 83ms of padded waveform.\n\nWe have added a sentence to our paper: “Phase shuffle decreased the inception score of SpecGAN, possibly because the operation has an exaggerated effect when applied to the compact temporal axis of spectrograms.”", "Thank you for your feedback. We appreciate that you found our application to be interesting. We will address your criticisms in order.\n\nWe noticed you changed your score from a 6 to a 5 without updating the text of your review. We would be happy to address your concerns if you can provide additional context as to the reasoning behind your rating change.\n\n“““ the algorithmic contribution is limited. ”””\n\nWe would like to reiterate that our paper is the first to apply GANs to audio generation which is not as straightforward as simply adapting existing models. Specifically, we believe we have made concrete methodological contributions such as phase shuffle (Section 3.3) and the learned post processing filters (Appendix B). 
In particular, phase shuffle was observed to increase Inception scores substantially (4.1->4.7), and, to our ears, made the difference between spoken digits that were intelligible and those that were unintelligible.\n\nSpoken digits from WaveGAN **with** phase shuffle (more intelligible): http://iclr-wavegan-results.s3-website-us-east-1.amazonaws.com/quant_wavegan_ps2.wav\nSpoken digits from WaveGAN **without** phase shuffle (less intelligible): http://iclr-wavegan-results.s3-website-us-east-1.amazonaws.com/quant_wavegan.wav\n\n“““ Qualitative ratings are poor. ”””\n\nAs our task seeks to evaluate how well GANs can capture the semantic modes (vocabulary words in this case) of the training data, the primary qualitative metric to pay attention to should be the labeling accuracy. We believe our results of around 60% accuracy for generated data show that our approach is promising (note that random chance would be 10%).\n\nOn the subject of the qualitative ratings, our primary goal with this work is to provide a reasonable first pass at this problem, as well as define a task with clear and reproducible evaluation methodology to allow ourselves and others to iterate further. We believe our qualitative results are adequate, but note that improving these scores is a promising avenue for future work by integrating recent breakthroughs in image processing such as spectral normalization (Miyato et al. ICLR 2018) and progressive growth (Kerras et al. ICLR 2018).\n\n“““ The important problem of generating variable-length audio is untouched. ”””\n\nWe were the first to tackle fixed-length audio generation with GANs, a task which is already useful for application in several creative domains that we mention in the paper (music production, film scoring). We hope to build on our results in future work to address the challenging problem of generating variable-length audio.", "We would like to thank all of the reviewers for their thoughtful comments and suggestions. We have uploaded a new version of our manuscript with improvements based on reviewer feedback. Reviews were all positive for our paper (though one reviewer has since lowered their score without explanation), with reviewers highlighting the promising nature of our results as well as the clarity and reproducibility of our paper. We will respond to specific comments from each reviewer separately. If reviewers would like to provide additional context behind their scores we would be happy to provide feedback.", "Thank you for your thoughtful comments and suggestions. We will respond to each of your points below.\n\n** Explicit mention of methodological limitations **\n\nWe have updated the abstract and introduction to clarify that our model produces fixed-length results. We added the following sentence to our abstract “WaveGAN is capable of synthesizing one second audio waveforms with temporal coherence, suitable for sound effect generation.” We also added a similar clarification to paragraph 5 of the introduction (specifying that the generated waveforms are one second in length).\n\n** Justification for spectrogram pre-processing **\n\f\nWe added justification for our spectrogram preprocessing to the last paragraph of page 4.\n\n** Discussion of existing methods (e.g. 
WaveNet) **\n\n“““The paper dismisses existing generative methods early in the evaluation phase … it would have been beneficial to discuss and understand the failures of existing methods in more detail to convince the reader that a fair attempt has been made to getting competitors to work before leaving them out entirely”””\n\nWe had originally included some of these details in our paper but they were cut for brevity. We agree that we cut too much, and have added details back into the paper in the form of a new Appendix section (Appendix C) with a pointer from the main paper. A summary follows:\n\nHow autoregressive waveform models (e.g. WaveNet) factor into the story and evaluation of our paper is a tricky subject, and one that we tried to handle thoughtfully. First and foremost: *the two public implementations of WaveNet that we tried simply failed to produce reasonable results* (sound examples can be heard at the bottom of http://iclr-wavegan-results.s3-website-us-east-1.amazonaws.com ). We did informally pre-screen these results ourself (and you can as well) and concluded that they were clearly noncompetitive. We also calculated Inception scores for these experiments: they were 1.067 +- 0.045 and 1.293 +- 0.027 respectively.\n\nWe reasoned that including these (poor) numbers in our results table would send the wrong message to readers. Namely, it would appear that we are claiming our methods works better than WaveNet. *This is NOT a claim that we are attempting to make*, as WaveNet was developed for a different problem (text-to-speech) than the one we are focusing on (learning the semantics of short audio clips). WaveNet additionally has no concept of a latent space, which would not allow for the same steerable exploration of sound effects that our model aspires to achieve (outlined in the introduction). Furthermore, we expect that the proprietary implementation of WaveNet would produce something more reasonable for our spoken digits task, but unfortunately we do not have access to it.\n\n** User study clarification **\n\n“““ It is unclear to me how many people annotated the individual samples? ”””\n\nWe have 300 examples of each digit, resulting in 3000 total labeling problems (name the digit 1-10). We give these to 300 annotators in random batches of 10 examples, and ask for qualitative assessments at the end of each batch. Accordingly, we have 300 responses to each qualitative metric (quality, easy, diversity). Standard deviations for MOS scores are around 1 for each category, resulting in small standard errors (~0.06) for n=300. We have added the standard deviations to our paper table and updated the text in Section 6.3 to clarify these details.\n\n“““ Consider including a reflection on (or perhaps even test statistically) the alignment between the qualitative diversity/quality scores and the subjective ratings to justify the use of the objective scores in the training/selection process ”””\n\nThe evaluation of generative models is a fraught topic and the lack of correlation between quantitative and qualitative metrics is known (see “A note on the evaluation of generative models” Theis et al. ICLR 2016). In the scope of our work, we do not have enough data points (only three for the expensive Mechanical Turk evaluations) to reach substantive conclusions about the correlation between e.g. 
Inception score and mean opinion scores for quality.\n\nWe hypothesize that the discrepancy between our quantitative metrics (Inception score, nearest neighbor comparisons) and subjective metrics (MOS scores) is due to the fact that the former are computed from spectrograms while the latter are from humans listening to waveforms. Unfortunately, evaluation of Inception score in the waveform domain was impractical as we were unable to train a waveform domain classifier that achieved reasonable accuracy on this classification task (note that we mention in our abstract that audio classifiers usually operate on spectrograms). However, we have updated our discussion to clarify: “This discrepancy can likely be attributed to the fact that inception scores are computed on spectrograms while subjective quality assessments are made by humans listening to waveforms.”", "Thank you for highlighting that our experimental results are promising. As you mentioned, we state in our paper that “though our evaluation focuses on a speech generation task, we note that it is not our goal to develop a text-to-speech synthesizer.” *We are primarily targeting generation of novel sound effects as our task.* We think this is an important task with immediate application to creative domains (e.g. music production, film scoring) and is orthogonal to the task of synthesizing realistic speech from transcripts. Our model is already capable of producing convincing results on this task for several different sound domains. Furthermore, whereas the goals for text to speech are to synthesize a given transcript, we are providing a method which enables user-driven content generation through exploration of a compact latent space of sound effects.\n\nOur purpose for focusing our evaluation on a speech generation task is to enable straightforward annotating for humans on Mechanical Turk. From the paper: “While our objective is sound effect generation (e.g. generating drum sounds), human evaluation for these tasks would require expert listeners. Therefore, we also consider a speech benchmark, facilitating straightforward assessment by human annotators.”", "This paper applies GANs for unsupervised audio generation. Particularly, DCGAN-like models are applied for generating audio. This application is interesting, but the algorithmic contribution is limited.\n \nQualitative ratings are poor. The important problem of generating variable-length audio is untouched.\n", "\n\n*Pros:*\n-\tEasily accessible paper with good illustrations and a mostly fair presentation of the results (see suggestions below).\n-\tIt is a first attempt to generate audio with GANs which results in an efficient scheme for generating short, fixed-length audio segments of reasonable (but not high) quality.\n-\tHuman evaluations (using crowdsourcing) provides empirical evidence that the approach has merit.\n-\tThe paper appears reproducible and comes with data and code.\n\n*Cons*:\n-\tPotentially a missing comparison with existing generative methods (e.g. WaveNet). 
See comments/questions below ** \n-\tThe underlying idea is relatively straightforward in that the proposed method is a non-trivial application of already known techniques from ML and audio signal processing.\n\n*Significance*: The proposed GAN-based audio generator is an interesting step in the development of more efficient audio generation, and it is of interest to a subcommunity of ICLR as it provides a number of concrete techniques for applying GANs to audio.\n\n*Further comments/questions:*\n-\tAbstract/introduction: I’d suggest being more explicit about the limitations of the method, i.e. you are currently able to generate short and fixed-length audio.\n-\tSpecGAN (p 4): I’d suggest including some justification of the chosen pre-processing of spectrograms (p. 4, last paragraph). \n-\t** Evaluation: The paper dismisses existing generative methods early in the evaluation phase, but the justification for doing so is not entirely clear to me: Firstly, if the inception score is used as an objective criterion, it would seem reasonable to include the values in the paper. Secondly, as inception scores are based on spectrograms, they could potentially favour methods using spectrograms directly (SpecGAN) or indirectly (WaveGAN, via early stopping), thus putting the purely sample-based methods (e.g. WaveNet) at a disadvantage. It would seem fair to pre-screen the audio before dismissing competitors instead of solely relying on potentially biased inception scores (which was probably also done in this work, but not clearly stated…). Finally, while not the aim of the paper, it would have been beneficial to discuss and understand the failures of existing methods in more detail to convince the reader that a fair attempt has been made to get competitors to work before leaving them out entirely. \n-\tResults/analysis: It is unclear to me how many people annotated the individual samples. What is the standard deviation over the human responses (perhaps include it in Tab. 1)? Consider including a reflection on (or perhaps even statistically testing) the alignment between the qualitative diversity/quality scores and the subjective ratings to justify the use of the objective scores in the training/selection process.\n-\tRelated work: I think it would provide a better narrative if the existing techniques were outlined earlier on in the paper.\n" ]
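A note on phase shuffle, which the responses above single out as a key methodological contribution: the thread says only that discriminator activations are jittered along time by -n to n samples and that the shifted-in samples are filled with padding. One plausible PyTorch sketch is below; the reflection padding mode is our assumption, since the padding type is not specified in the discussion.

```python
import torch
import torch.nn.functional as F

def phase_shuffle(x, n=2):
    # x: (batch, channels, time). Shift all activations along time by a random
    # k ~ Uniform{-n, ..., n}; shifted-in samples are filled by reflection
    # padding (assumed -- the thread only says "padding").
    k = int(torch.randint(-n, n + 1, (1,)).item())
    if k == 0:
        return x
    if k > 0:  # shift right: pad k samples on the left, drop k on the right
        return F.pad(x, (k, 0), mode='reflect')[..., :x.shape[-1]]
    # shift left: pad |k| samples on the right, drop |k| on the left
    return F.pad(x, (0, -k), mode='reflect')[..., -x.shape[-1]:]

h = torch.randn(4, 64, 1024)          # toy discriminator activations
print(phase_shuffle(h, n=2).shape)    # torch.Size([4, 64, 1024])
```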
[ -1, 6, -1, -1, -1, -1, -1, -1, 5, 6 ]
[ -1, 3, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "r1gFfA7GaX", "iclr_2019_ByMVTsR5KQ", "r1lL9-zDTX", "ByxwkkouTX", "HkeNkp5xTQ", "iclr_2019_ByMVTsR5KQ", "HJgpLMhn37", "S1lL5vpRhm", "iclr_2019_ByMVTsR5KQ", "iclr_2019_ByMVTsR5KQ" ]
iclr_2019_Bye5SiAqKX
Preconditioner on Matrix Lie Group for SGD
We study two types of preconditioners and preconditioned stochastic gradient descent (SGD) methods in a unified framework. We call the first one the Newton type due to its close relationship to the Newton method, and the second one the Fisher type as its preconditioner is closely related to the inverse of Fisher information matrix. Both preconditioners can be derived from one framework, and efficiently estimated on any matrix Lie groups designated by the user using natural or relative gradient descent minimizing certain preconditioner estimation criteria. Many existing preconditioners and methods, e.g., RMSProp, Adam, KFAC, equilibrated SGD, batch normalization, etc., are special cases of or closely related to either the Newton type or the Fisher type ones. Experimental results on relatively large scale machine learning problems are reported for performance study.
accepted-poster-papers
The method presented here adapts an SGD preconditioner by minimizing particular cost functions whose minimizers are the inverse Hessian or the inverse Fisher matrix. These cost functions are minimized using natural (or relative) gradient on the Lie group, as previously introduced by Amari. This can be extended to learn a Kronecker-factored preconditioner similar to K-FAC, except that the preconditioner is constrained to be upper triangular, which allows the relative gradient to be computed using back substitution rather than inversion. Experiments show modest speedups compared to SGD on ImageNet and language modeling. There's a wide divergence in reviewer scores. We can disregard the extremely short review by R2. R1 and R3 each did very careful reviews (R3 even tried out the algorithm), but gave scores of 5 and 8. They agree on most of the particulars, but just emphasized different factors. Because of this, I took a careful look, and indeed I think the paper has significant strengths and weaknesses. The main strength is the novelty of the approach. Combining relative gradient with upper triangular preconditioners is clever, and allows for a K-FAC-like algorithm which avoids matrix inversion. I haven't seen anything similar, and this method seems potentially useful. R3 reports that (s)he tried out the algorithm and found it to work well. Contrary to R1, I think the paper does use Lie groups in a meaningful way. Unfortunately, the writing is below the standards of an ICLR paper. The title is misleading, since the method isn't learning a preconditioner "on" the Lie group. The abstract and introduction don't give a clear idea of what the paper is about. While some motivation for the algorithms is given, it's expressed very tersely, and in a way that will only make sense to someone who knows the mathematical toolbox well enough to appreciate why the algorithm makes sense. As the reviewers point out, important details (such as hyperparameter tuning schemes) are left out of the experiments section. The experiments are also somewhat problematic, as pointed out by R1. The paper compares only to SGD and Adam, even though many other second-order optimizers have been proposed (and often with code available). It's unclear how well the baselines were tuned, and at the end of the day, the performance gain is rather limited. The experiments measure only iterations, not wall clock time. On the plus side, the experiments include ImageNet, which is ambitious by the standards of an algorithmic paper, and as mentioned above, R3 got good results from the method. On the whole, I would favor acceptance because of the novelty and potential usefulness of the approach. This would be a pretty solid submission if the writing were improved. (While the authors feel constrained by the 8 page limit, I'd recommend going beyond this for clarity.) However, I emphasize that it is very important to clean up the writing.
train
[ "rJexSy6s2Q", "r1lQAt9vT7", "Hyxv3jbM6X", "Hyg6oT30h7", "HklSR4lZpm", "B1gF-Z1Za7", "B1xklEYyaQ", "S1e9NsdXsm" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "This paper proposes a preconditioned SGD method where the preconditioner is adapted by performing some type of gradient descent on some secondary objective \"c\". The preconditioner lives in one of a restricted class of invertible matrices (e.g. symmetric, diagonal, Kronecker-factored) constituting a Lie group (which is where the title comes from). \n\nI think the idea of designing a preconditioner based on considerations of gradient noise and as well as the Hessian is interesting. However most of that work was done in the Li paper, and including the design of \"c\". This paper's contribution seems to be to work out some of the details for various restricted classes of matrices, to construct a \"Fisher version\" of c, and to run some experiments. \n\nThe problem is that I don't really buy the original motivation for the \"c\" function from the Li paper, and the newer Fisher version of c proposed in this paper doesn't seem to have any justification at all. I also find that the paper in general doesn't do a good job of explaining its various choices when designing the algorithm. This could be somewhat forgiven if the experimental results were strong, but unfortunately they are too limited, and marred by overly-simplistic baselines that aren't properly tuned.\n\n\nMore detailed comments below\n\nTitle:\n\nI think the title is poorly chosen. The paper doesn't use Lie groups or their properties in any significant way, and \"learning\" is a bad choice of words too, since it involves generalization etc (it's not merely the optimization of some function). A better title would be \"A general framework for adaptive preconditioners\" or something.\n\nIntro:\n\nCitation of Adagrad paper is broken\n\nThe literature review contained in the intro needs works. I wouldn't call methods like quasi-Newton methods \"convex optimization methods\". Those algorithms were around a long time ago before \"convex optimization\" was a specific topic of study and are probably *less* associated with the convex optimization literature than, say, Adagrad is. And methods like Adagrad aren't exactly first-order methods either. They use adaptively chosen preconditioners (that happen to be diagonal) which puts them in a similar category to methods like LBFGS, KFAC, etc.\n\nIt's not clear at this point in the paper what it means for a preconditioner to be \"learned on\" something. \n\nSection 2:\n\nThe way you discuss quadratic approximations is confusing. Especially the sentence \"is the sum of approximation error and constant term independent of theta\" where you then go on to say that a_z does depend on theta. I know that this second theta is the \"current theta\" separate from the theta as it appears in the formula for the approximation but this is really sloppy. Usually people construct the quadratic approximation in terms of the *change in theta* which makes such things cleaner.\n\nYou should explain how eqn 8 was derived since it's so crucial to everything that follows. Citing a previous paper with no further explanation really isn't good enough here. Surely with all of the notation you have already set up it should be possible to motivate this criterion somehow. The simple fact that it recovers P = H^-1 in the noiseless quadratic case isn't really good enough, since many possible criteria would do the same.\n\nI've skimmed the paper you cited and their justification for this criterion isn't very convincing. 
There are other possible criteria that they give and there doesn't seem to be a strong reason to prefer one over the other.\n\n\nSection 3:\n\nThe way you define the Fisher information matrix corresponds to the \"empirical Fisher\", since z includes the training labels. This is different from the standard Fisher information matrix.\n\nHow can you motivate doing the \"replacement\" that you do to generate eqn 12? Replacing delta theta with v is just notation, but how can you justify replacement of delta g with g + lambda v? This isn't a reasonable approximation in any sense that I can discern. Once again this is an absolutely crucial step that comes out of nowhere. Honestly it feels contrived in order to produce a connection to popular methods like Adam.\n\nSection 4: \n\nThe prominent use of the abstract mathematical term \"Lie group\" feels unnecessary and like mathematical name-dropping. Why not just talk about certain \"classes\" of invertible matrices closed under standard operations (which would also help people that don't know what a Lie group is)? If you are going to invoke some abstract mathematical framework like Lie groups it needs to actually help you do something you couldn't otherwise. You need to use some kind of advanced Theorem for Lie groups. \n\nWithout knowing the general form of R equation 18 is basically vacuous. *any* matrix (in the same class) could be written this way.\n\nI've never heard of the natural gradient being defined using a different metric than the Fisher metric. If the metric can be arbitrary then even standard gradient descent is a \"natural gradient\" too (taking the Euclidean metric). You could argue for a generalized definition that would include only parametrization independent metrics, but then your particular metric wouldn't obviously work.\n\n\nSection 6:\n\nRather than comparing to Batch Normalization you would be better off comparing to the old centering and normalization work of Schraudolph et al which the former was based on, which is actually a well-defined preconditioner.\n\nSection 7: \n\nYou really need to sweep over the learning rate parameters for optimizers like SGD with momentum or Adam. Otherwise the comparisons aren't very interesting. \n\n\"Tikhonov regularization\" should just be called L2-regularization\n\n", "1, Section 4.4, add a note on preconditioned gradient norm clipping and its relationship to trust region method due to its importance in practice.\n\n2, Section 6.3, add a few comments on complexity comparison with KFAC. KFAC requires inversion of symmetric matrices, and thus might fail to scale up to large scale problems since generally, it is difficult to efficiently inverse a matrix in parallel (to our knowledge and experiences). Our methods require back substitution, which is as computationally cheap as matrix multiplication on GPU (given enough resources for parallelization). Thus, our methods could scale up to large scale problems. \n\nPlease check code (in our pytorch implementation, misc dir)\nhttps://github.com/lixilinx/psgd_torch/blob/master/misc/benchmark_mm_trtrs_inv.py\nfor details. For linear system with dimension 1024, back substitution is about 300 times faster than matrix inversion on 1080 ti GPU. For dimension 8192, back substitution is about 2000 times faster. \n\n3, Sections 7.2 and 7.3, fine tune performance of SGD, momentum and Adam, especially on the language modeling task since momentum and Adam perform poorly due to the sparsity of gradients. 
For this task, we found that:\n\nMomentum: diverges when step size >= 0.2; converges when step size <= 0.05. Convergence is too slow. Then we tried to clip the updates as in the clipped SGD method to avoid divergence when a large step size is used. Still, the momentum method performs the worst.\n \nAdam: Fine-tuning its damping factor improves performance. Its performance is rather sensitive to the damping factor. Like the momentum method, Adam also destroys the sparsity of gradients and performs not so well.\n\nSparse Adam: it only updates the 1st and 2nd moments and model parameters when their corresponding gradients are not zero. This slightly improves performance, but is still far from SGD’s performance.\n \nOne may argue that SGD is a special case of momentum and Adam, and thus they should perform as well as SGD after fine tuning all their parameters. Well, we do not agree, for two reasons. First, jointly fine tuning all these parameters is too expensive. For example, Adam has four parameters to tweak. Second, as their names suggest, certain parameters are expected to have their typical values. For example, a momentum method with momentum 0 is just SGD, but not a typical momentum method.\n\n4, Appendix A, a short note showing that our preconditioners preserve the sparsity of gradients in the language modeling task. \n\n5, Scattered minor revisions and clarifications in the text. Many are due to reviewers’ comments, and we appreciate them.", "[Comment 1]: …most of that work was done in the Li paper…\n[Response 1]: Our contributions include: proposing a new framework for learning preconditioners on Lie groups; predicting useful new preconditioners and optimization methods; revealing their relationships to many existing methods (ESGD, batch normalization, KFAC, Adam, RMSProp, Adagrad); comparing Newton and Fisher type preconditioners; and implementations and an empirical performance study.\n\n[Comment 2]: … I don't really buy the original motivation for the \"c\" function…doesn't seem to have any justification at all…\n[Response 2]: We would like to know the reasons.\n\n[Comment 3]: …experimental results…too limited…overly-simplistic baselines…aren't properly tuned…\n[Response 3]: You may find more comparison results on small-scale, MNIST-related problems in our implementation packages. The image recognition and NLP tasks considered in the paper are representative, and the baselines already achieved reasonable performance. Still, we are willing to fine tune them and update the results during the rebuttal period.\n\n[Comment 4]: …title is poorly chosen…doesn't use Lie groups…in any significant way… \"learning\" is a bad choice of words…merely the optimization of some function…\n[Response 4]: Solving for the optimal preconditioner is a tracking problem, since generally the Hessian changes along with the parameters, and also an estimation problem, due to the existence of gradient noise. So we think ‘learning’ is a proper word. The Lie group provides a concise framework for our study, and enables efficient learning via natural gradient descent.\n\n[Comment 5]: I wouldn't call methods like quasi-Newton methods \"convex optimization methods\"...less associated with the convex optimization literature…\n[Response 5]: Quasi-Newton methods are derived assuming a nonnegative definite Hessian, and are taught in convex optimization textbooks.\n\n[Comment 6]: Citation of Adagrad paper is broken... methods like Adagrad aren't exactly first-order methods...\n[Response 6]: We state that Adagrad is a variation of SGD. 
We do not state that it is a first-order method.\n\n[Comment 7]: The way you discuss quadratic approximations is confusing...a_z...really sloppy...people construct the quadratic approximation in terms of the *change in theta*...\n[Response 7]: We will explicitly point out that a_z only contains higher order approximation errors in the revised paper. \nYou can construct quadratic approximation in terms of either theta or the change in theta.\n\n[Comment 8]: You should explain how eqn 8 was derived...Citing a previous paper...isn't good enough…I've skimmed the paper you cited... justification for this criterion isn't very convincing... \n[Response 8]: We believe these topics are thoroughly addressed in the cited paper. This is a conference paper with recommend page length 8. Nevertheless, we reviewed important facts in the background section, e.g., Eq. (9), the correspondence to Newton method regardless of the existence of nonconvexity and gradient noises.\n\n[Comment 9]: The way you define the Fisher information matrix corresponds to the \"empirical Fisher\"...\n[Response 9]: We will emphasize it in the revised paper. We already emphasized it in our implementations.\n\n[Comment 10]: How can you motivate doing the \"replacement\" …how can you justify replacement of … This isn't a reasonable approximation in any sense... comes out of nowhere…it feels contrived to…\n[Response 10]: The math in the paper is clear. No approximation is involved here.\n\n[Comment 11]: … use of the abstract mathematical term \"Lie group\" feels unnecessary…mathematical name-dropping…Why not … \"classes\" of invertible matrices closed under standard operations…You need to use some kind of advanced Theorem for Lie groups.\n[Response 11]: Matrix Lie group is the precise term here. We use its properties to design the preconditioners and their learning rules.\n\n[Comment 12]: … equation 18 is basically vacuous…\n[Response 12]: It is Amari’s natural gradient or Cardoso’s relative gradient on the Lie group.\n\n[Comment 13]: I've never heard of the natural gradient being defined using a different metric than the Fisher metric…your particular metric wouldn't obviously work.\n[Response 13]: Please check Amari’s work on natural gradient.\n\n[Comment 14]: Rather than comparing to Batch Normalization you would be better off comparing to the old centering and normalization work of Schraudolph…\n[Response 14]: Please give further details like Schraudolph’s paper, link, code implementation, etc.\n\n[Comments 15]: You really need to sweep over the learning rate parameters for optimizers like SGD...\n[Response 15]: We already searched the learning rates in a large range for these methods. We are further refining the results of SGD, momentum and Adam, and update the paper during the rebuttal period.\n\n[Comment 16]: \"Tikhonov regularization\" should just be called L2-regularization\n[Response 16]: We will call it L2-regularization in the revised paper.", "Author proposes general framework to use gradient descent to learn a preconditioner related to inverse of the Hessian, or the inverse of Fisher Information matrix, where the inverse may take a particular form, ie, Kronecker-factored form like in KFAC. I have tracked down the implementation of this method by author from earlier paper Li 2018 and verified that it works and speeds up convergence of convolutional networks in terms of number of iterations needed. 
In particular, the Kronecker-factored preconditioner using the approach in the paper worked better in terms of wall-clock time on MNIST LeNet5, compared against an existing PyTorch implementation of KFAC from César Laurent.\n\n\nSome comments on the paper:\n\nSection 2\nThe key seems to be equation 8. The author provides a loss function whose minimum is achieved by the inverse of the Hessian. Given the importance of the formula, it feels like a proof should be included (perhaps in the Appendix).\n\nJustification of the criterion is relegated to earlier work in Li (https://arxiv.org/pdf/1512.04202.pdf), but I failed to fully grasp the motivation. There are simpler criteria being introduced, such as criterion 1, equation 17, which simply minimizes the difference between the predicted gradient delta and the observed one; why not use that criterion?\n\nThe justification is given that using the inverse Hessian may \"amplify noise\", which I don't buy. When using SGD to solve least-squares regression, dividing by the Hessian does not have a problem of amplifying noise, so why is this a concern here?\n\n\nSection 3\n\nThe paper should make it clear that the empirical Fisher matrix is used, unlike the \"unbiased estimate of the true Fisher\" which is used in many natural gradient papers.\n\nSection 4\nIs \"Lie group\" used anywhere in the derivations? It seems the same algebra holds even without that assumption. The motivation for using \"natural gradient for learning Q\" seems to come from Amari. I have not read that paper; how important is it to use the \"natural\" gradient for learning Q? What if we use regular gradient descent for Q?\n\nSection 7\nFigure 1 showed that the Fisher-type criterion didn't work for the toy problem; it would be more informative if it used the square root of the Fisher-type criterion. The square root comes out of regret analysis (i.e., AdaGrad uses the square root of the gradient covariance).\n", "[Comment 1]: The key seems to be equation 8. ...it feels like a proof should be included...\n[Response 1]: We believe this equation is thoroughly studied in Li’s work.\n \n\n[Comment 2]: Justification of the criterion\n[Response 2]: Let us consider three cases to compare these criteria.\n \nCase 1, noiseless gradient, positive definite Hessian. All preconditioners in Li’s work are equivalent, leading to the same secant equation, delta g = H * delta x.\n \nCase 2, noiseless gradient, indefinite Hessian. Only criterion c3 in Li’s work can guarantee positive definiteness of the preconditioner. One may point out that other criteria can yield a positive definite preconditioner under Wolfe conditions. But the resultant preconditioner is only remotely related to the Hessian. We are seeking a preconditioner whose eigenvalues are the inverses of the absolute eigenvalues of the Hessian, to precondition the Hessian perfectly.\n \nCase 3, noisy gradient, positive definite or indefinite Hessian. Only criterion c3 in Li’s work leads to a preconditioner that still corresponds to the secant equation delta g = H * delta x for math optimization, i.e., eq. (9) shown in our paper.\n\n \n[Comment 3]: The justification is given that using the inverse Hessian may \"amplify noise\", which I don't buy...\n[Response 3]: Gradient noise amplification may cause little concern for well-conditioned problems. Your least-squares regression problem might fall into this case. But it could lead to divergence for ill-conditioned problems, e.g., learning recurrent networks requiring long-term memory. Fig. 6 in Li’s work shows one such example. 
\n\nUsing SGD as the baseline, a good preconditioner actually suppresses the gradient noise. Our TensorFlow implementation includes one RNN training example using batch size 1. SGD fails to converge with batch size 1, although it converges with much larger batch sizes. Our methods converge well with batch size 1 since the preconditioners also suppress gradient noise.\n\n \n[Comment 4]: The paper should make it clear that empirical Fisher matrix is used...\n[Response 4]: We will emphasize it in the revised paper. We already emphasized it in our implementation packages. \n\n\n[Comment 5]: Is \"Lie group\" used anywhere in the derivations...\n[Response 5]: These are great questions. We chose to learn the preconditioner on a Lie group based on our years of practice in neural network training. Properties of the Lie group are repeatedly exploited by our methods. For example, you mentioned that the ‘same algebra holds even without that assumption’. Well, it is true here because Q and Q + delta Q are already on the same Lie group. Otherwise, this is not necessarily true. For example, if you constrain Q to be a band matrix, generally, you may not be able to write delta Q as -(step size)*R*Q, where R is a band matrix similar to Q.\n\nWhy natural gradient? Once we decide to learn the preconditioner on the Lie group, the gradient on the Lie group is just the natural gradient derived from a tensor metric. On the theoretical side, both Amari and Cardoso give a lot of justifications for the natural gradient, e.g., the equivariance property, fast convergence, etc. In practice, it helps a lot as we can use a normalized step size to update the preconditioner. We rarely feel the need to tune this step size (0.01 is the default value and works well). \n\nCan we use regular gradient descent? Let us consider two cases.\n\nCase 1, Q is on a Lie group. Yes, we can use regular gradient descent. But the updating step size may require fine tuning for each specific problem. Convergence could be slow when the initial value of Q is either too large or too small. Precautions are required to prevent Q from converging to singular matrices. \n\nCase 2, Q is not on any Lie group. Regular gradient descent still works. Similar difficulties arise: how to choose the updating step size; how to determine the initial value. For example, the authors have considered a preconditioner of the form P = (scalar)*I + U*U^T. For math optimization, we already know how to update this preconditioner (limited-memory BFGS). For stochastic optimization, the authors failed to find an efficient and yet tuning-free updating method for such a preconditioner. However, we do not exclude the existence of such preconditioner updating methods. \n\n\n[Comment 6]: ...it would be more informative if it used square root of Fisher-type criterion...\n[Response 6]: We used the square root Fisher-type preconditioner. We will clarify this in the revised paper. \n\nBy the way, our PyTorch implementation includes a demo showing the usage of both the square root and the regular Fisher-type preconditioners. For small-scale problems like MNIST, the Fisher-type preconditioner may perform better. For large-scale problems, the square root Fisher-type preconditioner seems more numerically robust and less sensitive to the damping factor. So we use the square root Fisher-type preconditioner in experiments 2 and 3.", "[Comment 1]: The \"Lie\" in the title is (technically correct, but) a bit misleading, as only matrix groups were used.\n[Response 1]: We will use ‘matrix Lie group’ in the title after revision. 
In the text, we already point out that the Lie group in the paper refers to a matrix Lie group.", "Just a quick response to AnonReviewer1's comment stating that 'I've never heard of the natural gradient being defined using a different metric than the Fisher metric'. This is not true. Please check Amari's classic paper, Natural Gradient Works Efficiently in Learning, sections 3.3, 3.4, 7, and 8, for examples of the natural gradient on Lie groups. \n\nActually, considering that some readers might not be familiar with the natural gradient, we have a note at the end of section 4.1 of our paper pointing out the difference between a natural gradient derived from the Fisher metric and a natural gradient derived from a tensor metric. \n\nWe thank the reviewers for their time and efforts, and will improve our paper accordingly.\n\n\n", "The authors suggest and analyse two types of preconditioners for optimization, a Newton-type and a Fisher-type preconditioner.\n\nThe paper is well written, the analysis is clear and the significance is arguably given. The authors run their optimizers on a synthetic benchmark data set and on ImageNet.\nThe originality is not so high, as this line of research has existed for a long time. \nThe \"Lie\" in the title is (technically correct, but) a bit misleading, as only matrix groups were used.\n" ]
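As a concrete illustration of the preconditioner update debated above (learning Q on a matrix Lie group with a relative gradient and a normalized step size), a minimal sketch might look as follows; the upper-triangular parameterization, the secant pair (dx, dg), and the max-entry normalization are assumptions paraphrased from the descriptions in this thread, not a verbatim port of the authors' code.

```python
import numpy as np

def update_precond(Q, dx, dg, step=0.01):
    # dx: change in parameters, dg: change in the (stochastic) gradient
    a = Q @ dg                      # Q * delta g
    b = np.linalg.solve(Q.T, dx)    # Q^{-T} * delta x
    # group gradient, restricted to upper-triangular matrices
    grad = np.triu(np.outer(a, a) - np.outer(b, b))
    step_n = step / (np.max(np.abs(grad)) + 1e-12)  # normalized step size
    return Q - step_n * (grad @ Q)  # delta Q = -(step size)*R*Q stays on the group

def precondition(Q, g):
    return Q.T @ (Q @ g)            # preconditioned gradient P g, with P = Q^T Q
```

Note that the update has exactly the form delta Q = -(step size)*R*Q mentioned in Response 5, which is what keeps Q on the group of invertible upper-triangular matrices.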
[ 5, -1, -1, 8, -1, -1, -1, 7 ]
[ 5, -1, -1, 5, -1, -1, -1, 3 ]
[ "iclr_2019_Bye5SiAqKX", "iclr_2019_Bye5SiAqKX", "rJexSy6s2Q", "iclr_2019_Bye5SiAqKX", "Hyg6oT30h7", "S1e9NsdXsm", "iclr_2019_Bye5SiAqKX", "iclr_2019_Bye5SiAqKX" ]
iclr_2019_ByeMB3Act7
Learning to Screen for Fast Softmax Inference on Large Vocabulary Neural Networks
Neural language models have been widely used in various NLP tasks, including machine translation, next word prediction and conversational agents. However, it is challenging to deploy these models on mobile devices due to their slow prediction speed, where the bottleneck is to compute top candidates in the softmax layer. In this paper, we introduce a novel softmax layer approximation algorithm by exploiting the clustering structure of context vectors. Our algorithm uses a light-weight screening model to predict a much smaller set of candidate words based on the given context, and then conducts an exact softmax only within that subset. Training such a procedure end-to-end is challenging as traditional clustering methods are discrete and non-differentiable, and thus unable to be used with back-propagation in the training process. Using the Gumbel softmax, we are able to train the screening model end-to-end on the training set to exploit data distribution. The algorithm achieves an order of magnitude faster inference than the original softmax layer for predicting top-k words in various tasks such as beam search in machine translation or next words prediction. For example, for machine translation task on German to English dataset with around 25K vocabulary, we can achieve 20.4 times speed up with 98.9% precision@1 and 99.3% precision@5 with the original softmax layer prediction, while state-of-the-art (Zhang et al., 2018) only achieves 6.7x speedup with 98.7% precision@1 and 98.1% precision@5 for the same task.
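As a rough sketch of the two-stage inference the abstract describes (screen for a cluster, then run an exact softmax only over that cluster's candidate words), one might write something like the following; the names `V_clusters` and `candidate_sets` are hypothetical, and the paper's exact parameterization may differ.

```python
import numpy as np

def l2s_top_k(h, V_clusters, candidate_sets, W, b, k=5):
    c = int(np.argmax(V_clusters @ h))  # screening: best-matching cluster for context h
    cand = candidate_sets[c]            # int array of candidate word ids, |cand| << |V|
    logits = W[cand] @ h + b[cand]      # exact softmax restricted to the candidate set
    p = np.exp(logits - logits.max())
    p /= p.sum()
    order = np.argsort(-p)[:k]
    return cand[order], p[order]        # top-k word ids and their probabilities
```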
accepted-poster-papers
This paper introduces an approach for improving the scalability of neural network models with large output spaces, where naive soft-max inference scales linearly with the vocabulary size. The proposed approach is based on a clustering step combined with per-cluster, smaller soft-maxes. It retains differentiability with the Gumbel softmax trick. The experimental results are impressive. There are some minor flaws, however there's consensus among the reviewers the paper should be published.
test
[ "HkeASf47AX", "HyetbGEm0m", "SJl5TbNQAQ", "Hyx4lWEXRX", "BylZTkmc3X", "ByxDcLsR2X", "HyehHMVFnm", "S1gF35P_nQ", "B1eYqyEt3m" ]
[ "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer", "public" ]
[ "We want to thank the reviewer for the useful suggestions!!\n\n-- about larger vocabulary experiment:\n\nWe have added an experiment with a much larger dataset --- Wikitext103 with vocabulary size of 80k. The result of prediction time speedup versus accuracy is shown in Figure 9 in the new version. As you can see from the figure, we can achieve more than 15x speedup with accuracy of 99.8%. In addition, in Table 3, we show the result on DE-EN, an NMT task with vocabulary size around 25k. We summarize the vocabulary size of all the datasets in Table 1. \n\n-- about result on speed-up of L2S over full softmax with respect to the vocabulary size\n\nWe have included an experiment of prediction time speed-up versus vocabulary size on PTB dataset. Results are summarized in Figure 8. In this figure, we could observe that our method can achieve higher speed-up with larger vocabulary size.\n\n-- about clustering parameters and label sets\n\nWe have added Table 7 to show the label sets learned from our method. We observe some interesting clusters---some words with similar meanings are in the same cluster.\n\n", "Thanks for your comments and that you enjoyed reading the paper! \n\nResponses to questions:\n\n-- about larger vocabulary experiment:\n\nWe have added an experiment with a much larger dataset --- Wikitext103 with vocabulary size to be 80k. The result of prediction time speedup versus accuracy is shown in Figure 9 in the new version. As you can see from the figure, we can achieve more than 15x speedup with accuracy of 99.8%. In addition, in Table 3, we show the result on DE-EN, an NMT task with vocabulary size around 25k. We summarize the vocabulary size of all the datasets in Table 1. \n\n-- about perplexity and probability estimation\n\nThis is a great point. We agree that our method tends to generate better approximation of ranking of the words instead of probability of that word. The main reason for the reduced gain for PPL is that to compute PPL, after performing our method (L2S), we need an additional step to assign a probability to words that are not located in the predicted cluster, although this is a rare case (less than 5% chance). There are several potential ways to model this rare case and we chose to use SVD to approximate probability (same as svd softmax [Kyuhong Shim et.al in NIPS 2017]); however, SVD itself has lots of computational overhead. Therefore prediction time speedup is less pronounced for PPL than for the accuracy results. \n\nOn the other hand, we get reasonable probability estimation when the word is within the predicted cluster (usually they are top-k predicted words). Therefore we still achieve very good (>10x) speed up in NMT tasks with beam search (see Table 3). \n\n\n-- about qualitative analysis \n\nWe have added two qualitative analyses in the new version. Firstly, we show the words from different clusters learned from our method in Table 7, and observe some interesting structures--some words with similar meanings are in the same cluster. Secondly, examples of translation pairs by our method compared with original softmax results are shown in Table 8. \n", "We are thankful for the constructive comments!!\n\n-- about word clusters are not continuous and training end to end \n\nThere are several ways to make word clusters continuous such as using soft clustering, however, these strategies on the other hand will increase the prediction time. 
Even though the word cluster representation is not continuous in L2S, our model can still be trained end-to-end in the sense that the clustering stage and the label selection are trained jointly with the Gumbel technique. Our algorithm back-propagates the gradient to the clustering weights to update both the clustering partition and the label sets simultaneously. \n\n-- about speeding up training time\n\nWe focus on speeding up prediction in this work. We could potentially use the same idea--clustering + learning candidate words--to speed up training as well, since we could narrow down the update to a few candidate words instead of the entire vocabulary when updating the softmax’s weight matrix. This is certainly an interesting future direction to work on.\n\n-- qualitative examples\n\nWe have added two qualitative analyses in the new version. Firstly, we show the words from different clusters learned by our method in Table 7, and observe some interesting structures---some words with similar meanings are in the same cluster. Secondly, examples of translation pairs by our method compared with the full softmax results are shown in Table 8. \n", "Hi all,\n\nWe appreciate the constructive feedback from the reviewers and the community, and thank you for your patience in waiting for our responses. We have made the following main changes to the current version to make our paper more complete.\n\n1. For the NMT task, we apply our method to a new dataset, EN-VE translation, with a vocabulary size of 22,749. Results are summarized in Table 3. For this task, our method achieves a 20x speedup with a BLEU score of 25.27, while the original softmax’s BLEU is 25.35.\n\n2. Besides the additional NMT experiment, we run our algorithm on a larger-vocabulary dataset, Wikitext-103, a language modeling dataset with an 80k vocabulary. Results are summarized in Figure 9. For this task, our method achieves more than 15x speedup with P@1 at 99.8%.\n\n3. We also include an experiment on prediction-time speed-up versus vocabulary size on the PTB dataset. In this experiment, we vary the vocabulary size and show the speedup and accuracy. Results are summarized in Figure 8, showing that our method achieves a higher speed-up with a larger vocabulary size. \n\n4. We add two qualitative analyses in the appendix. Firstly, we show the words from different clusters learned by our method in Table 7, and observe some interesting structures--some words with similar meanings are in the same cluster. Secondly, examples of translation pairs by our method compared with the full softmax results are shown in Table 8. Please look through those interesting examples!\n", "Hi there,\n\nThanks for your interest and useful clarifying questions!\n\n1) You are right. We didn't train the context vector jointly with the approximation. Our problem setup is: given a pre-trained NLM, how to speed up the inference operations.\n\n2) Firstly, we need to point out that after training the cluster label set (c_t) and clustering weights (v_t), we simply select the cluster by choosing the one with maximal z(h) in eq. (2). That is to say, at inference time, given a hidden state h, the corresponding selected cluster is fixed. Apparently there is no guarantee that the ground-truth token will be in the selected cluster, but our training objective function tries to make the predicted candidate set contain the ground-truth token. \n\n\n3) Sorry for the confusion; we will reconsider how to rephrase the scenario. 
We are not trying to approximate the \"next-word-prediction accuracy\" but to approximate the \"next-word-prediction operation\". \n\nSince in LM and NMT, next-word prediction is done by taking the maximal inner product between the context vector h and the softmax layer W, we refer to \"next-word-prediction\" as the operation that does so. \nWe didn't consider the true \"next-word-prediction accuracy\" because even taking the original maximal inner product between the softmax W and h only gives around 26% accuracy for P@1 when compared to the ground-truth token. Increasing this accuracy actually means improving the performance of the model over the original W. For this work, we focus on making a given pre-trained LM/NMT faster at prediction time, not on making a pre-trained LM/NMT more accurate. Therefore, we try to approximate the softmax W (the real operation that generates the next word) via clustering instead of matching the ground-truth label. \n\n\n4) In section 4.2 and the corresponding Table 2, we did intend to add the \"%\" there. We report the BLEU scores, which are within a .5% difference when compared to the original BLEU score. For example, in the NMT: DE-EN Beam=5 row in Table 2 we get a 13.4x speed-up with the BLEU score dropping from 30.33 to 30.19. The ratio (30.33 - 30.19) / 30.33 is around 0.0046 ~= 0.46%, whereas a \".5\" BLEU-score drop would be 0.5 / 30.33 ~= 1.65%, which is about 3 times more loss. \n\n\n5) Sorry for the confusion again; we will reconsider how to phrase the notation. We will check all the notation again, in particular the comma issue you mentioned. Here, we briefly reply regarding the dimensions of the notation you mentioned. Let's assume there are |V| words in the vocabulary of the model. \n\nc_t is a |V| x 1 vector, and we are trying to make each entry either 0 or 1 as an indicator of the inclusion of the corresponding word. c_{ts} is the s-th entry of the c_t vector, and is thus binary in that sense. c_{p_bar{h_i},s} refers to the s-th entry of the c_{p_bar{h_i}} vector; p_bar{h_i}, defined in the paper, is the one-hot entry of the Straight-Through Gumbel, which can be thought of as the sampled cluster. Thus c_{p_bar{h_i}} is a |V| x 1 vector and c_{p_bar{h_i},s} refers to its s-th entry, and yes, it is eventually binary.
And according to earlier discussions in the thread, the authors confirmed that they are comparing the precision w.r.t. the original softmax, not the true next words. This could raise the possible concern that the model doesn’t really get the probabilities correct, but somehow only fits the rank of the words that were predicted by the original softmax. Maybe that is related to the loss? However, I believe sorting this problem out is kind of beyond the scope of this paper. \n3. In another scenario, I think adding some qualitative analysis could better present the work. For example, visualize the words that got clustered into the same cluster, etc. \n\nIn general, I am satisfied with the content and enjoyed reading the paper. \n", "This paper presents an approximation to the softmax function to reduce the computational cost at inference time, and the proposed approach is evaluated on language modeling and machine translation tasks. The main idea of the proposed approach is to pick a subset of the most probable outputs on which an exact softmax is performed to sample top-k targets. The proposed method, namely Learning to Screen (L2S), jointly learns context-vector clustering and candidate subsets in an end-to-end fashion, which enables it to achieve competitive performance.\n\nThe authors carried out NMT experiments with a vocabulary size of 25K. It would be interesting if the authors provided a result on the speed-up of L2S over the full softmax with respect to the vocabulary size. Also, the performance of L2S on larger vocabularies such as 80K or 100K needs to be discussed.\n\nAny quantitative examples regarding the clustering parameters and label sets would be helpful.\nL2S is designed to learn to screen a few words, but no example of the screening part is provided in the paper.
You seem to be measuring the overlap of the top words between the true softmax and the approximation, and not whether the next word actually matches the ground-truth next word? So even if the true softmax got the word incorrect, you are still trying to match the true softmax. \n\n4) In section 4.2, you say '.5% BLEU'. I don't think you want the '%' there?\n\n5) I'm having some difficulty with the notation. Can you confirm that c_t, c_{ts} and c_{p(h_i), s} are all binary variables? (Also, the comma before the subscript s doesn't seem to be used consistently.) \n\nThanks for your time. I enjoyed this paper. " ]
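Several of the exchanges above hinge on the Straight-Through Gumbel-softmax used to keep the cluster selection differentiable. A generic PyTorch sketch of that estimator follows (not the authors' code; recent PyTorch versions also expose the same behavior directly via `F.gumbel_softmax(..., hard=True)`).

```python
import torch
import torch.nn.functional as F

def st_gumbel_softmax(logits, tau=1.0):
    # backward pass uses the soft Gumbel-softmax sample, forward pass the hard one-hot
    y_soft = F.gumbel_softmax(logits, tau=tau, hard=False)
    index = y_soft.argmax(dim=-1, keepdim=True)
    y_hard = torch.zeros_like(y_soft).scatter_(-1, index, 1.0)
    return y_hard + (y_soft - y_soft.detach())
```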
[ -1, -1, -1, -1, -1, 7, 6, 8, -1 ]
[ -1, -1, -1, -1, -1, 4, 3, 4, -1 ]
[ "HyehHMVFnm", "ByxDcLsR2X", "S1gF35P_nQ", "iclr_2019_ByeMB3Act7", "B1eYqyEt3m", "iclr_2019_ByeMB3Act7", "iclr_2019_ByeMB3Act7", "iclr_2019_ByeMB3Act7", "iclr_2019_ByeMB3Act7" ]
iclr_2019_ByeSdsC9Km
Adaptive Posterior Learning: few-shot learning with a surprise-based memory module
The ability to generalize quickly from few observations is crucial for intelligent systems. In this paper we introduce APL, an algorithm that approximates probability distributions by remembering the most surprising observations it has encountered. These past observations are recalled from an external memory module and processed by a decoder network that can combine information from different memory slots to generalize beyond direct recall. We show this algorithm can perform as well as state of the art baselines on few-shot classification benchmarks with a smaller memory footprint. In addition, its memory compression allows it to scale to thousands of unknown labels. Finally, we introduce a meta-learning reasoning task which is more challenging than direct classification. In this setting, APL is able to generalize with fewer than one example per class via deductive reasoning.
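The surprise-based write rule sketched in the abstract reduces to a few lines; here is a minimal sketch, where the threshold name `sigma` follows the reviews below and everything else is an illustrative assumption.

```python
import numpy as np

def maybe_write(memory, embedding, label, pred_probs, sigma):
    # surprise = cross-entropy of the prediction on the true label (in nats)
    surprise = -np.log(pred_probs[label] + 1e-12)
    if surprise > sigma:  # only "surprising" observations are stored
        memory.append((embedding, label))
    return surprise
```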
accepted-poster-papers
All reviewers recommend acceptance. The problem is an interesting one. The method is interesting. The authors were responsive in the reviewing process. Good work. I recommend acceptance :)
test
[ "Hkx8p7qAA7", "BJxh2pdP2X", "ryehOhG2T7", "BylMPhznTQ", "rJebKiM2pX", "rJe9NiGnpm", "HJgNL9M2TX", "SyTTVGKnQ", "r1lzbPiPnX" ]
[ "public", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "First, it is a very interesting idea.\nI wonder if the Memory store can be updated? If a point stored in the Memory store, it will be deleted in the later iteration or stored in the memory forever.\nBesides, is the Memory store with/without the upper limit?\n\nThanks.", "Summary: the authors propose a new algorithm, APL, for a few-shot and a life-long learning based on an external memory module. APL uses a surprise-based signal to determine which data points to store in memory and an attention mechanism to the most relevant points for prediction. The authors evaluate APL on a few-shot classification task on Omniglot dataset and on a number analogy task.\n\nQuality: the authors consider interesting approach to life-long learning and I really liked the idea of a surprise-based signal to choose the data to store. However, I am not convinced by the learning setting that authors study. While a digit-symbol task from the introduction is interesting to study the properties of APL, I fail to see any real world analogy where it is useful. The same happens in a few-shot omniglot classification. The authors decided to shuffle the labels within episodes that, I guess, is supposed to represent different tasks in a typical life-long learning scenario. Again, it maybe interesting to study the behaviour of the algorithm, but I don't see any practical relevance here. It would make more sense to study the algorithm in a life-long learning setting, for example, considered in [1] and [2].\n\nClarity: the paper is well-written in general. I failed to decode the meaning behind the paragraph under Figure 3 on page 4 and would advise the authors to re-write it. The same goes to the first paragraph on page 3.\n\nOriginality: the paper builds on the prior work of Kaiser et al., 2017 and Santoro et al., 2016, but the proposed modifications are novel to my best knowledge.\n\nSignificance: below average: the paper combines interesting ideas that potentially can be used in different learning contexts and with other algorithms, however, the evaluation does not show the benefit in an obvious way.\n\nOther comments: \n* throughout the whole paper it is not clear if the embeddings are learned or not. I suppose they are, but what then happens to the ones in memory? If they are not, like in ImageNet example, where do they come from?\n* the hyperparameter \\sigma: the authors claim \"the value of \\sigma seems to not matter too much\". Matter for what? It's great if the performance is stable for a wide range of \\sigma, but it seems like it should have a great influence over the memory footprint of APL. I feel this is an important point that needs more attention.\n* it would be interesting to see how APL performs with a simple majority vote instead the decoder layer. This would count for an ablation study and could emphasize the role of the decoder.\n* Figure 4, b) plots are completely unreadable on black-and-white print, the authors might like to address that\n* In conclusion, the first claim about state-of-the-art accuracy with smaller memory footprint: I don't think that the results of the paper justify this claim.\n\n[1] Yoon et al, Lifelong Learning with Dynamically Expandable Networks, ICLR 2017\n[2] Rebuffi et al, iCaRL: Incremental Classifier and Representation Learning, CVPR 2017\n\n********************\nAfter authors response:\n\nThanks to the authors for a detailed response. The introduction led me to believe that the paper solves a different task from what it actually does. 
I still like the algorithm and, given that the scope of the paper is limited to few-shot learning, I tend to change my evaluation and recommend accepting the paper. It was a good idea to change the title to avoid possible confusion by other readers. The introduction is still misleading though. It creates the impression that APL solves a more general problem, where it would be good enough to limit the discussion to a few-shot learning setting and explain it in greater detail for an unfamiliar reader. Some details also seem to be missing, e.g. I didn't get that the memory is flushed after each episode and could not find where this is mentioned in the paper.", ">> the hyperparameter \\sigma: the authors claim \"the value of \\sigma seems to not matter too much\". Matter for what? It's great if the performance is stable for a wide range of \\sigma, but it seems like it should have a great influence over the memory footprint of APL. I feel this is an important point that needs more attention.\n\nThanks for the excellent question! We have added a detailed analysis of the behavior of the model as a function of \\sigma in the supplementary information. To give you a quick summary, \\sigma does not affect the memory footprint of APL as much as might be assumed a priori, as the model exhibits 2 regimes: before being completely trained, the model basically writes everything to memory; after being trained, the model is either completely surprised by a new data point (it has never seen it before, or it is significantly different from what it’s seen before), or classifies it with high accuracy. Therefore the model ends up writing roughly the same number of points to memory for a wide range of \\sigma (obviously if you make it too high or too low the model breaks).\n\n>> it would be interesting to see how APL performs with a simple majority vote instead of the decoder layer. This would count as an ablation study and could emphasize the role of the decoder. \n\nMatching networks can be seen as a special case of APL without a decoder and where all points are written to memory. Therefore the performance numbers for Matching Networks serve as an upper bound on the performance of APL ablated without a decoder. We will clarify this point in the text.\n\n>> Figure 4, b) plots are completely unreadable in black-and-white print; the authors might like to address that\n\nThank you, we will strive to optimize the visual presentation of these plots in the final version of the paper.
Note that the models are always tested on a held-out test set of classes they have never seen before, which means in a real life scenario the model would be seeing a new class for the first time and would then immediately learn to associate subsequent data of the same class with the correct class.\n\nWe emphasize that while few-shot learning is related to life-long learning, these are different research areas with different goals: few-shot learning focuses on doing well on a single task, where the model must perform well on new data not seen during training; while life-long learning focuses on adapting to *new* tasks not seen during training.\n\n>> “It would make more sense to study the algorithm in a life-long learning setting, for example, considered in [1] and [2].”\n\nWhile APL was devised to perform well in the few-shot learning scenario as explained above, we thank the reviewer for suggesting another interesting research area where APL could also be applied. We performed follow-up experiments to replicate the setup described in reference [1] provided by the reviewer and compared APL to progressive networks, a well-known life-long learning algorithm. We show that APL performs as well or better than progressive networks even though it does not need to perform gradient descent steps at test time. A thorough investigation of APL in the life-long learning setting would be out of scope for this paper but very interesting as follow up work!\n\n>> “I failed to decode the meaning behind the paragraph under Figure 3 on page 4 and would advise the authors to re-write it. The same goes to the first paragraph on page 3.”\n\nThank you for these suggestions. We have rewritten these sections in the text in order to clarify their presentation.\n\n>> throughout the whole paper it is not clear if the embeddings are learned or not. I suppose they are, but what then happens to the ones in memory? If they are not, like in ImageNet example, where do they come from?\n\nThe embeddings are learned in the case of omniglot and the digit analogy task. Each episode is short (we start with ~40*number of classes examples and anneal the episode length as accuracy improves), and the memory is flushed between each episode, so the embeddings in the memory are never too ‘old’.\n\n[1] Vinyals, Oriol, Charles Blundell, Timothy Lillicrap, Koray Kavukcuoglu, and Daan Wierstra. “Matching Networks for One Shot Learning.” ArXiv:1606.04080 [Cs, Stat], June 13, 2016. http://arxiv.org/abs/1606.04080.\n[2] Ren, Mengye, Eleni Triantafillou, Sachin Ravi, Jake Snell, Kevin Swersky, Joshua B. Tenenbaum, Hugo Larochelle, and Richard S. Zemel. “Meta-Learning for Semi-Supervised Few-Shot Classification.” ArXiv:1803.00676 [Cs, Stat], March 1, 2018. http://arxiv.org/abs/1803.00676.\n[3] Snell, Jake, Kevin Swersky, and Richard S. Zemel. “Prototypical Networks for Few-Shot Learning.” ArXiv:1703.05175 [Cs, Stat], March 15, 2017. http://arxiv.org/abs/1703.05175.\n[4] Finn, Chelsea, Kelvin Xu, and Sergey Levine. “Probabilistic Model-Agnostic Meta-Learning.” ArXiv:1806.02817 [Cs, Stat], June 7, 2018. http://arxiv.org/abs/1806.02817.\n[5] Nichol, Alex, Joshua Achiam, and John Schulman. “On First-Order Meta-Learning Algorithms.” ArXiv:1803.02999 [Cs], March 8, 2018. http://arxiv.org/abs/1803.02999.\n[6] Mishra, Nikhil, Mostafa Rohaninejad, Xi Chen, and Pieter Abbeel. “A Simple Neural Attentive Meta-Learner,” July 11, 2017. https://arxiv.org/abs/1707.03141.", "Thanks a lot for your feedback. 
In the following we address some of the points you raised:\n\n- Representation alignment: Thank you for pointing this out. We have rewritten the corresponding section in the paper as this explanation could be made clearer. To quickly address your question, no gradients are calculated through the memory items. The weights of the encoder + decoder are optimized to minimize the cross-entropy loss for the current mini-batch alone, and then the embeddings produced by the encoder are stored in the memory (if the loss is high enough). Due to the nature of the classification problem, we expect embeddings for similar classes to be similar (in the Euclidean distance sense). Therefore the next time we see another example of that same class, the memory query should produce neighbors which share the same class. In this case, even though we never learn what to query or backpropagate through the memory, the query system should return the ‘correct’ set of neighbors. However, this explanation is an intuitive hypothesis only and is not mathematically necessary! It could be the case that the encoder learns to produce very different embeddings for the same class, and therefore a k-nearest-neighbors query with Euclidean distance would not return memories which are informative. This is why we needed to empirically verify whether the embeddings converge in the expected way or not. Our experimental results show that this hypothesis is indeed correct, and the query system works as we expected.\n\n- Alternative measure of surprise: We have added a discussion on this point as a general comment above.\n\n- Paper title: We agree that the title is quite broad and might lead to confusion amongst researchers from different areas. As a result we have extended it to better reflect the contents of the paper.\n\nThanks again for the useful feedback!", "Thanks a lot for your comments and suggestions. In the following we address three of the points you raised:\n\n1. Alternative measure of surprise: We have added a discussion on this point as a general comment above.\n\n3. Variational objective. In this paper the main idea was to test the effectiveness of the memory controller mechanism coupled with a relational decoder. It is definitely possible to adopt a variational objective in the architecture, and it would be a very interesting avenue for future work. Thank you for the suggestion.\n\nAdditional experiments: as you suggested, we have carried out more experiments to further consolidate our presentation of the model. We have applied APL to a set of continual learning experiments suggested by reviewer 3 and show that APL performs on par with progressive networks. These results are included in the final version of the paper along with some pointers to the relevant literature.\n\nIn light of the positive nature of your reviews, we hope that these comments and the additional experiments can sway you to increase your rating of the paper.\nThanks again for the useful comments!", "Several comments asked about alternatives to the surprise measure. We add a brief discussion below:\n\nThe proposed surprise measure uses the cross-entropy between the label prediction and the true label. This is equivalent to measuring how many more bits of information a perfect classifier would carry about the label as compared to the current model. In other words, this is the information we are missing about the label. To compress information ideally, we want to store data which contains a lot of missing information, and discard redundant information. 
The approach we proposed simply thresholds on a provided number of bits derived from first principles. With more computational capacity it might be possible to optimize this threshold value via grid search, but the current value seems to provide good results.\nAnother option would be to use the classification accuracy as a proxy for surprise: if we make a mistake, then we store the data point. However, this may not lead to optimal compression: suppose two labels are very hard to distinguish and the correct posterior probabilities given an example from this vicinity are [A:0.48, B:0.48, C:0.02, D:0.01, ...]. If the model predicts B instead of A in this case and we already have a few examples of A and B in memory, storing an additional example won’t help much, as we will continue to make mistakes 50% of the time regardless of whether we store it or not. On the other hand, a measure based on the number of bits will be able to distinguish this case from one in which the model mistakenly places most of the probability mass on the wrong class, or simply outputs a uniform probability distribution.\n\nIt is also possible to learn a metric for surprise, for example by training a separate model which can tell us whether a given input is surprising or not. This might be particularly useful in the case of unsupervised learning, where we don’t know how similar or dissimilar each data point is to the rest of the dataset. However, it is unclear whether this would help in the case of supervised learning, where there is already a natural low-dimensional representation of how examples relate to one another (i.e. the classes). Such an exploration would be interesting as future work.", "The authors propose a novel model that reads in information, decides whether this information is surprising and hence whether or not to keep it in memory, and also utilizes information in the memory to quickly adapt or reason. The authors experimented with few-shot Omniglot classification and meta-learning reasoning tasks. \n\nNovelty:\n\nThe authors introduced a novel self-contained model that decides what to write to the external memory and makes use of the external memory for different tasks.\n\nMy comments are mostly as follows: \n\n1. The paper is well written: the problems are clearly stated, the solution is presented in a clear way, and overall it is very easy to follow.\n\n2. This is an interesting paper that combines a novel technique for writing to external memory based on surprisal with using it for more difficult tasks such as deductive reasoning. I really like the surprisal mechanism; there are cognitive/neuroscience materials that support this approach (the brain tends to write to memory things that are surprising). This also makes total sense from a machine learning perspective. \n\n3. Could another objective be used for surprisal? Also, instead of a deterministic encoder and decoder, is it possible to use a variational objective?\n\n4. The experiments look convincing.\n\nOverall a very nice paper, a nice idea; it could show more results.", "In this paper, the authors present an algorithm to generalize learned properties from few observations by using a memory store and a memory controller. The experiments show comparable results on a few-shot classification task and better performance and scalability for the case when the number of labels is unknown.\n\n- The paper is well-written and easy to follow in general. The notations and model specifications are clear. 
\n\n- The idea of incorporating an external memory store to save previous experiences is interesting, especially without the need to backpropagate through the memory at each step. It is done by aligning a query with the embeddings that are stored in the memory using a k-nearest-neighbor search with a Euclidean distance measure. However, I am not quite sure about how this is done in practice. It is stated in the paper that this alignment needs to emerge as a byproduct of training, which is achieved by optimizing the embeddings to be as class-discriminative as possible. Isn't this implicitly optimizing part of the memory? I think more clarification would help a lot in understanding this part.\n\n- I liked using a memory controller that decides whether a point is 'surprising'. The authors define surprise as the negative log of the predicted probability of the label. I was wondering if they considered other measures, and investigated the effects that they might have. I think a brief discussion would be helpful.\n\n- I am not an expert in this area, but the experiments look convincing in general. Results in Table 1 corresponding to the 423-way setting are convincing since the proposed algorithm is the only candidate that is able to perform the task with relatively good performance. On the ImageNet dataset, the results are comparable to Inception-ResNet-v2 for the fixed-label case. However, more in-depth experiments or settings such as top-5 accuracy are needed to justify the performance of the algorithm on this dataset. For the number analogy task, the algorithm performs well in achieving high accuracy.\n\n- The title of the paper is too generic. From the looks of it, adaptive posterior learning should cover a wider set of tasks or probabilistic models, but it does not. So to avoid confusion (and the expectation that comes with this name), I strongly suggest that the authors change the title or make it more specific to actually represent what is discussed in the paper.\n\n- In Figure 4c, I think the x label should be \"class number\" not \"number of classes\". " ]
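To complement the representation-alignment discussion above, a minimal sketch of the gradient-free Euclidean k-nearest-neighbour memory lookup might read as follows (illustrative only; not the authors' code).

```python
import numpy as np

def query_memory(keys, labels, query, k=5):
    # Euclidean k-nearest-neighbour lookup; no gradients flow through the memory
    d2 = np.sum((keys - query) ** 2, axis=1)
    idx = np.argsort(d2)[:k]
    return keys[idx], labels[idx]
```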
[ -1, 7, -1, -1, -1, -1, -1, 6, 7 ]
[ -1, 4, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2019_ByeSdsC9Km", "iclr_2019_ByeSdsC9Km", "BJxh2pdP2X", "BJxh2pdP2X", "r1lzbPiPnX", "SyTTVGKnQ", "iclr_2019_ByeSdsC9Km", "iclr_2019_ByeSdsC9Km", "iclr_2019_ByeSdsC9Km" ]
iclr_2019_ByetGn0cYX
Probabilistic Planning with Sequential Monte Carlo methods
In this work, we propose a novel formulation of planning which views it as a probabilistic inference problem over future optimal trajectories. This enables us to use sampling methods, and thus, tackle planning in continuous domains using a fixed computational budget. We design a new algorithm, Sequential Monte Carlo Planning, by leveraging classical methods in Sequential Monte Carlo and Bayesian smoothing in the context of control as inference. Furthermore, we show that Sequential Monte Carlo Planning can capture multimodal policies and can quickly learn continuous control tasks.
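As a bare-bones sketch of planning by sequential importance resampling in the spirit of this abstract, one might write the following; the paper's actual weights also involve the policy and a value function (as the reviews below discuss), so everything here is an illustrative assumption.

```python
import numpy as np

def smc_plan(s0, propose, model, reward, n_particles=64, horizon=10, rng=np.random):
    """Plan by resampling imagined trajectories in proportion to exp(reward)."""
    states = np.repeat(s0[None, :], n_particles, axis=0)
    root_actions = None
    for t in range(horizon):
        actions = propose(states)        # a_t ~ proposal(. | s_t)
        nxt = model(states, actions)     # s_{t+1} from the learned model
        logw = reward(states, actions)   # log-weights ~ r_t under control-as-inference
        w = np.exp(logw - logw.max())
        w /= w.sum()
        idx = rng.choice(n_particles, size=n_particles, p=w)  # multinomial resampling
        states = nxt[idx]
        # keep track of each surviving particle's first action
        root_actions = actions[idx] if root_actions is None else root_actions[idx]
    return root_actions[rng.randint(n_particles)]  # act with a surviving first action
```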
accepted-poster-papers
This paper presents a new approach for posing control as inference that leverages Sequential Monte Carlo and Bayesian smoothing. There is significant interest from the reviewers in this method, and also an active discussion about this paper, particularly with respect to the optimism bias issue. The paper is borderline and the authors are encouraged to address the desired clarifications and changes from the reviewers.
test
[ "rJgIdijLl4", "B1xd7Ks8gE", "HJe9yYuVlN", "SJgSEm1yg4", "BJebbLfA14", "Sklapa-0kN", "S1xzTol01E", "rJl3_Zc2y4", "HkeszuKYkE", "BJlqzpD_14", "Bkgqc0IOk4", "SylcRv_DhQ", "ByxFxKEqRQ", "rylLxlrqR7", "ryeUz24c0m", "H1laAFN9CQ", "B1x5aFNqCX", "rJerZg4c0m", "HJejH5YDnm", "SJgI4p6O2m" ]
[ "author", "author", "public", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Our final results with CEM+value function show no improved performance overall over vanilla CEM. This seems mainly due to the fact that the CEM policy and the SAC value function do not match and our value/Q losses diverge.", "Thank you for the more detailed answer, we think we finally understood the source of our disagreement. We believe we do not have the same definition of optimism bias, and while we do not suffer from any agent's delusion about the world, we do suffer from an overestimation bias of the mean return.\n\n1. In brief, the issue we believe you are talking about is the objective itself and thus intrinsic to the posterior.\nIndeed, we do maximize log-Expectation-exp(Return) which is an upper bound on the expected return. Thus maximizing our objective might not mean that we have a good expected return. This is common in many control as inference methods.\nWe are not certain what terminology is used in RL, but we would rather call that an overestimation bias.\n\n2. The optimism bias, even in psychology, is a delusion of the agent about the world, ie the agent believes the world will lead to unrealistically more desirable outcomes ( ie q(s'|s, a) = p(s'|s,a,O) instead of p(s'|s,a) ).\nThis is actually the issue we were mentioning from the beginning and we explained why we do not suffer from it.\n\nWhile point 2 is not an issue for us, point 1 as you raised is indeed one. We will add a paragraph in the final version of the paper to explain the distinction.\nWhy don't we suffer heavily from 1 then?\nOur guess is that Mujoco is very close to deterministic and our model of the world learns very rapidly to predict the next state with a very low variance, thus we believe our transitions are close to deterministic, making this less of an issue.", "Hi, I think your work is interesting and have some questions as a reader of your work. \n\n1. I cannot figure out how the i.i.d. prior for the action sequences, i.e., \\prod_{t=1}^T p(a_t), can be used. I also checked Sergey Levine's tutorial and review on \"RL as Inference\", but i.i.d. action sequences are not shown in that tutorial. Would you please clarify this part? Personally, I think this part is quite weird.\n\n2. Any plan to open your source code?\n\n3. I wonder whether you've done a wall-clock-time comparison between model-free RL, e.g., SAC, and your work. ", "Yes, your understanding of the optimism bias is incorrect.\n\nThe problem does not stem from inaccuracies in q, in fact, when q = p_env, the optimism bias is present. As you defined in your paper, p(x | O) \\propto p_env(x) \\exp(\\sum r_t). The problem is that p(x | O) incorporates exp(\\sum r_t) which biases samples of x towards states and actions with high reward. This is fine for actions, but causes an optimism bias for state transitions. Note that p(x | O) uses the real environment model and does not even depend on q, yet there is still optimism bias. As a result, under the posterior p(x | O), p(s' | s, a, O) != p_env(s' | s, a) which is the optimism bias meant by Levine.\n\nFor LQR systems, we compute p(x | y) w/o the reward (ie., no exp(\\sum r_t)) term. As a result, there is no optimism bias.\n\n\n\n", "CEM+value baseline\n===\n\nYes this is an option!\nSo the proposed algorithm would be to use regular CEM, but for the optimization to maximize \\sum_i=1^h r_i + V_{t+h+1} where V is the value function from SAC instead of just \\sum_i=1^h r_i.\n\nWe are currently running it, and we do have some preliminary results. 
It does seem to do better on Hopper (some seeds reach a return of around 1000, which vanilla CEM never did, while others are still very low); we don't see improvements over CEM for HalfCheetah and Walker2d for now.\n\nHowever, just a few points:\n- It seems to have some instabilities, e.g., the value and Q-losses seem to always diverge (>>> billions, while for (SIR)-SAC they are around 1 to 10). We believe this may be because it is using the value function from SAC while using the policy from CEM, which could be really different. In our work, we would expect our planning policy to be closer to the SAC policy, as the SAC policy was actually used as a proposal.\n- It is probably possible to augment CEM with a value function in a principled and more stable way, but we think it is a contribution in itself and should be explored in a full paper.", "Hello,\n\nWe believe that we have addressed the points raised in your review (notably the ESS plots + more complex experiments).\nDid you also have time to look at the updated version of the paper?\n\nLooking forward to hearing from you soon,\nThank you.", "Optimism bias\n===\n\nFirst, thank you for the fast answer, this is really appreciated. We furthermore recently had the opportunity to discuss our work with various researchers, and this question was raised several times. Therefore we are now convinced that it should be discussed in the final version of the paper no matter what conclusion our discussion reaches.\n\nFor clarity, p(s'|s, a) is what we called p_env(s'|s, a) in our work, while q(s'|s, a) would be p_model(s'|s, a).\n(We were actually wondering if that notation would be clearer to use in the paper as well.)\n\nHow we understand the optimism bias\n---\nWhen optimizing the posterior with a variational proposal q, e.g., KL(q(x)||p(x|O)), we obtain the objective described in Levine 2018, sec 2.4, eq 10. It contains the expectation under q of the reward, the expectation under q of log p(s'|s, a), and the entropy of q.\nThe important point is that there is a divergence between the transitions given by q and by p.\n\nHowever, if we maximize this objective over q, then we will learn a wrong transition model that assumes overly optimistic transitions.\nIndeed, this is because the reward signal/optimality has been used implicitly to train the transition model q(s'|s,a). The transition model is actually trying to match p(s' | s, a, O) instead of p(s' | s, a) [Levine 2018, sec 2.4, eq 9]. This is due to the fact that the factorizations of p and q are different. This learned transition model is wrong, and we believe this is what is called the optimism bias.\n\nThis is corroborated by [Levine 2018 sec 3]:\n> The problematic nature of the maximum entropy framework in the case of stochastic dynamics, discussed in Section 2.3 and Section 2.4, in essence amounts to an assumption that the agent is allowed to control both its actions and the dynamics of the system in order to produce optimal trajectories, but its authority over the dynamics is penalized based on deviation from the true dynamics.\n\n\nWhy we believe we don't suffer from it and why we think Levine 2018 corroborates our view\n---\n\nThe solution proposed by Levine is to fix q(s'|s,a) to p(s'|s,a) [Levine 2018 sec 3.1]; that way, q(s'|s,a) does not \"see\" the reward/optimality and thus can't be over-optimistic. 
This can be seen as a type of variational inference as well, where some structure (here q(s'|s,a) == p(s'|s,a)) is forced into the variational distribution [Levine 2018 sec 3.2].\n\nMore generally, the issue we have to avoid is that q(s'|s,a) should NOT be trained to match p(s'|s, a, O), as jointly optimizing everything would do.\nIn our case, we specifically force q(s'|s, a) to match p(s'|s, a) by doing it explicitly, training q(s'|s, a) to match p(s'|s, a) by MLE.\n\nFiltering, control and the posterior\n---\nMore generally, targeting a posterior like we do seems to be very widespread and established in the filtering and control communities. For instance, a Kalman smoother perfectly estimates the posterior p(x_{1:T} | y_{1:T}) for linear-Gaussian systems. \nDo you believe that these methods also suffer from the optimism bias?\nOur understanding is that they don't, as the transition model (even when imperfect) is not trained to optimize the posterior but is either known, modeled by hand, or estimated by MLE from transition data (as we do).\n\n\nI'd like to re-emphasize that we are open to the discussion. \n- If you can convince us that we do suffer from the optimism bias, we'll gladly add a subsection discussing it, why we think our method still works, and perhaps how we could improve on it.\n- If we can convince you that this is not an issue in this work, we believe we should still state in our paper why this is the case.\n\nPlease feel free to detail your thoughts and tell us exactly where you disagree with us.", "1. As done in your proposal, can the value function from SAC be used?\n\n2. The optimism bias does not stem from model error. The exact posterior with a perfect model suffers from optimism bias in stochastic environments. This is what is meant by Levine 2018.\n", "Glad to hear that the experiments are now corrected. My rating remains the same as before.", "1) “However, baselines that establish the claim that SMC improves planning which leads to improved control are missing (such as CEM + value function).”\n\nWe understand that CEM with a value function would be an interesting baseline, but we are not aware of any work that introduced what you are mentioning. Could you point us toward relevant work using CEM with a value function? For example, even the most recent work we could find using CEM (Hafner 2018) does not use a value function.\n\nFor instance, it is unclear to us how the value function should be learned. The most natural way to learn a value function would be to do it online (i.e., learn the value function induced by the non-parametric CEM policy). Another alternative would be to learn a value function offline, but this would be expensive since it would require a full planning step, i.e., querying the generative model for multiple steps and then correcting the action chosen. 
Then we could either correct the expectation with importance sampling or use a Q-function similar to SAC.\n\nWe think there are many ways this could be designed, leading to various performances and behaviors: this is a very interesting direction, but we believe it would require a full paper rather than being introduced as a baseline.\n\nIn any case, we believe we have very strong evidence to support our claim that SMC improves the sample efficiency of the model-free proposal (the section 5.2 experiments were done with 20 seeds following best practices from Henderson 2017 and Colas 2018, which is greatly superior to what is usually done in the field).\n\n2) “The optimism bias stems from targeting the posterior, and is not due to errors in modeling the transitions”\n\nAre you referring to “exact inference in the graphical model produces an “optimistic” policy that assumes some degree of control over the system dynamics.” - Levine 2018, section 5.4?\nIn our case, the model is NOT trained jointly with the policy (only from buffer data), so the policy does not assume any control over the system’s dynamics; thus our posterior is not overly optimistic.\n", "The optimism bias stems from targeting the posterior, and is not due to errors in modeling the transitions.", "The authors formulate planning as sampling from an intractable distribution motivated by control-as-inference, propose to approximately sample from the distribution using a learned model of the environment and SMC, then evaluate their approach on 3 MuJoCo tasks. They claim that their method compares favorably to model-free SAC and to CEM and random shooting (RS) planning with model-based RL.\n\nThis is an interesting idea and an important problem, but there appear to be several inconsistencies in the proposed algorithm and the experimental results do not provide compelling support for the algorithm. In particular,\n\nLevine 2018 explains that with stochastic transitions, computing the posterior leads to overly optimistic behavior because the transition dynamics are not enforced, whereas the variational bound explicitly enforces that. Is that an issue here?\n\nThe value function estimated in SAC is V^\\pi, the value function of the current policy. The value function needed in Sec 3.2 is a different value function. Can the authors clarify this discrepancy?\n\nThe SMC procedure in Alg 1 appears to be incorrect. It multiplies the weights by exp(V_{t+1}) before resampling. This needs to be accounted for by setting the weights to exp(-V_{t+1}) instead of uniform. See, for example, auxiliary particle filters.\n\nThe experimental section could be significantly improved by addressing the following points: \n* How was the planning horizon h chosen? Is the method sensitive to this choice? What is the model accuracy?\n* Does CEM use a value function? If not, it seems like a reasonable baseline to consider CEM w/ a value function to summarize the values beyond the planning horizon. This will evaluate whether SMC or including the value function is important. \n* Comparing to state-of-the-art model-based RL (e.g., one of Chua et al. 2018, Kurutach et al. 2018, Buckman et al. 2018). \n* How were the task # of steps chosen? They seem arbitrary. 
What is the performance at 1 million and 5 million steps?\n* Was SAC retuned for this small number of samples/steps?\n* Clarify where the error bars come from in Fig 5.2 in the caption.\nAt the moment, SMCP is within the error bars of a baseline method.\n\nComments:\n\nIn the abstract, the authors claim that the major challenges in planning are: 1) model compounding errors in roll-outs and 2) the exponential search space. Their method only attempts to address 2), is that correct? If so, can the authors state that explicitly?\n\nRecent papers (Chua et al. 2018, Kurutach et al. 2018, Buckman et al. 2018, Ha and Schmidhuber 2018) all show promising model-based results on continuous state/action tasks. These should be mentioned in the intro.\n\nThe connection between Gu et al.'s work on SMC and SAC was unclear in the intro, can the authors clarify?\n\nFor consistency, ensure that sums go to T instead of \infty.\n\nI found the discussion of SAC at the end of Sec 2.1 confusing. As I understand SAC, it does try to approximate the gradient of the variational bound directly. Can the authors clarify what they mean?\n\nAt the end of Sec 2.2, the authors claim that they tackle the particle degeneracy issue (a potentially serious issue) by "selecting the temperature of the resampling distribution to not be too low." I could not find further discussion of this anywhere in the paper or appendix.\n\nSec 3.2 mentions an action prior for the first time. Where does this come from?\n\nSec 3.3 derives updates assuming a perfect model, but we learn a model. What are the implications of this?\n\nPlease ensure the line #'s and the algorithm line #'s match.\n\nModel learning is not described in the main text though it is a key component of the algorithm. The appendix lacks details (e.g., what is the distribution used to model the next state?) and contradicts itself (e.g., one place says 3 layers and another says 2 layers).\n\nIn Sec 4.1, a major difference between MCTS and SMC is that MCTS runs serially, whereas SMC runs in parallel. This should be noted, and then it's unclear whether SMC-Planning should really be thought of as the maximum entropy tree search equivalent of MCTS.\n\nIn Sec 4.1, the authors claim that Alpha-Go and SMCP learn proposals in similar ways. However, SMCP minimizes the KL in the reverse direction (from that stated in the text). This is an important distinction.\n\nIn Sec 4.3, the authors note that Gu et al. learn the proposal with the reverse KL from SMCP. VSMC (Le et al. 2018, Naesseth et al. 2017, Maddison et al. 2017) is the analogous work to Gu et al. that learns the proposal using the same KL direction as SMCP. The authors should consider citing this work as it directly relates to their algorithm.\n\nIn Sec 4.3, the authors claim that their direction of minimizing KL is more appropriate for exploration. Gu et al. suggest the opposite in their work. Can the authors justify their claim?\n\nIn Sec 5.1, the authors provide an example of SMCP learning a multimodal policy. This is interesting, but can the authors explain when this will be helpful?\n\n====\n\n11/26\nAt this time, the authors have not responded to reviews. I have read the other reviews. Given the outstanding issues, I do not recommend acceptance.\n\n12/7\nAfter reading the author's response, I have increased my score. However, baselines that establish the claim that SMC improves planning which leads to improved control are missing (such as CEM + value function).
Also, targeting the posterior introduces an optimism bias that is not dealt with or discussed.", "We would like to thank the reviewer for this very thorough review. We believe these comments have made the paper clearer and stronger.\n\n1) "[...] the experimental results do not provide compelling support for the algorithm."\n\nWe agree the initial results were not compelling in that regard. We have updated the results, and we now believe the performance of our planning method is clearly visible. We used 20 seeds and also added a significance test following guidelines by Colas et al. 2018 in Appendix A.7. We furthermore added more experimental details in Appendices A.5 and A.8.\n\n2) "Levine 2018 explains that with stochastic transitions, computing the posterior leads to overly optimistic behavior because the transition dynamics are not enforced, whereas the variational bound explicitly enforces that. Is that an issue here?"\n\nOur model is trained by maximum likelihood as in Chua et al. 2018, only from data, separately from the policy and planning. Thus, the policy has no control over the system dynamics; hence the model is not encouraged to yield over-optimistic transitions. We have added details about the model training procedure in the experiments section and have updated our pseudo-code for clarity.\n\n3) "The value function estimated in SAC is V^\pi, the value function of the current policy. The value function needed in Sec 3.2 is a different value function. Can the authors clarify this discrepancy?"\n\nIndeed. However, as we do not have access to the optimal value function, we use the current value function of SAC as a proxy. As the SAC policy converges toward optimality, so will its value function. Therefore we think this is a sensible practical choice, similar to what is done in actor-critic methods, for instance.\n\n4) "The SMC procedure in Alg. 1 appears to be incorrect. It multiplies the weights by exp(V_{t+1}) before resampling. This needs to be accounted for by setting the weights to exp(-V_{t+1}) instead of uniform. See for example auxiliary particle filters."\n\nYes, there was indeed an issue with the weight update that we have now fixed, and the fix does align with your intuition.\nTo be precise, we believe the weight update should be done by multiplying the previous weight by exp(r - log pi + V(s_t) - log E_{s_t | s_{t-1}, a_{t-1}} exp V(s_t)).\nWe thought (wrongly) that the log-expectation-exp term was equal to the normalization constant obtained when normalizing the weights, and thus redundant. However, that normalization constant takes its expectation under the states the particles occupy at time t, rather than under the transition dynamics as it should.\nBy fixing the update, we now believe we have the right formula, and this allows us to have an unweighted empirical estimate of the posterior.\nThis is indeed similar in spirit to the auxiliary particle filter; we thank you for the reference and for pointing out the issue, as it helped us derive the right update formula.
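To make the corrected update concrete, here is a minimal NumPy sketch of the per-particle log-weight computation described above, together with the effective sample size (ESS) diagnostic referred to just below (the array shapes and the Monte Carlo estimate of the log-expectation term are our own illustrative choices, not the exact implementation):

    import numpy as np
    from scipy.special import logsumexp

    def update_log_weights(log_w, r, log_pi, v_next, v_model_samples):
        # log_w, r, log_pi, v_next: arrays of shape (n_particles,)
        # v_model_samples: shape (n_particles, K), V evaluated at K next
        # states sampled from the transition model for each particle.
        # Monte Carlo estimate of log E_{s'|s,a} exp V(s'):
        log_norm = logsumexp(v_model_samples, axis=1) - np.log(v_model_samples.shape[1])
        return log_w + r - log_pi + v_next - log_norm

    def effective_sample_size(log_w):
        w = np.exp(log_w - logsumexp(log_w))  # normalized weights
        return 1.0 / np.sum(w ** 2)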
\n\n5) "How was the planning horizon h chosen? Is the method sensitive to this choice? What is the model accuracy?" + "At the end of Sec 2.2, the authors claim that they tackle the particle degeneracy issue (a potentially serious issue) by "selecting the temperature of the resampling distribution to not be too low." I could not find further discussion of this anywhere in the paper or appendix."\n\nWe did not do any extensive hyperparameter search in the beginning. We tried mostly temperatures in the range [1-10]. We checked the ESS while training to make sure we did not have any weight degeneracy issue. See A.8 for a plot of the ESS during training.\nWe have tried horizons from 5 to 50, and while the performance is pretty stable across this range of horizons, h~20 seems a good value to work with for Walker2d and HalfCheetah. Hopper was more challenging, and we found that shorter horizons typically worked marginally better.\nPath degeneracy is indeed a very serious issue, and we definitely suffer from it even when tuning the temperature. While some modern smoothing algorithms like Particle Gibbs with Ancestor Sampling can alleviate it, our goal in this work is to introduce a new, simple and motivated way of doing planning rather than to obtain the best performance possible.\n", "We would like to thank the reviewer for the encouraging comments and important references.\n\n"When it comes to your SMC algorithm you will suffer from path degeneracy. [...] However, this can easily be fixed via backward simulation [...]"\n\nYes, we thank you for the suggestion. Particle Gibbs with Ancestral Sampling had also been brought to our attention to tackle this issue, but we chose to keep it simple in this work to focus more on introducing the idea rather than on getting the best results.\n\n1) "There are no theoretical results on the properties of the proposed approach. However, given the large body of literature when it comes to the analysis of SMC methods I would expect that you can provide some results via the nice bridge that you have identified."\n\nWe believe our method is grounded when p_model = p_env and we have access to the optimal value function.\nHowever, in most RL settings both these assumptions are violated, which lessens the impact of the analysis.\nA very interesting theoretical analysis we wish to make is to investigate whether we can still provide some guarantees when the model and value function are approximately optimal, but a full theoretical study is still upcoming and out of scope of this paper.\n\n2) "Would this be possible to implement in a real-world setting with real-time requirements?"\n\nWe think it is possible if we replan every few steps instead of every step and keep a reasonable number of particles. Several methods bringing SMC to real-time systems exist. For instance, for embedded systems with real-time constraints, an FPGA implementation of SMC has been proposed (Ndeved et al., 2014, https://www.sciencedirect.com/science/article/pii/S1474667016429812).\nWe also believe that, in addition to a good search algorithm, we need to learn good representations (e.g., if the input is an image) and plan in the latent space.\n\n3) "A very detailed question when it comes to Figure 5.2 (right-most plot), why is the performance of your method significantly degraded towards the end? It does recover indeed, but I still find this huge dip quite surprising."\n\nIndeed. We had more time to investigate this during the review period, and we realized that some of our jobs were killed around step 40k. We have since rerun all our experiments and closely monitored that no such thing happened again. We are now confident our updated results are much stronger and show with high confidence the real performance of our method.\n", "We would like to thank the reviewer for the correction, and we have added additional experimental details to better explain the behaviour of the method.\n\n1) "[...] the SMC algorithm adopted [...] 
is the simplest and earliest SMC algorithm adopted. [...] I do not see the modern parts of these algorithms."\n\nThis is a fair point; we have corrected the sentence.\n\n2) "The experiment section reports the return, but it is unclear to me how the SMC algorithm performs in this case. For example, what is the effective sample size (ESS) in these settings?"\n\nIn this case, the SMC algorithm now clearly outperforms the SAC baseline, as you can see in the updated version of the plot. Furthermore, we have added a new section in Appendix A.8 describing the evolution of the ESS during training. While it is not very high, it is usually around 15% of the sample size, which we believe is reasonable and indicates that we do not suffer heavily from weight degeneracy.\n\n3) "But it is unclear to me how the algorithm proposed is applicable in complex continuous tasks as claimed."\n And "The experiment described seems to be a 2-dimensional setup. How does the algorithm perform with a high-dimensional planning problem?"\n\nYes, the 2d experiment is merely illustrative; the complex continuous tasks mentioned are illustrated with the experiments on Mujoco in subsection 2 of the experiments. In section 5.2, we have updated our performance results on the 3 classic Mujoco environments. Their respective state/action dimensions are: \nWalker2d-v2, state (17,), action (6,)\nHopper-v2, state (11,), action (3,)\nHalfCheetah-v2, state (17,), action (6,)\n\nStill, we have removed the mention of "high dimensional", as control tasks (i.e., Mujoco), while complex, are maybe not what the statistical community would call "high dimensional". Also, vanilla particle filters are known to suffer from the curse of dimensionality, especially if the proposal is poor.\nA solution we leave to future work would be to do the planning in latent space; in that case, our method could scale even with very high dimensional inputs.\n", "\n13) "Sec 3.2 mentions an action prior for the first time. Where does this come from?"\n\nThis action prior comes from the factorization of the HMM model in section 2.1 (it is typically considered constant or already included in the reward). We follow the notation of Levine 2018, which omits it for conciseness. We decided to add a footnote on eq 2.1 for clarity, as well as a section in Appendix A.2.\n\n14) "Sec 3.3 derives updates assuming a perfect model, but we learn a model. What are the implications of this?"\n\nThis is a necessary assumption that most planning algorithms (CEM, LQR, ...) make. The implication of this assumption is model compounding errors on the plan. To be more robust to model errors, it is typical to replan at each time step (Model Predictive Control), as we do. We added some clarification in this subsection.\n\n15) "Please ensure the line #'s and the algorithm line #'s match."\n\nWe have updated the algorithm section. Now the lines should match, and the algorithm is written in a more comprehensive way.\n\n16) "Does CEM use a value function? If not, it seems like a reasonable baseline to consider CEM w/ a value function to summarize the values beyond the planning horizon. This will evaluate whether SMC or including the value function is important."\n\nWe think it is fair to compare to it as it is. Indeed, CEM is a method that has been used successfully in multiple settings (e.g., Tetris); it is the default algorithm for planning in the deep RL community (e.g., Chua 2018) and is a baseline algorithm for us.\nMoreover, as we do not do any learning in the toy example, SMCP does not use a value function.
Even then, we see that our algorithm can handle multimodality while CEM cannot.\n\n17) "Model learning is not described in the main text though it is a key component of the algorithm. The appendix lacks details and contradicts itself." + "Comparing to state-of-the-art model-based RL."\n\nWe corrected the inconsistencies and added details. Note that we used a fairly standard probabilistic model (Gaussian likelihood) and focused most of the space on describing our contribution, the planning algorithm, since any good model would work well.\nThese works are indeed relevant, but also complementary to ours. For example, Model Ensemble could potentially improve our results and those of the planning baselines. We added references to these papers in the text.\n\n18) "In Sec 5.1, the authors provide an example of SMCP learning a multimodal policy. This is interesting, but can the authors explain when this will be helpful?"\n\nRL algorithms are known to suffer from mode-seeking behavior and often only discover suboptimal solutions. We believe the ability to handle multimodality could help discover new solutions to a task.\n\n", "\n6) "How were the task # of steps chosen? ..." \n\nAs stated in the conclusion, our algorithm is expensive. Given that we train the model and the SAC networks (policy, value and Q functions), and we perform a full planning step at each time step (MPC), training for 250k steps already takes a few days.\nWe decided to allocate our computing resources to producing more seeds rather than longer runs. It should be noted, however, that we do not expect our algorithm to keep outperforming SAC in the long run. We believe this is a behavior to be expected when planning with imperfect models: in the long run, the model-free method will find a good policy while the planning part will still suffer from model errors. We think this is also the case for humans; when confronted with a new situation we tend to plan, but as we become more familiar with it, our reflexes/habits become more accurate.\nAs a solution, we could also learn when and how long to plan, but we believe this is out of scope for this work.\n\n\n7) "Was SAC retuned for this small number of samples/steps?"\n\nNo, it was not; we took the default values from the SAC paper. However, we think this is fair since we use the exact same version of SAC for our proposal distribution, and thus the only difference comes from the planning algorithm.\n\n8) "Clarify where the error bars come from in Fig 5.2 in the caption."\n\nYes, we have added a clarification. The error bars are 1 standard deviation from the mean with 20 seeds for each algorithm. This is the default setting for the confidence interval computation with the seaborn package.\n\n9) "In the abstract, the authors claim that the major challenges in planning are: 1) model compounding errors in roll-outs and 2) the exponential search space. Their method only attempts to address 2), is that correct? If so, can the authors state that explicitly."\n\nYou are correct; we reformulated the introduction to clearly state the problem we are tackling: the search algorithm. We acknowledge in the related work and conclusion sections that the other issue is very important, although it is not part of our contribution.\n\n10) "I found the discussion of SAC at the end of Sec 2.1 confusing. As I understand SAC, it does try to approximate the gradient of the variational bound directly. Can the authors clarify what they mean?" \n\nWe clarified the discussion. We think the distinction is mostly that a policy gradient algorithm would use a Monte Carlo return, while SAC uses soft value functions and the policy is taken to be the Boltzmann distribution over the soft-Q values. This discussion was inspired by Section 4.2 of Levine (2018).
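As a small illustration of that distinction (our own discrete-action caricature, not code from the paper), the Boltzmann policy over soft-Q values is simply:

    import numpy as np

    def boltzmann_policy(q_values, alpha=1.0):
        # pi(a|s) proportional to exp(Q_soft(s, a) / alpha), where alpha is
        # the entropy temperature; q_values has shape (n_actions,).
        z = (q_values - q_values.max()) / alpha  # subtract max for stability
        p = np.exp(z)
        return p / p.sum()

In continuous action spaces, SAC instead fits a parametric policy to this target distribution rather than normalizing it exactly.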
\n\n11) "The connection between Gu et al.'s work on SMC and SAC was unclear in the intro, can the authors clarify?"\n\nWe think this discussion is actually better suited to the related work section. There, we have now clarified the connection between Gu's work on SMC and SAC.\n\n12) "In Sec 4.1, a major difference between MCTS and SMC is that MCTS runs serially, whereas SMC runs in parallel. This should be noted and then it's unclear whether SMC-Planning should really be thought of as the maximum entropy tree search equivalent of MCTS." + "In Sec 4.1, the authors claim that Alpha-Go and SMCP learn proposals in similar ways. However, SMCP minimizes the KL in the reverse direction (from that stated in the text). This is an important distinction." + "In Sec 4.3, the authors note that Gu et al. learn the proposal with the reverse KL from SMCP. VSMC (Le et al. 2018, Naesseth et al. 2017, Maddison et al. 2017) is the analogous work to Gu et al. that learns the proposal using the same KL direction as SMCP. The authors should consider citing this work as it directly relates to their algorithm." + "In Sec 4.3, the authors claim that their direction of minimizing KL is more appropriate for exploration. Gu et al. suggest the opposite in their work."\n\nThe reviewer correctly pointed out some inconsistencies and vagueness in the related work section. We decided to rewrite it concisely and to focus only on pointing toward work relevant to ours.\n", "We would like first to thank all reviewers for their work. We did a major revision of the paper based on the issues pointed out. We believe the current form is now much clearer and stronger and addresses the points raised by the reviewers.\n\nOutline of the revisions:\n- Simplified the abstract and clarified the introduction.\n- Fixed small typos and inaccuracies in section 2 (Background).\n- Reworked sections 3.3 and 3.4 (SMCP) and fixed an issue in the weight update.\n- Added new, strong and significant experimental results on Mujoco.\n- Reworked and wrote a more comprehensive section 5 (Related work) and discussed relevant papers, such as the ones pointed out by the reviewers.\n- Appendices: included additional details and experimental figures.\n", "This paper proposes a sequential Monte Carlo Planning algorithm that casts planning as an inference problem solved by SMC. The problem is interesting and the paper has a nice description of the related work. In terms of the connection between the problem and Bayesian filtering as well as smoothing, the paper has novelty there. But it is unclear to me how the algorithm proposed is applicable in complex continuous tasks as claimed.\n\nIn the introduction, the authors wrote that "We design a new algorithm, Sequential Monte Carlo Planning (SMCP), by leveraging modern methods in Sequential Monte Carlo (SMC), Bayesian smoothing, and control as inference". From my understanding, the SMC algorithm adopted is the bootstrap particle filter, which is the simplest and earliest SMC algorithm. The Bayesian smoothing algorithm described is also standard. I do not see the modern parts of these algorithms.\n\nThe experiment section reports the return, but it is unclear to me how the SMC algorithm performs in this case.
For example, what is the effective sample size (ESS) in these settings?\n\nThe experiment described seems to be a 2-dimensional setup. How does the algorithm perform with a high-dimensional planning problem?\n\n", "Sequential Monte Carlo (SMC) has since its inception some 25 years ago proved to be a powerful and generally applicable tool. The authors of this paper continue this development in a very interesting and natural way by showing how SMC can be used to solve challenging planning problems. This is enabled by reformulating the planning problem as an inference problem via the recent trend referred to as "control as inference". While there are unfortunately no real-world experiments, the simulations clearly illustrate the potential of the approach.\nWhile the idea of viewing control as inference is far from new, the idea of using SMC in this context is clearly novel as far as I can see. Well, there has been some work along the same general topic before, see e.g.\nAndrieu, C., Doucet, A., Singh, S.S., and Tadic, V.B. (2004). Particle methods for change detection, system identification, and control. Proceedings of the IEEE, 92(3), 423–438.\nHowever, the particular construction proposed in this paper is refreshingly novel and interesting. Hence, I view the specific idea put forth in this paper as highly novel. The general idea of viewing control as inference goes far back, and there are very nice dual relationships between LQG and the Kalman filter established and exploited a long time ago.\n\nThe authors interpret "control as inference" as viewing the planning problem as a simulation exercise where we aim to approximate the distribution of optimal future trajectories. A bit more specifically, the SMC-based planning proposed in the paper stochastically explores the most promising trajectories in the tree and randomly removes (via the resampling operation) the less promising branches. Importantly, there are convergence guarantees via the use of SMC. The idea is significant in that it opens up the use of the by-now strong body of SMC methods and analysis when it comes to challenging and intractable planning problems. I foresee many interesting developments to follow in the direction laid out by this paper.\n\nWhen it comes to your SMC algorithm you will suffer from path degeneracy (as all SMC algorithms do, see e.g. Figure 1 in https://arxiv.org/pdf/1307.3180.pdf), and if h is large I think this can be a problem for you. However, this can easily be fixed via backward simulation. For an overview of backward simulation see\nLindsten, F. and Schon, T. "Backward simulation methods for Monte Carlo statistical inference". Foundations and Trends in Machine Learning, 6(1):1-143, 2013.\n\nI am positive about this paper (as clearly revealed by my score), but there are of course a few issues as well:\n1. There are no theoretical results on the properties of the proposed approach. However, given the large body of literature when it comes to the analysis of SMC methods, I would expect that you can provide some results via the nice bridge that you have identified.\n2. Would this be possible to implement in a real-world setting with real-time requirements?\n3. A very detailed question when it comes to Figure 5.2 (right-most plot): why is the performance of your method significantly degraded towards the end? It does recover indeed, but I still find this huge dip quite surprising.\n\nMinor details:\n* The initial references when it comes to SMC are wrong. The first papers are:\nN.J. Gordon, D.
Salmond and A.F.M. Smith, Novel approach to nonlinear/non-Gaussian Bayesian state estimation, IEE Proc. F, 1993.\nL. Stewart and P. McCarty, The use of Bayesian belief networks to fuse continuous and discrete information for target recognition, tracking, and situation assessment, in Proc. SPIE Signal Processing, Sensor Fusion and Target Recognition, vol. 1699, pp. 177-185, 1992.\nG. Kitagawa, Monte Carlo filter and smoother for non-Gaussian nonlinear state-space models, JCGS, 1996.\n* When it comes to the topic of learning a good proposal for SMC with the use of variational inference, the authors provide a reference to Gu et al. (2015), which is indeed interesting and relevant in this respect. However, on this hot and interesting topic there have recently been several related papers published, and I would like to mention:\nC. A. Naesseth, S. W. Linderman, R. Ranganath, D. M. Blei, Variational Sequential Monte Carlo. Proceedings of the 21st International Conference on Artificial Intelligence and Statistics, Lanzarote, Spain, April 2018.\nC. J. Maddison, D. Lawson, G. Tucker, N. Heess, M. Norouzi, A. Mnih, A. Doucet, and Y. Whye Teh. Filtering variational objectives. In Advances in Neural Information Processing Systems, 2017.\nT. A. Le, M. Igl, T. Jin, T. Rainforth, and F. Wood. Auto-Encoding Sequential Monte Carlo. arXiv:1705.10306, May 2017.\n\nI would like to end by saying that I really like your idea and the way in which you have developed it. I have a feeling that this will inspire quite a lot of work in this direction." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, 6, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "BJebbLfA14", "SJgSEm1yg4", "iclr_2019_ByetGn0cYX", "S1xzTol01E", "rJl3_Zc2y4", "HJejH5YDnm", "rJl3_Zc2y4", "BJlqzpD_14", "rylLxlrqR7", "Bkgqc0IOk4", "ByxFxKEqRQ", "iclr_2019_ByetGn0cYX", "SylcRv_DhQ", "SJgI4p6O2m", "HJejH5YDnm", "SylcRv_DhQ", "SylcRv_DhQ", "iclr_2019_ByetGn0cYX", "iclr_2019_ByetGn0cYX", "iclr_2019_ByetGn0cYX" ]
iclr_2019_Byey7n05FQ
Plan Online, Learn Offline: Efficient Learning and Exploration via Model-Based Control
We propose a "plan online and learn offline" framework for the setting where an agent, with an internal model, needs to continually act and learn in the world. Our work builds on the synergistic relationship between local model-based control, global value function learning, and exploration. We study how local trajectory optimization can cope with approximation errors in the value function, and can stabilize and accelerate value function learning. Conversely, we also study how approximate value functions can help reduce the planning horizon and allow for better policies beyond local solutions. Finally, we also demonstrate how trajectory optimization can be used to perform temporally coordinated exploration in conjunction with estimating uncertainty in value function approximation. This exploration is critical for fast and stable learning of the value function. Combining these components enable solutions to complex control tasks, like humanoid locomotion and dexterous in-hand manipulation, in the equivalent of a few minutes of experience in the real world.
accepted-poster-papers
The paper makes novel explorations into how MPC and approximate-DP / value-function approaches, with value-fn ensembles to model value-fn uncertainty, can be effectively combined. The novelty lies in exploring their combination. The experiments are solid. The paper is clearly written. Open issues include overall novelty, and delineating the setting in which this method is appropriate. The reviewers and AC are in agreement on what is in the paper. The open question is whether the combination of the ideas is interesting. After further reviewing the paper and results, the AC believes that the overall combination of ideas and related evaluations makes a useful and promising contribution. As evidenced in some of the reviewer discussion, there is often a considerable schism in the community regarding what is considered fair to introduce in terms of prior knowledge, and blurred definitions regarding planning and control. The AC discounted some of the concerns of R2 that related more to discrete action settings and theoretical considerations; these often fail to translate to difficult problems in continuous action settings. The AC believes that R3 nicely articulates the issues of the paper that can be (and should be) addressed in the writing, i.e., to describe and motivate the settings that the proposed framework targets, as articulated in the reviews and ensuing discussion.
train
[ "rJlrQ6CZxE", "SJxZOqMtnQ", "HklU137bCQ", "r1grAPeCa7", "Syl5JztsaX", "rJxWryti67", "Byen-JFsa7", "SkeXEnOipQ", "rJewsWF937", "BJgIdkoD2Q" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Dear Reviewers,\n\nThank you once again for taking time to review our paper. Please let us know if we can answer any additional questions about the work, or if the answers to any of your questions require further discussion. If the responses were satisfactory, we kindly request that the reviewers adjust the rating appropriately. Thank you once again and we look forward to additional discussions about the work!", "In this paper, the authors propose POLO, a reinforcement learning algorithm which has access to the model of the environment and performs RL to mitigate the planning cost. For the planning, POLO uses the known model of the environment up to a fixed horizon H and then use an approximated value function in the leaf nodes. This way, instead of planning for an infinite horizon, the planning is factored to a shorter horizon, resulting in lower computation cost.\n\nThe novelty and motivation behind this approach is limited. Similar or even more general approach for discrete action space is introduced in \"Sample-Efficient Deep RL with Generative Adversarial Tree Search\" where they also learn the model of the environment and additionally consider the error due to the model estimation. There is also a clear motivation in the mentioned paper while I could not find a convincing one for the current paper. \nPutting the novel limitation aside, both of these paper, the current paper, and the paper I mentioned, suffer from very lose estimation bounds. Both of these works bound somewhat similar (not the same) things via L_inf error of value function which in practice does not necessarily result in useful or insightful upper bounds (distribution dependent bound is desired). Moreover, with the assumption of knowing the environment model, the implication of the current work is significantly limited.\n\nThe authors do a good job of writing the paper and the paper is clear which is appreciatable.\n\nIn equation 6 the authors use log-sum-exp and claim it corresponds to UCB, but they do not provide any evidence to support their claim. \n\nIn addition, the Bayesian linear regression in the tabular setting is firstly proposed in Generalization and Exploration via Randomized Value Functions and beyond tabular setting (the setting in the current paper) was proposed in Efficient Exploration through Bayesian Deep Q-Networks. \n\nThe claims in this paper are not strong enough and the empirical study does not strongly support or provide sufficient insight. For example experiments in section 3.2 does not provide much insight beyond common knowledge.\n\nWhile bridging the gap between model based and model free approaches in RL are significantly important research directions in RL, I do not find the current draft significant enough to shed sufficient light into this topic.\n\n\n\n", "Thank you to everyone for the detailed reviews and the authors for their detailed responses.\n\nNow would be a great time to hear from the reviewers as to whether their concerns\nhave been addressed, and if they wish to make any score adjustments.\n\nThanks in advance for this additional input.\n-- area chair\n", "We thank all the reviewers and the area chair for taking time to read our paper and providing feedback. The summary of our responses to common questions raised by the reviewers is below. We look forward to continued discussions to address any additional questions.\n\n>>> Source of dynamics model\n\nIn this work, we assume that we have access to the ground truth dynamics model. 
We *do not* believe that this is an unreasonable assumption, especially for our motivating problems. Good models based on knowledge of physics or through learning (system identification) are available in most engineering and robotics settings. Indeed, most successful robotics results have been through use of models or simulators (Boston Dynamics, Honda, OpenAI). The work is also directly relevant for fields where dynamics models are available (e.g., character animation in graphics) and simulation to reality transfer, which is gaining a lot of interest in robotic learning.\n\nWe also emphasize that knowing the dynamics does not make the problem trivial, and does not imply that one can simply "pre-solve" the MDP and deploy the solution. Some aspects of the MDP may be revealed only at deployment time, such as the states where we may want to concentrate the approximation power; likewise, the reward function may be revealed only at deployment time (the robot knows physics but does not know which task to solve). We thus feel that algorithms for real-time action selection are an important component to enable robots to behave competently in dynamic environments.\n\n>>> Novelty\n\nPOLO combines three threads of work into one coherent and elegant algorithm that produces impressive results. All reviewers have noted that the motivation and presentation of the algorithm are clear and neat, and the results are impressive. While it may be easy to postulate that bringing together these threads of work is important, the specifics of how to do so to produce robust algorithms with impressive results are highly non-trivial and far from obvious. We feel that the quality of results should be taken into account when assessing the novelty. Indeed, one could argue that landmark results like AlphaGo and AlphaZero do not make deep contributions to any sub-field of RL/ADP, yet they remain among the most impressive feats due to bringing together different algorithmic sub-components and showing impressive results. We also note that the use of planning/MPC for exploration, which we demonstrate in the maze example, has not been explored in continuous control.\n\n\nWe hope that the above clarifications also help to resolve other questions that were raised. In particular, our goal is *not* to bridge model-free and model-based RL methods; nor is it to provide strong performance bounds for MPC. We make no claims about the former, and we merely use the latter as a motivation to develop a practical algorithm. While these are very important questions, they are not our focus and are beyond the scope of the current submission. We kindly request that our paper be evaluated on the basis of results for the problem setting we study, as opposed to insights into other problem domains/settings.\n", "Thank you for taking time to review our paper, and for your analysis and review. We address your two concerns as follows:\n\n=======================\nRegarding source of model\n=======================\nIn this work, we assume that we have access to the dynamics model of the environment. We do not believe that this is a severe limitation, because reasonable dynamics models are known for a majority of complex engineered systems, including robotics. Indeed, most success stories in robotics are through use of models/simulators.
There is also a growing body of work and interest in simulation to reality transfer in RL for robotics, and we believe that POLO would serve as a strong baseline method for this research direction. POLO is also complementary to fields like learning dynamics models and nonlinear system identification.\n\nFurthermore, we also want to emphasize that knowing the dynamics does not make the problem trivial. Certain aspects of the MDP may be revealed only at run-time, thereby ruling out the option of essentially pre-solving the MDP before deployment time. For instance, we may not know until deployment time the states on which to concentrate the approximation power of policy search or dynamic programming methods. The reward function may also be revealed only at deployment time (the robot knows physics, but does not know what task to do till the human tells it). Thus, having algorithms that can compute good actions at run-time is critical for a variety of settings, and we show in our results that POLO outperforms MPC.\n\n=======================\nComparisons to model-free RL\n=======================\nModel-free RL does not assume explicit knowledge of the dynamics, which is certainly a weaker assumption than in the POLO case. However, model-free RL has predominantly been demonstrated only in simulated environments where a model is available by definition (e.g., AlphaGo, Atari, MuJoCo). We believe that POLO would be an important contribution for researchers studying simulation to reality transfer, since it is orders of magnitude more efficient than running model-free RL in simulators. We have updated the paper to reflect this comparison more accurately.\n\n=======================\nSignificance and novelty\n=======================\nPOLO combines three important and deep threads of work: MPC, approximate dynamic programming, and exploration. The primary contribution of this work is to connect the three threads of work into a simple and elegant algorithm, as opposed to making a contribution to any one of the streams. We believe that combining these threads of work into a practical algorithm that produces impressive results is important and valuable. We emphasize that all reviewers found the motivation and presentation of the algorithmic framework to be elegant. While the combination of MPC and a value function may appear straightforward, it has not been found effective in continuous control in the past. For example, Zhong et al. study the setting of learning a value function to help MPC and found the contribution of the value function to be minimal in their settings. We also emphasize that combining MPC and uncertainty quantification to do efficient and targeted exploration for continuous control has not been studied in the past.\n\n=======================\nSummary\n=======================\nTo summarize, while each component of POLO is well studied, combining them into a practical algorithm that produces impressive results is far from obvious. Combining the different components allows POLO to synthesize and learn competent behaviors on the fly for high dimensional systems. While we assume knowledge of the dynamics model, this is not an outlandish assumption given the prevalence of complex model-based robotic control in the real world, and the growing body of work in learning dynamics models, intuitive physics, and simulation to reality transfer.
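To make the overall structure concrete, here is a schematic sketch of the loop described in this discussion (a sketch only, assuming callables env, model, planner, and fit_value with the interfaces shown; this is our reading of the framework, not the exact implementation):

    def polo_loop(env, model, value_ensemble, planner, fit_value,
                  horizon=32, n_steps=1000, update_every=16):
        # Plan online: an MPC routine scores `horizon`-step rollouts under
        # `model` and bootstraps the tail with the value ensemble.
        # Learn offline: the ensemble is periodically refit on buffered data.
        buffer = []
        s = env.reset()
        for t in range(n_steps):
            a = planner(model, value_ensemble, s, horizon)   # plan online
            s_next, r, done = env.step(a)                    # assumed interface
            buffer.append((s, a, r, s_next))
            if (t + 1) % update_every == 0:
                for v in value_ensemble:                     # learn offline
                    fit_value(v, buffer, model, horizon)     # e.g., H-step Bellman targets
            s = env.reset() if done else s_next
        return value_ensemble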
\n", "=======================\nRelated Works\n=======================\nThank you for pointing out the GATS paper; we have included a citation to it in our updated submission. As discussed earlier, the broad idea of combining planning and value function learning is not new. However, intuitions and lessons learned from discrete settings rarely transfer to continuous domains. For instance, global value or Q-learning methods have not produced great results in continuous control with high-dimensional action spaces, while DQN performs very well in Atari, which has a small number of discrete actions. Similarly, very different planning approaches are used in discrete action settings (e.g., UC-Trees) and continuous robotics problems (e.g., iLQG, PI^2, RRT). We emphasize that in the continuous control setting, we can synthesize controllers orders of magnitude more efficiently than currently used approaches like PPO in the OpenAI dexterous hand work.\n\nThanks for pointing out the other papers studying Bayesian linear regression; we have included citations to those as well. We would like to emphasize that the computational view of Bayesian regression is not the contribution of this work. Rather, we use it as a means to perform uncertainty estimation and drive exploration in the POLO framework.\n\n=======================\nAnswers to other questions\n=======================\n- Regarding equation 6, we actually *do not* claim that our approach corresponds to UCB. Rather, we only say that log-sum-exp is a risk-seeking objective and corresponds to optimism in the face of uncertainty, and this broad heuristic has been used successfully in other works.\n- Regarding Lemma 2, this is not a primary contribution of our work, and it is fairly elementary. We use it primarily as a motivation for the practical algorithm we develop. We agree that the L_inf norm bounds are loose and that tighter bounds would be great, but that is orthogonal to the main points of this paper.\n\n=======================\nSummary\n=======================\nIn summary, we have presented an elegant framework and algorithm that offers tangible benefits in the space of continuous control. This enables solutions to complex control problems orders of magnitude more efficiently than currently used techniques. The work should be evaluated based on the clean presentation and strong empirical results, as opposed to weak connections to problems and bounds we do not focus on.", "Thank you for taking time to review our paper and for the feedback. We address your concerns below, and hope that our clarifications help in appreciating the work better. We look forward to continued fruitful discussions.\n\n=======================\nSignificance & Novelty\n=======================\nPOLO combines three important and deep threads of work: MPC, approximate dynamic programming, and exploration. The primary contribution of this work is to connect the three threads of work as opposed to making a contribution to any one. We believe that combining them into a simple and elegant algorithm that produces impressive results is important and valuable. We emphasize that all reviewers found the motivation and presentation of the algorithmic framework to be elegant. While the combination of MPC and a value function may seem straightforward, it has not found wide applicability in continuous control settings in the past. For example, Zhong et al. study the setting of learning a value function to help MPC and found the contribution of the value function to be minimal in their settings. We also emphasize that combining MPC and uncertainty quantification to do efficient and targeted exploration has not been studied in the past in continuous control settings.
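As a concrete illustration of the risk-seeking log-sum-exp aggregation mentioned above (our own sketch; kappa plays the role of a temperature, applied to an ensemble's value predictions at a given state):

    import numpy as np
    from scipy.special import logsumexp

    def optimistic_value(ensemble_predictions, kappa=1.0):
        # Risk-seeking aggregate of an ensemble of value predictions:
        # recovers the max as kappa -> 0+ and the mean as kappa -> infinity,
        # hence "optimism in the face of uncertainty" for small kappa.
        v = np.asarray(ensemble_predictions, dtype=float)
        return kappa * (logsumexp(v / kappa) - np.log(v.size))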
\n\nOur empirical study attempts to isolate the individual benefits enabled by each component in the POLO framework. Firstly, we have clearly demonstrated that learned value functions can support short-horizon MPC. This has not been explored extensively in controls applications, and most MPC works do not consider learning a value function using the interaction data. Secondly, we demonstrate the utility of uncertainty quantification and MPC for exploration, through the maze example. Further, we demonstrate that MPC accelerates value function learning. While individual components may have been suggested before (Bellman himself suggested using prior experience to reduce planning computation), we present all the benefits in one elegant framework that actually achieves very strong empirical results in practice, as noted by other reviewers.\n\n=======================\nKnown dynamics model\n=======================\nFirst and foremost, we emphasize that in the known dynamics setting our algorithm significantly outperforms model-free RL methods like policy gradient. While model-free RL obviously does not require access to a model, the overwhelming majority of results in RL are in simulated environments (e.g., AlphaGo, Atari) where a model is available by design. Furthermore, the majority of successful results in robotics are also through model-based methods (e.g., Boston Dynamics' Atlas, Honda's Asimo, OpenAI's dexterous hands). Thus, one can interpret POLO as a very strong model-based baseline that model-free RL algorithms can strive to compete with, or as a powerful vehicle with direct applicability for simulation to reality transfer, which is a topic of immense interest in robot learning.\n\nFurthermore, we wish to point out that knowing the dynamics does not make the problem trivial. Certain aspects of the MDP may be revealed only at run-time, thereby ruling out the possibility of pre-solving the MDP. For instance, we may not know until deployment time the states on which to concentrate the approximation power of policy search or dynamic programming methods. The reward function may also be revealed only at deployment time (the robot knows physics, but does not know what task to do till the human tells it). Thus, having algorithms that can compute good actions at run-time is critical for a variety of settings, and we show in our results that POLO outperforms MPC.\n\nFinally, we wish to point out that we make explicit the assumption of knowing the dynamics model, and do not even attempt to bridge model-free and model-based RL methods (in the sense used in recent papers). We feel that it is important not to judge the work on the basis of a problem we are not attempting to solve.\n", "Thank you for taking time to review our paper and for the constructive feedback. We greatly appreciate the comment that you enjoyed the exposition, assertions, and results in the paper.
We look forward to continued fruitful discussions!\n\n==================\nProblem Setting\n==================\nThe agent knows the MDP dynamics, but the MDP can be very complex, with some information about the MDP revealed only at deployment time. Hence, it is not feasible in general to "pre-solve" the MDP and simply deploy the solution. For instance, we may know the state distribution only at deployment time and hence not know where to concentrate the approximation power in policy gradient or dynamic programming methods. Also, the reward function may be revealed only at deployment time (the robot knows physics but doesn't know which task to do until commanded by a human). This is the general premise of real-time MPC, which has enjoyed tremendous success in controlling complex systems in engineering and robotics. At the same time, we note that if there is a possibility to pre-solve the MDP before deployment, POLO can be used for this purpose as well, and our experiments show that POLO is more efficient than fitted value iteration.\n\n=======================\nSignificance and novelty\n=======================\nFirst and foremost, we emphasize that POLO produces very impressive results for hard continuous control tasks, as noted by all the reviewers. POLO requires 1 CPU hour as opposed to 500 CPU hours reported by OpenAI (our numbers with PG are similar as well, and we will include these with the final paper). While model-free RL obviously does not require access to a model, the overwhelming majority of results in RL (e.g., AlphaGo, Atari, MuJoCo) are in simulated environments where a model is available by design. Model-based methods have also been very successful in robotics (e.g., Boston Dynamics' Atlas, Honda's Asimo, OpenAI's dexterous hands). Thus, we believe that knowing the dynamics model is not a severe limitation. One can interpret POLO as a very strong model-based baseline that model-free RL algorithms can strive to compete with, or as a powerful vehicle with direct applicability for simulation to reality transfer, which is a topic of immense interest in robot learning.\n\nPOLO combines three important and deep threads of work: MPC, approximate dynamic programming, and exploration. The primary contribution of this work is to connect the three threads as opposed to making a contribution to any one. We believe that combining these threads of work into a simple and elegant algorithm that produces impressive results is important and valuable. We emphasize that all reviewers found the motivation and presentation of the algorithmic framework to be elegant. Furthermore, combining MPC and uncertainty quantification to do efficient and targeted exploration has not been explored in the past in continuous control.\n\n=======================\nReg. alternate approach\n=======================\nYou are indeed correct that the core question is about action selection with bounded resources at run-time. In this setting, using any RL/DP algorithm on 7 cores, it is natural to focus the search process around the current state of interest due to limited resources. Thus, the suggested approach reduces to MPC: 7 cores perform local rollouts, which are then combined by the final core in some way, either via non-parametric blending with exponentiated costs (MPPI), a fitted form of iLQG, or some alternative.
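For instance, the MPPI-style blending of the rollouts would look roughly like this (an illustrative sketch with assumed array shapes, not a quote from any implementation):

    import numpy as np

    def mppi_blend(action_seqs, returns, lam=1.0):
        # Non-parametric blending: average the sampled action sequences with
        # weights proportional to exp(return / lam).
        # action_seqs: (n_rollouts, horizon, action_dim); returns: (n_rollouts,)
        w = np.exp((returns - returns.max()) / lam)
        w /= w.sum()
        return np.tensordot(w, action_seqs, axes=1)  # blended (horizon, action_dim) plan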
We show in our results that POLO outperforms trajectory-centric RL, which is synonymous with MPC.\n\n=======================\nAdditional comments\n=======================\nWe will include additional discussion about the following suggested components in the final version: (a) trajectory optimization vs. MPC; (b) H-step Bellman backups; (c) error bars for the plots.\nWe agree that trajectory optimization has broader connotations than MPC. In this work, we used it in the context of real-time trajectory optimization, which is synonymous with MPC. We will clarify the distinctions in the paper.\nWe also emphasize that Lemma 2 is not a primary contribution of the paper; it primarily serves as a motivation for the algorithm we develop. We agree that prior work should contain Lemma 2, since it is fairly elementary, and we will include additional citations if we find the appropriate sources.", "This paper was a joy to read. The description and motivation of the POLO framework was clear, smart, and sensible. The fundamental idea is to explore the interplay between value-function estimation and model-predictive control and demonstrate how they benefit one another. None of these ideas is fundamentally new, but the descriptions and their combination are very nice.\n\nAs I finished the paper, though, I was left with a lingering lack of understanding of the exact problem setting that is being addressed. The name is cute but didn't help clarify. As I understand it:\n- we have a correct dynamics model (I'm assuming that's what "nominal dynamics model" means) and a good trajectory optimization algorithm\n- the agent has limited online cognitive capacity\n- there is no opportunity for offline computation\nIf offline computation time were available, then we could run this algorithm (or your favorite other RL algorithm) in the agent's head before taking any actions in the actual world. That does not seem to be the setting here, although it does seem to me that you might be able to show that POLO is a good algorithm for finding a value function, offline, with no actual interaction with the world.\n\nSo, fundamentally, this paper is about action under computational time constraints. One strategy would be for the robot to use 7 of its cores to run your favorite approximate DP / RL algorithm in parallel with 1 core that's used for action selection. Why is that worse than your algorithm 1?\n\nSetting this question aside, I had some other comments:\n- It is better *not* to use "trajectory optimization" and "model-predictive control" interchangeably. I can use traj opt in other circumstances (e.g., with open-loop trajectory following) and could use other planners for MPC.\n- Some version of lemma 2 probably (almost certainly) already exists somewhere in the literature; I'm sorry, though, that I can't point you to a concrete reference.\n- The argument about MPC letting us approximate H Bellman backups is plausible, but seems somewhat subtle; it would be good to elaborate it in some more detail.\n- The set of assertions and experiments is very nice.\n- Why are no variances shown in figure 3? Why does performance seem to degrade after a certain horizon?\n\nThis paper doesn't seem really to be about learning representations. I don't know if that's important to the ICLR decision-making.", "This paper proposes to combine fitted value iteration with model predictive control (MPC) to speed up the learning process. The value iteration is the "Learn offline" subsystem while MPC is the "Plan online" subsystem.
In addition, this paper also proposes an exploration technique that increases exploration when the multiple value function estimators disagree. The evaluation is complete and shows nice results.\n\nHowever, I did not rank this paper high for two reasons. First, it is not clear to me how the model is acquired in MPC. Does the method learn the model? Does the method linearize the dynamics and assume a linear model? I am not sure. I suspect that the method just uses the simulator as the model. If that is the case, the method is not so useful, because for complex systems, such as humanoids, we do not know the model. And the comparisons with model-free learning algorithms are not fair because the paper assumes that the model is given. If this is not the case, I suggest that a more detailed description of MPC should be presented in Section 2.3.\n\nSecond, the technical contributions are lean. The three main components, 1) fitted value iteration, 2) MPC and 3) exploration based on multiple value function estimates, are not novel. The combination of them seems straightforward. For example, the H-step Bellman update (Section 2.3) is a blend of the Monte Carlo method and Q-learning. It seems to be similar to the TD(\lambda) method. Thus, it is not surprising that it can accelerate convergence of the value function.\n\nFor the above reasons, I would not recommend accepting this paper at this time." ]
[ -1, 5, -1, -1, -1, -1, -1, -1, 6, 4 ]
[ -1, 5, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2019_Byey7n05FQ", "iclr_2019_Byey7n05FQ", "iclr_2019_Byey7n05FQ", "iclr_2019_Byey7n05FQ", "BJgIdkoD2Q", "Byen-JFsa7", "SJxZOqMtnQ", "rJewsWF937", "iclr_2019_Byey7n05FQ", "iclr_2019_Byey7n05FQ" ]
iclr_2019_Byf5-30qFX
DHER: Hindsight Experience Replay for Dynamic Goals
Dealing with sparse rewards is one of the most important challenges in reinforcement learning (RL), especially when a goal is dynamic (e.g., to grasp a moving object). Hindsight experience replay (HER) has been shown an effective solution to handling sparse rewards with fixed goals. However, it does not account for dynamic goals in its vanilla form and, as a result, even degrades the performance of existing off-policy RL algorithms when the goal is changing over time. In this paper, we present Dynamic Hindsight Experience Replay (DHER), a novel approach for tasks with dynamic goals in the presence of sparse rewards. DHER automatically assembles successful experiences from two relevant failures and can be used to enhance an arbitrary off-policy RL algorithm when the tasks' goals are dynamic. We evaluate DHER on tasks of robotic manipulation and moving object tracking, and transfer the policies from simulation to physical robots. Extensive comparison and ablation studies demonstrate the superiority of our approach, showing that DHER is a crucial ingredient to enable RL to solve tasks with dynamic goals in manipulation and grid world domains.
accepted-poster-papers
This work proposes a method for extending hindsight experience replay to the setting where the goal is not fixed, but dynamic or moving. It proceeds by amending failed episodes by searching replay memory for compatible trajectories from which to construct a trajectory that can be productively learned from. Reviewers were generally positive on the novelty and importance of the contribution. While noting its limitations, it was still felt that the key ideas could be useful and influential. The tasks considered are modifications of OpenAI robotics environments, adapted to the dynamic goal setting, as well as a 2D planar "snake" game. There were concerns about the strength of the baselines employed, but reviewers seemed happy with the state of these post-revision. There were also concerns regarding clarity of presentation, particularly from AnonReviewer2, but significant progress was made on this front following discussions and revision. Despite remaining concerns over clarity, I am convinced that this is an interesting problem setting worth studying and that the proposed method makes significant progress. The method has limitations with respect to the sorts of environments where we can reasonably expect it to work (where other aspects of the environment are relatively stable both within and across episodes), but there is lots of work in the literature, particularly where robotics is concerned, that focuses on exactly these kinds of environments. This submission is therefore highly relevant to current practice and, by reviewers' accounts, generally well-executed in its post-revision form. I therefore recommend acceptance.
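To make the assembly step described above concrete, here is a schematic sketch of the search over failed episodes (purely illustrative: the episode dictionaries and the goals_match predicate are assumed structures introduced here, and the exact time alignment may differ from the paper's algorithm):

    def assemble_from_two_failures(ep_i, ep_j, goals_match):
        # Find a step p in failed episode i and a step q in failed episode j
        # where the goal ACHIEVED in i matches the goal DESIRED in j; relabel
        # i's transitions with j's desired-goal trajectory so the assembled
        # episode ends in success and can be replayed with a positive reward.
        for p, g_ach in enumerate(ep_i["achieved_goals"]):
            for q, g_des in enumerate(ep_j["desired_goals"]):
                if p >= q and goals_match(g_ach, g_des):
                    return {
                        # keep the tail of i's transitions so the match lands
                        # at the final step of the assembled episode
                        "transitions": ep_i["transitions"][p - q : p + 1],
                        "desired_goals": ep_j["desired_goals"][: q + 1],
                    }
        return None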
train
[ "r1e8PbJV0X", "B1lgRLesoX", "H1xtaCNcnX", "Sygu7-xF0Q", "rJlLKVtS07", "H1lI5-_BAm", "S1lN5tUBCQ", "SJekV4-SC7", "HJxjXfLXRQ", "B1eDAUxO6m", "ByxsM6u76X", "BJxeUuD76X", "BklxR_v7p7", "BkljCMD7a7", "r1eb4ZDQ67", "rJgHzRb5hX" ]
[ "author", "official_reviewer", "official_reviewer", "public", "author", "official_reviewer", "author", "official_reviewer", "public", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "Thanks for your interest. The paper belongs to relevant topics in ICLR, for example, reinforcement learning or applications in robotics, or any other field. Please see https://iclr.cc/Conferences/2019/CallForPapers . \nTake HER (Andrychowicz et al., 2017) as an example, it was published in NIPS 2017. \n\nFor DHER, the time complexity of search process is O(1). In our implementation, we use two hash tables to store the trajectories of achieved goals and desired goals, respectively.", "In this paper, the authors extend the HER framework to deal with dynamical goals, i.e. goals that change over time.\nIn order to do so, they first need to learn a model of the dynamics of the goal, and then to select in the replay buffer experience reaching the expected value of the goal at the expected time. Empirical results are based on three (or four, see the appendix) experiments with a Mujoco UR10 simulated environment, and one experiment is successfully transfered to a real robot.\n\nOverall, the addressed problem is relevant (the question being how can you efficiently replay experience when the goal is dynamical?), the idea is original and the approach looks sound, but seems to suffer from a fundamental flaw (see below).\n\nDespite some merits, the paper mainly suffers from the fact that the implementation of the approach described above is not explained clearly at all.\nAmong other things, after reading the paper twice, it is still unclear to me:\n- how the agent learns of the goal motion (what substrate for such learning, what architecture, how many repetitions of the goal trajectory, how accurate is the learned model...)\n- how the output of this model is taken as input to infer the desired values of the goal in the future: shall the agent address the goal at the next time step or later in time, how does it search in practice in its replay buffer, etc.\n\nThese unclarities are partly due to unsufficient structuring of the \"methodology\" section of the paper, but also to unsufficient mastery of scientific english. At many points it is not easy to get what the authors mean, and the paper would definitely benefit from the help of an experienced scientific writer.\n\nNote that Figure 1 helps getting the overall idea, but another Figure showing an architecture diagram with the main model variables would help further.\n\nIn Figures 3a and 5, we can see that performance decreases. The explanation of the authors just before 4.3.1 seem to imply that there is a fundamental flaw in the algorithm, as this may happen with any other experiment. This is an important weakness of the approach.\n\nTo me, Section 4.5 about transfer to a real robot does not bring much, as the authors did nothing specific to favor this transfer. They just tried and it happens that it works, but I would like to see a discussion why it works, or that the authors show me with an ablation study that if they change something in their approach, it does not work any more.\n\nIn Section 4.6, the fact that DHER can outperform HER+ is weird: how can a learn model do better that a model given by hand, unless that given model is wrong? 
This needs further investigation and discussion.\n\nIn more detail, a few further remarks:\n\nIn related work, twice: you should not replace an accurate enumeration of papers with "and so on".\n\np3: In contrary, => By contrast, \n\nwhich is the same to => same as\n\ncompare the above with the static goals => please rephrase\n\nIn Algorithm 1, line 26: this is not the algorithm A that you optimize, this is its critic network.\n\nline 15: you search for a trajectory that matches the desired goal. Do you take the first that matches? Do you take all that match, and select the "best" one? If yes, what is the criterion for being the best?\n\np5: we find such two failed => two such failed\n\nthat borrows from the Ej => please rephrase\n\nwe assign certain rules to the goals so that they accordingly move => very unclear. What rules? Specified how? Please give a formal description.\n\nFor defining the reward, you use s_{t+1} and g_{t+1}, why not s_t and g_t?\n\np6: the same cell as the food at a certain time step. Which time step? How do you choose?\n\nThe caption of Fig. 6 needs to be improved to be contrasted with Fig. 7.\n\np8: the performance of DQN and DHER is closed => close?\n\nDHER quickly acheive(s)\n\nBecause the law...environment. => This is not a sentence.\n\nMentioning in the appendix a further experiment (dy-sliding) which is not described in the paper is of little use.\n", "The authors propose an extension of hindsight replay to settings where the goal is moving. This consists in taking a failed episode and constructing a valid moving goal by searching prior experiences for a compatible goal trajectory. Results are shown on simulated robotic grasping tasks and a toy task introduced by the authors. Authors show improved results compared to other baselines. The authors also show a demonstration of transferring their policies to the real world.\n\nThe algorithm appears very specific and not applicable to all cases with dynamic goals. It would be good if the authors discussed when it can and cannot be applied. My understanding is it would be hard to apply this when the environment changes across episodes as there need to be matching trajectories. It would also be hard to apply this for the same reason if there are dynamics changing the environment (besides the goal). If the goal was following more complex dynamics like teleporting from one place to another, it seems it would again be rather hard to adapt this. I am also wondering if for most practical cases one could construct a heuristic for making the goal trajectory a valid one (not necessarily relying on knowing exact dynamics) thus avoiding the matching step.\n\nThe literature review and the baselines do not appear to consider any other methods designed for dynamic goals. The paper seems to approach the dynamic goal problem as if it was a fresh problem. It would be good to have a better overview of this field and baselines that address this problem as it has certainly been studied in robotics, computer vision, and reinforcement learning. I find this paper hard to assess without a more appropriate context for this problem besides a recently proposed technique for sparse rewards that the authors might want to adapt to it. I find it difficult to believe that nobody has studied solutions to this problem and solutions specific to that don’t exist.\n\nThe writing is a bit repetitive at times and I do believe the algorithm can be more tersely summarized earlier in the paper. 
It’s difficult to get the full idea from the Algorithm block.\n\nOverall, I think the paper is borderline. There are several interesting ideas and a new dataset introduced, but I would like to be more convinced that the problems tackled are indeed as hard as the authors claim and to have a better literature review.\n", "As long as there are some patterns in the desired goal's motion, the proposed method should work. It means that it doesn't have to follow the same trajectory. Of course, when the pattern is simple, it becomes easy to learn.", "Thanks for your reply. We do not directly predict or model the future of desired goals. Given a lot of the past trajectories of desired goals, corresponding actions, and rewards (which can be seen as scores for taken actions), the RL agent will know which action is good and can lead to the desired goals in the future. \nThe RL algorithm constructs a policy (normally a neural network) to determine how to choose an action based on the current state. Therefore, we can conclude that the prediction of desired goals is automatically embedded in the learned RL policy. It is achieved from the process of maximizing the rewards by the RL agent.", "Thank you for the clarification above, it helps a lot. Now, to catch a moving object, you have to predict one of its future positions to meet it on time. You are not learning a model of the dynamics. So how do you predict this future position? Is it based on the idea that object trajectories are repeatable, i.e. the same object will perform the same trajectory many times?\n\nPlease forgive this naive way of putting questions, but it may help many readers beyond me.", "Thanks for your reply. We follow the goal setting of UVFA (Schaul et al., 2015a) and HER (Andrychowicz et al., 2017). In the example of a moving object that the agent must reach, the desired goals correspond to the positions of the moving object. The achieved goals correspond to the positions of the gripper, which is connected to a robotic arm. \nThe moving object (desired goals) is a dynamic goal. The gripper (achieved goals) is controlled by the RL agent and tries to reach the moving object.\n\nWe updated Figure 1 and Section 3.2 to add more details of the desired and achieved goals. In the paper, in Figure 1, we use a green curve to indicate the trajectory of achieved goals and a red curve to indicate the trajectory of desired goals.\nIn Figure 2, for the first three tasks, the positions of the (blocked) black gripper indicate achieved goals. The red object indicates desired goals. For the fourth task, the green circle (a snake) indicates achieved goals. The red object (food) indicates desired goals.\n\nIn our real robotic system, there are a gripper connected to a robotic arm and a blue block. When running a task, the positions of the gripper indicate achieved goals and the positions of the moving blue block indicate desired goals.\n", "I'm really sorry, but after reading Section 3 several times, I'm afraid I still don't understand exactly what the authors are doing.\n\nLet us take the example of a moving object that the agent must reach. In this case, I'm assuming \"achieved goals\" correspond to positions of the object, known a posteriori. But what do \"desired goals\" stand for? 
Is the agent trying to predict the trajectory of the object?\n\nIt seems that the other reviewers have no problem with that, so if anybody could explain the setup to me, I would be delighted...", "I feel this paper is very well suited for a robotics conference rather than ICLR. I see valid concerns from reviewers, but these are general assumptions that need to be taken by robotics people to make things work. The only thing I am interested in is the overall time-complexity of the DHER process, which seems to be growing as the number of episodes increases", "Thanks for discussing the limitation of DHER. Similar to HER, we need to have the definition of goals and know the similarity metric between goals in order to construct “success” from failed experiences. We had provided how to use and define goals in Section 3.1 --- and we made additional revisions to make it more clear. See Sections 1 and 3.1 for the discussions. \n\nBecause we have the same multi-goal assumption as HER, we did not claim our method can be used for every case. However, it still can be applied to many domains if we know how to define the goals and if their trajectories intersect. \n\nFor a game, if its goals can be used as part of the observation and do not affect the environment dynamics, our algorithm will work. Regarding the Atari games, we did find that there is no game satisfying the multi-goal assumption. However, our approach can be potentially used for other games where we know the similarity of goals, for example, hunting for food in a Minecraft-like grid world mini-game. The Dy-Snake game in our work serves as a reference for which types of games our approach can benefit. \n\nThe algorithm is very natural for many manipulation tasks because we can access (sometimes noisy) object positions in manipulation. The starting point of this work is actually for manipulation controls. \n", "Thanks for your response and clarifications. I would like to comment on this point:\n\n\"1) We position the paper in the context of RL with sparse rewards. We follow the goal setting of UVFA (Schaul et al., 2015a) and HER (Andrychowicz et al., 2017). The dynamic goal problem is extended from this setting, not all other cases. Please see paragraph 3 in Section 1 (Introduction) and paragraph 1 in Section 3.1 (Dynamic goals) for more descriptions. \n2) We propose a new experience replay method. The proposed algorithm can be combined with any off-policy RL algorithms, similar to HER, as shown in Figure 1.\n3) The motivation of developing algorithms which can learn from unshaped reward signals is that it does not need domain-specific knowledge and is applicable in situations where we do not know what admissible behaviour may look like. The similar motivation is also mentioned in HER (Andrychowicz et al., 2017). We also added new experimental results about dense rewards. The results show DHER works better. See Figures 3 and 6.\n\nQ1: The algorithm appears very specific and not applicable to all cases with dynamic goals. …\nA1: Please see 1) and 3) above.\"\n\nI believe this kind of motivation as a principled approach to RL with sparse rewards and no domain knowledge is an overclaim. The HER algorithm is a heuristic one and to the best of my understanding requires domain-specific knowledge of how to set fake goals, which is natural in many settings such as grid worlds for example. The moving goal case described here requires even more domain-specific knowledge and I am not convinced it is truly “model-free” in most cases. 
To the best of my understanding the matching phase of your method requires a domain-specific understanding of goal similarity. Is it possible to provide a dynamic goal example that is not just a simple and short trajectory in space and makes sense to be applied with DHER? Could the authors for example explain how the algorithm would be applicable in a case of an Atari-style game where a goal would teleport or have long trajectories (non-trivial to match without a complex matching heuristic)? It seems in this case (a) one would have to obtain precise coordinate positions of the goal (this would mean one can’t just solve the problem based on pure pixels and must rely on domain knowledge) and (b) the matching algorithm itself would need to be heavily crafted with domain-specific knowledge. I think the method might be more specific than the authors claim and should be presented as such. \n", "Q4: an architecture diagram\nA4: We updated Figure 1.\n\nQ5: Figures 3a and 5 … performance decreases ...\nA5: One reason may be that it is a temporal drop and will recover later. Another reason may be that the policy trained with assembled experiences becomes overfitted to simple cases, as such experiences are assembled a lot. The overfitting to simple cases decreases overall performance. A similar pattern also appeared in other papers. See the Pushing task in Fig 2 in HER (Andrychowicz et al., 2017). \n\nQ6: To me, Section 4.5 about transfer to a real robot does not bring much … \nA6: The experiments of transferring to a real robot mainly demonstrate that dynamic goals are real-world problems and can be solved by our method. At the same time, it shows that when DHER uses positions, it is robust to the real-world environment.\n\nQ7: In Section 4.6, the fact that DHER can outperform HER+ is weird … \nA7: It is indeed a little surprising. It shows DHER is very efficient in some simple environments. In a simple environment, such as Dy-Snake, DHER has better generation than HER+. The reason may be that HER+ uses only one way to modify a trajectory. However, DHER has different ways to create success trajectories because we can find different matching positions given a trajectory from the past experiences. The Dy-Snake environment is so simple that DHER is able to create a lot of success experience in a short time.\n\nQ8: In more detail, a few further remarks ...\nA8: We polished the paper.\n\nQ9: in the appendix a further experiment (dy-sliding) … of little use…\nA9: We removed it. We added this before because our open source will contain this environment and our model also works on it successfully.\n\nQ10: In Algorithm 1, line 26: this is not the algorithm A that you optimize, this is its critic network.\nA10: Line 26 indicates a standard update for the RL algorithm A. It is similar to HER. Please see the last several lines of Algorithm 1 in HER (Andrychowicz et al., 2017).\nThe key process of DHER is from lines 13 to 23. We had added a marker at the end of Line 20. \n\nQ11: line 15: you search for a trajectory that matches the desired goal ...\nA11: We use a hash table to store trajectories. We search trajectories in the hash table and return the first that matches.\n\nQ12: we assign certain rules to the goals so that they accordingly move => very unclear...\nA12: The details are given in the next paragraph. See the second paragraph in Section 4.1. 
For different environments, the rules are slightly different.\n\nQ13: For defining the reward, you use s_{t+1} and g_{t+1}, why not s_t and g_t?\nA13: They have the same meaning and just correspond to different timesteps. At time step t, after taking an action, the state turns to s_{t+1} and the goal turns to g_{t+1}. Thus the reward is defined based on s_{t+1} and g_{t+1}. \nSimilarly, if the time step is t - 1 (t > 1), the reward is defined based on s_{t} and g_{t}.\n\nQ14: p6: the same cell as the food at a certain time step. Which time step? How do you choose?\nA14: It means if the snake moves to the same cell as the food at any timestep, the game is over. We only set the maximum timestep for each episode.\n", "We thank the reviewer for the comments and have revised the paper accordingly. We believe the reviewer has some misunderstandings about our work. We make the following clarifications. \n1) For the dynamics of goals, our algorithm does not need to learn the dynamics. The algorithm creates new experiences through combining two failed experiences whose goal trajectories overlap at some timestep. Please see paragraph 1 in Section 3.1 (Dynamic goals) for more descriptions.\n2) Our algorithm is about experience replay. The input is the past experiences. The output is new assembled success experiences if they exist. We updated Figure 1 to show how DHER works with an RL algorithm.\n3) Regarding the RL environments and the proposed algorithm and transfer solution, we would like to open all of them. All results can be reproduced. We believe the dynamic goal problem in manipulation control is also interesting for other researchers. \n\n\nQ1: In order to do so, they first need to learn a model of the dynamics of the goal, and then to select in the replay buffer experience reaching the expected value of the goal at the expected time.\nA1: Please see S1.\n\nQ2: how the agent learns of the goal motion ...\nA2: Generally speaking, reinforcement learning learns a policy through trial and error. The reinforcement learning agent interacts with an environment and obtains rewards to indicate whether its action is good or not. \nIn our setting, the goal’s motion is a part of the environment. This setting is quite normal in the real world. See our introduction and HER (Andrychowicz et al., 2017). When an RL algorithm takes an action, it will automatically and latently take the knowledge of the goal’s motion into consideration. \nHowever, under this setting, after interacting with the environment for a long time, we still face the problem that we do not have success signals to guide policy learning. The main difficulty then lies in how to efficiently use the past experiences in the replay buffer to construct the success signals, rather than to learn the motion of the goal. Our paper then provides a solution to this difficulty.\nThere are a lot of goal trajectories and they are different from each other. Taking Dy-Reach as an example, as shown in Figure 3(a), we followed openai gym’s training settings. There are 50 epochs in total and each epoch has 100 episodes, i.e., 100 trajectories. The performance (success rate) of the learned model DDPG+DHER can achieve 0.8. If the velocity of the goal is slower, the performance can achieve 1.0.\n\nQ3: how the output of this model is taken as input to infer the desired values of the goal in the future: ...\nA3: Our model is a kind of experience replay method. The input of our model is the past trajectories. Most of them are failed. 
The output of our model is assembled experiences. The assembled experiences are success experiences. We followed the goal setting of UVFA (Schaul et al., 2015a) and HER (Andrychowicz et al., 2017). The goals are represented by positions. The model searches the relevant experience according to the positions of goals. If the positions of two goals overlap (< tolerance 0.01) at any time, then they are matched.", "Thank you for your insightful comments and feedback! \n\nQ1: baselines … shaped rewards…\nA1: We added shaped reward baselines. We use a natural distance-related (dense) reward function to train the agent. Figures 3 and 6 in the paper show that the dense rewards do not work well for dynamic goals, though they help at the beginning of the learning.\n\nQ2: - It would be good to be more upfront about the limitations of the method …\nA2: We agree. In the revised paper, we provided more details about the limitations, including the goal assumption, the transfer requirements and so on. See Sections 1 and 4.5 for more details.\n\nQ3: It would be interesting to see quantitative results for the simulated experiments in section 4.5. \nA3: Thanks for your valuable suggestion. In Section 4.5, with the accurate positions, we have 100% success rate for 5 trials.\n\nQ4: The performance of DHER on Dy-Reaching seems to degrade in later stages of training (Figures 3a and 5). Do you know what is causing it? DQN or DHER?\nA4: One reason may be that it is a temporal drop and will recover later. Another reason may be that the policy trained with assembled experiences becomes overfitted to simple cases, as such experiences are assembled a lot. The overfitting to simple cases decreases overall performance. A similar pattern also appeared in other papers. See the Pushing task in Fig 2 in HER (Andrychowicz et al., 2017). \n", "We thank the reviewer for the comments, and we would like to clarify a few important misconceptions that the reviewer has regarding our work.\n1) We position the paper in the context of RL with sparse rewards. We follow the goal setting of UVFA (Schaul et al., 2015a) and HER (Andrychowicz et al., 2017). The dynamic goal problem is extended from this setting, not all other cases. Please see paragraph 3 in Section 1 (Introduction) and paragraph 1 in Section 3.1 (Dynamic goals) for more descriptions. \n2) We propose a new experience replay method. The proposed algorithm can be combined with any off-policy RL algorithms, similar to HER, as shown in Figure 1.\n3) The motivation of developing algorithms which can learn from unshaped reward signals is that it does not need domain-specific knowledge and is applicable in situations where we do not know what admissible behaviour may look like. The similar motivation is also mentioned in HER (Andrychowicz et al., 2017). We also added new experimental results about dense rewards. The results show DHER works better. See Figures 3 and 6.\n\nQ1: The algorithm appears very specific and not applicable to all cases with dynamic goals. …\nA1: Please see 1) and 3) above.\n\nQ2: I am also wondering if for most practical cases one could construct a heuristic for making the goal trajectory a valid one (not necessarily relying on knowing exact dynamics) thus avoiding the matching step.\nA2: It is a good idea to take domain heuristics into consideration. However, in our paper, we aim to construct a model-free method for dynamic goals to avoid the complexity of constructing goal trajectories. 
We agree that your idea is worth a try in the future.\n\nQ3: The literature review and the baselines do not appear to consider any other methods designed for dynamic goals. …\nA3: We do not want to claim that the dynamic goal problem is a fresh problem. However, there is little work addressing dynamic goals in the sparse reward setting. As far as we know, there are no open-source RL environments for such problems. (OpenAI Gym Robotics uses fixed goals.) \n\nQ4: I find it difficult to believe that nobody has studied solutions to this problem and solutions specific to that don’t exist.\nA4: Our paper focuses on addressing dynamic goals with sparse rewards. This setting has not been addressed probably because it is difficult to learn. For example, the recently developed DDPG and HER failed in our tasks. Moreover, there are no open-source environments for the dynamic goals and sparse rewards, to the best of our knowledge.\n\nQ5: There are several interesting ideas and a new dataset introduced, but I would like to be more convinced that the problems tackled are indeed as hard as the authors claim and to have a better literature review.\nA5: Except for sparse rewards, we also added new experimental results about dense rewards for the dynamic goal setting. We have similar results. Similar to DDPG and DDPG+HER, DDPG(dense) does not work well in our tasks. For the simple Dy-Snake environment, DQN(dense) is better than DQN but not better than DQN+DHER. See Figures 3 and 6.\n", "This paper proposes a way of extending Hindsight Experience Replay (HER) to dynamic or moving goals. The proposed method (DHER) constructs new successful trajectories from pairs of failed trajectories where the goal accomplished at some point in the first trajectory happens to match the desired goal in the second trajectory. The method is demonstrated to work well in several simulated environments and some qualitative sim2real transfer results to a real robot are also provided.\n\nThe paper is well written and is mostly easy to follow. I liked the idea of combining parts of two trajectories and to the best of my knowledge it is new. It is a simple idea that seems to work well in practice. While DHER has some limitations I think the key ideas will lead to interesting future work.\n\nThe main shortcoming of the paper is that it does not consider other relevant baselines. For example, since the position of the goal is known, why not use a shaped reward as opposed to a sparse reward? The HER paper showed that using sparse rewards with HER can work better than shaped rewards. These findings may or may not transfer to the dynamic goal case so including a shaped reward baseline would make the paper stronger.\n\nSome questions and suggestions on how to improve the paper:\n- It would be good to be more upfront about the limitations of the method. For example, the results on a real robot probably require accurate localization of the gripper and cup. Making this work for precise manipulation will probably require end-to-end training from vision where it’s not obvious DHER would apply.\n- It would be interesting to see quantitative results for the simulated experiments in section 4.5. \n- The performance of DHER on Dy-Reaching seems to degrade in later stages of training (Figures 3a and 5). Do you know what is causing it? DQN or DHER?\n\nOverall, I think this is a good paper." ]
[ -1, 6, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7 ]
[ -1, 3, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "HJxjXfLXRQ", "iclr_2019_Byf5-30qFX", "iclr_2019_Byf5-30qFX", "H1lI5-_BAm", "H1lI5-_BAm", "S1lN5tUBCQ", "SJekV4-SC7", "BJxeUuD76X", "iclr_2019_Byf5-30qFX", "ByxsM6u76X", "r1eb4ZDQ67", "B1lgRLesoX", "B1lgRLesoX", "rJgHzRb5hX", "H1xtaCNcnX", "iclr_2019_Byf5-30qFX" ]
iclr_2019_ByftGnR9KX
FlowQA: Grasping Flow in History for Conversational Machine Comprehension
Conversational machine comprehension requires a deep understanding of the conversation history. To enable traditional, single-turn models to encode the history comprehensively, we introduce Flow, a mechanism that can incorporate intermediate representations generated during the process of answering previous questions, through an alternating parallel processing structure. Compared to shallow approaches that concatenate previous questions/answers as input, Flow integrates the latent semantics of the conversation history more deeply. Our model, FlowQA, shows superior performance on two recently proposed conversational challenges (+7.2% F1 on CoQA and +4.0% on QuAC). The effectiveness of Flow also shows in other tasks. By reducing sequential instruction understanding to conversational machine comprehension, FlowQA outperforms the best models on all three domains in SCONE, with +1.8% to +4.4% improvement in accuracy.
accepted-poster-papers
Interesting and novel approach to modeling context (mainly external documents with information about the conversation content) for the conversational question answering task, demonstrating significant improvements on the newly released conversational QA datasets. The first version of the paper was weaker on motivation and lacked a clearer presentation of the approach as mentioned by the reviewers, but the paper was updated as explained in the responses to the reviewers. The ablation studies are useful in demonstrating the proposed FLOW approach. A question still remains after the reviews (this was not raised by the reviewers): How does the approach perform in comparison to the state of the art for the single question and answer tasks? If each question was asked in isolation, would it still be the best?
train
[ "rJx8PrR4CX", "SkglhHAEAm", "SJeOXH0VAm", "SkepZq6VR7", "SJx9ptTE07", "HJeLfagi3Q", "SyxpAA4c2Q", "SylB4hN5hQ" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This comment is moved to be below the main response.", "Re: Some questions on the experiments\n\n1) Computational efficiency compared to single-turn MC: Without our alternating parallel processing structure, training time will be multiplied by the number of QA pairs in a dialog. After implementing this mechanism, training FlowQA takes roughly 1.5x to 2x of the time training a single-turn model in each epoch.\n\n2) Ablation on question-specific context representation: The features mentioned (em, g) are attention vectors obtained from the question. This is the first attention on the question (there are two attentions on the question, see Figure 4). If c is ablated, we are expecting the model to select an answer span from the context without seeing the context. In this case, the model would not work at all. The F1 scores for CoQA/QuAC without exact match feature (em), and attended question embedding (g) are reported below.\n\nFlowQA: 76.0 / 64.6\nFlowQA (-em): 75.4 / 62.3\nFlowQA (-g): 75.5 / 64.5\n\n3) Improvements from encoding N answer spans: We are using the same setting for marking the previous N-answers as Choi et al. [1] and Yatskar et al. [2]. We provide a comparison below. The improvement was the biggest (7.2 F1) when marking no previous answer (0-Ans), as FlowQA incorporates history through using the intermediate representation while BiDAF++ had no access. The improvement is less pronounced but still significant (4.0 F1) when marking many previous answers. \n (FlowQA vs. BiDAF++)\n0-Ans: 59.0 vs. 51.8\n1-Ans: 64.2 vs. 59.9\n2-Ans: 64.6 vs. 60.6\nAll-Ans: 64.6 vs. N/A (3-Ans: 59.5)\n\n4) Applying FLOW to other tasks: The Flow mechanism is essentially performing a large RNN update on a big memory state, which contains O(Nd) hidden units, N is the length of the passage/context and d is the hidden size per words. Due to the enormous hidden unit size, the big memory state can store all the details of the full passage/context and to operate on this large memory state. Because of the design of the Flow mechanism, we can operate on this enormous memory state efficiently. We believe the Flow mechanism can be useful for problems that require a large amount of memory, beyond the conversational MC and sequential semantic parsing. However, further investigation is needed to verify this claim.\n\n[1] Choi et al. QuAC: Question Answering in Context.\n[2] Yatskar et al. A Qualitative Comparison of CoQA, SQuAD 2.0 and QuAC.", "Thank you for detailed suggestions and feedback. \n\nRe: Question about motivation and definition of Flow\nBased on your suggestions, we have made several changes to our paper.\n\nFirst, we expanded the beginning of the Flow concept (section 3.1) to make the motivation clearer. We added a new figure (Figure 2) that shows a real example where existing approaches failed to answer correctly. The figure illustrates the following: depending on the current topic of the conversation, the answer to the same question can differ significantly. We hence define the conversation flow to be a sequence of latent state in the dialog, where each latent state is what the conversation (up to this point) is about. Since the conversation is based on a passage, we consider each latent state to be a block of vector representations with the same number as the passage length (e.g., the representation may store which part of the context is being discussed right now). 
Hence our Flow mechanism is more like a latent movement over which parts of the passage are currently being discussed, and not an attention over the passage.\n\nSecond, to justify the motivation, we have added a visualization of the flow operation in Appendix A. Since the flow operation maintains a memory block (same size as the context length), we show where the memory update is most active (i.e., the hidden vector between each time step changes most significantly). We can see that the memory region corresponding to the current topics and events being discussed changes the most. This indicates that the model learns to use the flow operation to store information about the parts of the context currently being discussed. This makes it easier for the model to answer follow-up questions and hence leads to better performance.\n\nThird, we have added some analysis in Appendix B on dialogs where existing models failed but FlowQA succeeded. Most of them are ambiguous questions, i.e., with multiple valid answers to the question, only one of which corresponds to the current conversation topic. For example, in an article about Susan Boyle singing for the pope, the question “What will Boyle sing?” can have several answers depending on what the circumstances are (in the main event, she will sing “How Great Thou Art”, but at the ending of the event, she will sing “farewell song”). Existing methods sometimes get confused and answer incorrectly.\n\nRe: Clarity in section 3\nThank you for your suggestions for improving the clarity of the paper. Originally, we put all the details in Section 3 for completeness. We have now moved the parts from existing approaches to Appendix C. Computational efficiency is one of our main practical concerns since the naive implementation is really slow. Below are the experimental results on this issue.\n\nSpeedup over the naive implementation (in terms of time per epoch)\nCoQA: 8.1x\nQuAC: 4.2x\nThe prediction performance after each epoch is the same, so the time to complete the training is proportional to this speedup. Since this result is quite succinct, originally we only mentioned it in the main text. We have now added this result to the experiment section.\n\nFigures 2 and 3 visualize how the speedup shown above is achieved and how the Flow component is integrated into an existing single-turn model. \n\n[1] Choi et al. QuAC: Question Answering in Context.\n[2] Yatskar et al. A Qualitative Comparison of CoQA, SQuAD 2.0 and QuAC.", "Thank you for the helpful comments and clarification questions. We have added visualization for the behavior of the Flow mechanism (Appendix A) and analyzed questions where FlowQA answered correctly while previous approaches failed (Appendix B).\n\nRe: Combining the conversation history with documents\nThe best performing baselines in the QuAC [1] and CoQA [2] papers indeed combine conversation history by marking previous answer locations in the evidence documents and/or concatenating questions. Effectively, these baselines reduce this problem to a regular MRC task by incorporating the conversation history in documents and questions. These baselines’ performance is compared with that of our proposed model ( > 7.2% improvements on CoQA, and > 4.0% on QuAC).\n\nRe: Question about using partial history\nOur model incorporates the conversation history in two ways: (1) marking the previous answer locations in the evidence document as in prior baselines. (2) incorporating implicit representations generated to answer the most recent question. 
For the marking in the document (1), our ablation study in Table 3 shows the result for feeding in 0, 1, 2, and the full history. For incorporating implicit representations (2), our model only takes the intermediate representation generated for the most recent question (although the representation of the most recent question is based on its previous representation). The ablation study for explicit marking suggests questions often do not have a long-range dialogue dependency (most questions are related to only the preceding one or two questions).\n\n[1] Choi et al. QuAC: Question Answering in Context.\n[2] Reddy et al. CoQA: A conversational question answering challenge.", "Thank you so much for your review. \n\nThe ablation study on the reasoning layers can be found below (we count the number of context integration layers). The numbers below are the F1 scores for CoQA / QuAC, respectively. We found our original hyperparameter (4 layers) was the most effective one. \n\n# Integration layers = 3: 75.5 / 64.2\n# Integration layers = 4: 76.0 / 64.6 (original result)\n# Integration layers = 5: 75.3 / 64.1", "The paper presents a new model FlowQA for conversation reading comprehension. Compared with the previous work on single-turn reading comprehension, the idea in this paper differs primarily in that it alternates between the context integration and the question flow in parallel. The parallelism enables the model to be trained 5 to 10 times faster. Then this process is formulated as layers of a neural network that are further stacked multiple times. Besides, the unanswerable question is predicted with additional trainable parameters. Empirical studies confirm FlowQA works well on a bunch of datasets. For example, it achieves new state-of-the-art results on two QA datasets, i.e., CoQA and QuAC, and outperforms the best models on all domains in SCONE. Ablation studies also indicate the importance of the concept Flow.\n\nAlthough the idea in the paper is straightforward (it is not difficult to derive the model based on the previous works), this work is by far the first that achieves nontrivial improvement over CoQA and QuAC. Hence I think it should be accepted.\n\nCan you conduct ablation studies on the number of Reasoning layers (Figure 3) in FlowQA? I am quite curious if a deeper/shallower model would help.", "The paper proposes a method to model the flow of context in multi-turn machine comprehension (MC) tasks. The proposed model achieves amazing improvements in the two recent conversational MC tasks as well as an instruction understanding task. I am very impressed by the improvements and the ablation test that actually shows the effectiveness of the FLOW mechanism they proposed.\n\nHowever, this paper has a lack of clarity (especially, Section 3) which makes it difficult to follow and easy to lose the major contribution points of the work. I summarized the weaknesses as follows:\n\n# lack of motivation and its validation\nThe paper should have more motivational questions at the beginning of why such flow information is necessary for the task. Authors already mentioned some of it in Figure 1 and here: “such as phrases and facts in the context, for answering the previous questions, and hence provide additional clues on what the current conversation is revolving around”. However, the improvement of absolute scores in the Experiment section didn’t provide anything related to the motivation they mentioned. 
Have you actually found real examples in the test set that are correctly predicted by the FLOW model but not by the baseline? Are they actually referring to the “phrases and facts in the context”, “additional clues on what the current conversation is revolving around”? Another simple test authors can try is to show the attention between the context in a flow and the question and see whether appropriate clues are actually activated given the question. \n\n# unclear definition of “flow”\nThe term “flow” is actually a little over-toned in my opinion. Initially, I thought that flow is a sequence of latent information in a dialog (i.e., question-answer) but it turns out to be a sequence of the context of the passage. The term “flow” is more likely a sequence of latent and hierarchical movement of the information in my opinion. What is your exact definition of “flow” here? Do you believe that the proposed architecture (i.e., RNN sequence of context) appropriately takes that into account? RNN sequence of the passage context actually means your attention over the passage given the question in turn, right? If yes, it shouldn’t be called a flow. \n\n# Lack of clarity in Section 3\nDifferent points of contributions are mixed together in Section 3 by themselves or with other techniques proposed by others. For example, the authors mention the computational efficiency of their alternating structure in Figure 2 compared to sequential implementation. However, none of the experiments validates its efficiency. If the computational efficiency is not your major point, Figures 2 and 3 are actually unnecessary but rather they should be briefly mentioned in the implementation details in the later section. Also, are Figures 2 and 3 really necessary? \n\nSections 3.1 and 3.3.1 are indeed very difficult to parse: This is mainly because authors like to introduce a new concept of “flow” but actually, it’s nothing more than a thread of a context in dialog turns. This makes the whole point very hyped up and over-toned like proposing a new “concept”. Also, the section introduces so many new terms (“context integration”, “Flow”, “integration layers”, “conversational flow”, “integration-flow”) without clear definition and example. The names themselves do not look intuitive to me, either. I highly recommend authors provide a high-level description of the “flow” mechanism at first and then describe why/how it works without any technical terms. If you can provide a single example where “flow” can help, it would be easier to follow.\n\n# Some questions on the experiment\nThe FLOW method seems to have much more computation than single-turn baselines (i.e., BiDAF). Any comparison on computational cost?\n\nIn Table 3, most of the improvements for QuAC come from the encoding N answer spans to the context embeddings (N-ans). Did you also compare with (Yatskar, 2018) with the same setting as N-ans? \n\nI would be curious to see for each context representation (c), which of the features (e.g., c, em, g) affects the improvement the most? Any ablation on this?\n\nThe major and most important contribution of the model is probably the RNN of the context representations and concatenation of the context and question at turn in Equation (4). For example, have you tested whether simple entity matching or coreference links over the question thread can help the task in some sense? \n\nLastly, for the model design, which part of the proposed method could be general enough to other tasks? 
Is the proposed method task-specific so only applicable to conversational MC tasks or restricted sequential semantic parsing tasks? \n", "In this paper, authors proposed a so-called FLOWQA for conversational question answering (CoQA). Comparing with machine reading comprehension (MRC), CoQA includes a conversation history. Thus, FLOWQA makes use of this property of CoQA and adds an additional encoder to handle this. It also includes one classifier to handle no-answerable questions.\n\nPros:\nThe idea is pretty straightforward, which makes use of the unique property of CoQA.\n\nResults are strong, e.g., +7.2 improvement over current state-of-the-art on the CoQA dataset. \n\nThe paper is well written.\n\nCons:\nIt lacks a detailed analysis of how the conversation history affects results and what types of questions the proposed model handles well.\n\nLimited novelty. The model is very similar to FusionNet (Huang et al, 2018) with an extra history encoder and a no-answerable classifier. \n\nQuestions:\nOne simple baseline is to treat this as an MRC task by combining the conversation history with documents. Do you have this result?\n\nThe model uses the full history. Have you tried partial history? What's the performance? \n" ]
[ -1, -1, -1, -1, -1, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, 5, 4, 4 ]
[ "SyxpAA4c2Q", "SJeOXH0VAm", "SyxpAA4c2Q", "SylB4hN5hQ", "HJeLfagi3Q", "iclr_2019_ByftGnR9KX", "iclr_2019_ByftGnR9KX", "iclr_2019_ByftGnR9KX" ]
iclr_2019_ByfyHh05tQ
Learning to Design RNA
Designing RNA molecules has garnered recent interest in medicine, synthetic biology, biotechnology and bioinformatics since many functional RNA molecules were shown to be involved in regulatory processes for transcription, epigenetics and translation. Since an RNA's function depends on its structural properties, the RNA Design problem is to find an RNA sequence which satisfies given structural constraints. Here, we propose a new algorithm for the RNA Design problem, dubbed LEARNA. LEARNA uses deep reinforcement learning to train a policy network to sequentially design an entire RNA sequence given a specified target structure. By meta-learning across 65000 different RNA Design tasks for one hour on 20 CPU cores, our extension Meta-LEARNA constructs an RNA Design policy that can be applied out of the box to solve novel RNA Design tasks. Methodologically, for what we believe to be the first time, we jointly optimize over a rich space of architectures for the policy network, the hyperparameters of the training procedure and the formulation of the decision process. Comprehensive empirical results on two widely-used RNA Design benchmarks, as well as a third one that we introduce, show that our approach achieves new state-of-the-art performance on the former while also being orders of magnitude faster in reaching the previous state-of-the-art performance. In an ablation study, we analyze the importance of our method's different components.
accepted-poster-papers
After a healthy discussion between reviewers and authors, the reviewers' consensus is to recommend acceptance to ICLR. The authors thoroughly addressed reviewer concerns, and all reviewers noted the quality of the paper, methodological innovations and SotA results.
train
[ "BklQ-Y9PkV", "rkeF2D9PJV", "Syl-dLf4yN", "rJxME7y-yE", "HJgr7SfxnX", "BkAgSNekN", "Bygfptj6A7", "BkxvU5i6Cm", "r1e-JqspC7", "H1eWdO9lTQ", "HylyHJLcCQ", "SJgLJQJ8Am", "ByguVuyUCQ", "B1ewpv1ICm", "BklaXw1LCm", "B1ecw8JLAX", "SkgXlBJLRX", "rkxet6Pa27" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "Thanks, we did not think of the interpretation \"Neural (Architecture Search)\". Now being very aware of the two different possible interpretations of NAS, we will be sure to use a wording that avoids the confusion. Thanks again!", "Thanks for these references. We already cited the first two in Section 5\n(Joint Architecture and Hyperparameter Search, top of page 6), but we now\nthink that a brief paragraph on architecture search and hyperparameter\noptimization in the related work section would be useful as well, where all\nthese references will be a natural fit. Thanks again for the helpful\nfeedback and for increasing your rating!", "Thanks for updating the text to avoid confusion.\n\nI suppose the interpretation is dependent on parentheses. :) \n(Neural Architecture) Search is [presumably] yours\nNeural (Architecture Search) is mine\n\nI would argue that the paper by Quoc Le's group does not call their method NAS. Rather they frame it as 'searching the NASNet space' with an evolutionary strategy.", "References hyperparameter optimization:\n* https://arxiv.org/abs/1808.05377\n* https://arxiv.org/abs/1611.01578\n* https://arxiv.org/pdf/1807.07663.pdf", "General comment\n==============\nThe authors used policy gradient optimization for generating RNA sequences that fold into a target secondary structure, reporting clear accuracy and runtime improvements over the previous state-of-the-art. The authors used BOHR for optimizing hyper-parameters and present a new dataset for evaluating RNA design methods. The paper is well motivated and mostly clearly written. However, the methodological contributions are limited and I have some important concerns about their evaluation. Overall, I feel it’s a good paper for an ICLR workshop or biological journal if the authors address the outstanding comments.\n\nMajor comments\n=============\n1. The methodological contributions are limited. The authors used existing approaches (policy gradient optimization and BOHR for hyperparameter optimization) but do not report new methods, e.g. for sequence modeling. Performing hyper-parameter optimization is in my eyes not novel, but common practice in the field. It would me more informative if the authors compared reinforcement learning to other approaches for (conditional) sequence generations, e.g. RNNs, autoregressive models, VAEs, or GANs, which have been previously reported for biological sequence generation (e.g. http://arxiv.org/abs/1804.01694).\n\n2. Did the authors split all three datasets (Eterna, Rfam-Taneda, Rfam-learn-test) into train, eval, and test set, trained their method on the training set, optimized hyper-parameters on the eval set, and measured generalization and runtime on the test set? This is not described clearly enough in section 5. I suggest to summarize the number of sequences for each dataset and split in a table.\n\n3. Did the authors also optimize the most important hyperparameters of RL-LS and other methods? Otherwise it is unclear if the performance gain is due to hyperparameter optimization or the method itself.\n\n4. The time measurement (x-axis figure 3) is unclear. Is it the time that methods were given to solve a particular target structure and does figure 3 show the average number of solved structures in the test for a the time shown on the x-axis? \n\n5. Were all methods compared on the same hardware (section 5; 20 cores; Broadwell E5-2630v4 2.2 GHz CPUs) and can they be parallelized over multiple CPU or GPU cores? This is essential for a fair runtime comparison.\n\n6. 
The term ‘run’ (“unreliable outcomes in single runs”, section 4) is unclear. Is it a single sample from the model (one rollout), a particular hyperparameter configuration, or training the model once for a single target structure? This must be clarified for understanding the evaluation.\n\n7. How does the accuracy and runtime of LEARNA scale depending on the sequence (structure) length?\n\n8. How sensitive is the model performance depending on the context size k for representing the current state? Did the authors try to encode the entire target structure with, e.g. recurrent models, instead of using a window centered on the current position?\n\n9. The authors should more clearly describe the local optimization step (section 3.1; reward). Were all nucleotides that differ mutated independently, or enumerated exhaustively? The latter would have a high runtime of O(3^d), where d is the number of nucleotides that differ. When do the authors start with the local optimization? \n\nMinor comments\n=============\n10. The authors should replace ‘450x’ faster in the abstract by ‘clearly’ faster since the evaluation does not show that LEARNA is 450x faster than all other methods.\n\n11. Does “At its most basic form” (introduction) mean that alternative RNA nucleotides exist? If so, this should be cited.\n\n12. The authors should more clearly motivate in the introduction why they created a new dataset.\n\n13. The authors should mention in section 2.1 that the dot-bracket notation is not the only notation for representing RNA structures (https://www.tbi.univie.ac.at/RNA/ViennaRNA/doc/html/rna_structure_notations.html).\n\n14. The authors should define the hamming distance (section 2.1). Do other distance metrics exist?\n\n15. For the Traveling Salesman Problem (section 2.2) should the reward be the *negative* tour length?\n\n16. The authors should more clearly describe the embedding layer (section 4). Are nucleotides one-hot encoded or represented as integers (0, 1 for ‘(‘ and ‘.’)?", "Thanks for your final answers and changes. I increased the rating of your paper to 8. Tl;DR;\n* methodological contributions existing but incremental\n* comprehensive evaluation and experiments\n* strong application paper overall", "# 3. Hyperparameter optimization\n\"Please highlight in the main text that hyperparameters were only optimized for LEARNA and that other methods might also benefit by rigorously optimizing both model as well as optimization hyperparameters.”\n\n---> Thanks, we will definitely highlight in the final version which methods were optimized on what data set, and that other methods could benefit from that as well. (The server does not allow us to upload a new version at this time.)\n\n\n# 7. How does the accuracy and runtime scale depending on the sequence (structure) length?\n\"Thanks for the additional runtime analysis. Does each dot correspond to one target structure in the test set? Why are you showing the ‘minimum solution time’? Does the runtime vary over multiple runs? If so, it is more fair to show the average run time.”\n\n---> Yes, indeed, every point corresponds to a single target structure in the test set. We decided to plot the minimum (1) to account for missing data due to sequences not being solved within the time limit by individual runs and (2) since the minimum is used as the benchmarking criterion throughout the related RNA Design literature. 
However, we have addressed the average performance in Figure 3 (performance over time) which (also) shows the average solution time and in Tables 6-8 which list the number of solved sequences for various numbers of runs. Our approach performs well in both of these regards.\n\n\n# 8. How sensitive is the model performance depending on the context size \\kappa for representing the current state?\n\"Given that sequences can be hundreds of nucleotides long, I agree that RNNs would be slow and sensitive to exploding/vanishing gradients. You can consider non-recurrent models such as dilated CNNs or transformers in the future.”\n\n---> Thanks, we will adjust the axis labels for the plots for the final version to be more consistent with the text. The sequence length is indeed challenging, and thank for your suggestion for future work, we’ll include these models in the ones we are planning to study next.\n\n\n# 9. Local improvement step\n\"Thanks for clarifying the local improvement step (LIS). Figure 9 indicates LIS clearly boost performance, which is an important finding. Can you highlight this in the main text? Are other methods also likely to benefit from a post-hoc LIS?”\n\n---> Thanks, we did mention the importance of this step in our ablation study in Section 6.2 where we discuss Figure 9, but will do so more explicitly in the final version. The way we view this step, it is a very limited local search applied when proposed sequences almost folds into the target structure; most of the other methods we compare against are local search methods that take a long time until they get close to the target structure; we would expect that applying this local improvement step would likely slow them down. We do, however, believe that this step could also benefit other generative models, such as MCTS-RNA, but we have not tried to incorporate it into MCTS-RNA; we will point out the possibility of doing so more explicitly in the final version.", "“I'm happy with the revisions the authors have made, as I find that they call out the novel contributions a bit more explicitly. Specifically I see some novel work in the area of simultaneous multi-task/meta-RL and black box optimization of the policy net architectures. I don't think calling this NAS is justified; calling it bayesopt or black box opt is fair. NAS uses a neural net to propose experiments over structured graphs of computation nodes. This work appears to be simpler hyperparameter optimization.”\n\n--> Thanks for the positive feedback, and for seeing some novel work in the area of simultaneous multi-task/meta-RL and black box optimization of the policy net architectures. We agree that much of the current work on NAS does indeed use neural nets to propose experiments over structured graphs of computation nodes, and to not be confusing we’ll reword. For completeness, we would like to mention, however, that not all NAS methods fall into that category; specifically, most current NAS papers use a cell search space of fixed dimensionality, and the method that has the best published performance (regularized evolution, by Quoc Le’s group at Google Brain [https://arxiv.org/abs/1802.01548], better than reinforcement learning by the same group and others) does *not* use a neural network but a simpler hyperparameter optimization method with a fixed dimensionality approach through genetic algorithms. 
But this is really not important for this paper and we will simply reword to the non-contentious term “joint optimization of architectural choices, state description hyperparameters, and RL algorithm hyperparameters”. \n\nThanks again for the positive reply and update!\n", "Thank you for appreciating our detailed rebuttal and our revised manuscript. We also thank you for again pointing out that our work is a strong application paper (as we mentioned in our top level comment, applications are specifically listed as relevant in the ICLR call for papers, including applications in computational biology and other fields).\n\n\n# 1a. Hyper-parameter optimization\n\"I still believe that defining parameters of the neural network architectures in addition to optimization parameters is not a strong methodological contribution. This is rather common practice in reinforcement learning although often not described in detail in manuscripts. Methods for optimizing both discrete and continuous hyper-parameter had been described before, including Spearmint or Hyperopt. That said, I still believe that the paper is a strong application paper!”\n\n---> We fully agree that hyperparameter optimization is an integral part of machine learning and reinforcement learning in general. For our application, it was the key to success, as we did not a priori know which architecture or state space size would work best. For this reason, we automatically searched a fairly flexible space that included pure RNNs, pure CNNs, and mixtures of these with an additional MLP. This level of parametrization is rarely laid out, so we do hope that you agree to at least some novelty in this regard. (Indeed, if you know of a reference that searched over a combination of RNNs and CNNs before we would be very grateful to know about it to not falsely claim novelty in this regard.) \n\nSpearmint and TPE are useful tools in general. We expect that for our 14-dimensional space with many integer choices TPE would work better than Spearmint, and BOHB is a more efficient multi-fidelity variant of TPE (also see the BOHB paper for large speedups over TPE and Spearmint: proceedings.mlr.press/v80/falkner18a/falkner18a.pdf); our modest contribution in this regard is to provide a case study for this existing tool.\n\n\n# 2. [Training/Validation/Test split of the data sets].\n\"Do I understand you correctly that you proposed a ‘standard’ training, evaluation, and test set for Rfam-Learn, which does does not exist for Eterna100 or Rfam-Taneda? This is useful if the split is well defined (e.g. if the distribution of certain sequence properties is equal in all three sets), but not a strong contribution. Is the dataset larger than existing datasets, more diverse, or does it include additional sequences? I suggest to more clearly define differences in either the main text or appendix and more clearly motivate why Rfam-Learn is superior to existing datasets.”\n\n---> Yes, indeed, we proposed such a standard training/evaluation/test split, and these do not exist for Eterna100 or Rfam-Taneda. As described in the added Appendix C, we selected a subset of the Rfam database v13.0 based on difficulty (measured by number of known solutions and time it took MCTS-RNA to solve them) and controlled the distribution of sequence lengths across splits.\n\nOur data sets consist of 65000, 100, and 100 target structures (for training, validation, and test, respectively), based on naturally occurring RNA sequences. 
In contrast, Rfam-Taneda and Eterna100 contain only 29 and 100 sequences respectively. While the former is also a subset of the Rfam database, the latter consists of handcrafted sequences only. We included both in our work as they serve as the default test sets in the community. Our data sets are a “curated” selection of a larger corpus of natural RNA sequences allowing more data-driven approaches to be applied. It is hard to compare RNA sequence datasets in terms of quantitative measures, but we tried to select an interesting collection that enables generalization across different RNA families. We hope this clarifies your questions regarding our new benchmark.", "I'm happy with the revisions the authors have made, as I find that they call out the novel contributions a bit more explicitly. Specifically I see some novel work in the area of simultaneous multi-task/meta-RL and black box optimization of the policy net architectures. I don't think calling this NAS is justified; calling it bayesopt or black box opt is fair. NAS uses a neural net to propose experiments over structured graphs of computation nodes. This work appears to be simpler hyperparameter optimization.\n\n====\n\nQuality:\nThe work is well done, and the experiments are reasonable/competitive, showcasing other recent work and outperforming. \n\nClarity:\nI thought the presentation was tolerable. I was a bit confused by Table 1 until the prose at the bottom of page 7 indicated that Table 1 is presenting percentages, not integer quantities. The local improvement step is not very clearly explained. Are all combos tried across all mismatched positions, or do we try each mismatched position independently holding the others to their predicted values? What value of \xi did you end up using? It seems like this is essential to getting good performance. It is completely unclear to me what the 'restart option' does.\n\nOriginality:\nUsing RL in this specific application setting seems relatively new (though also explored by RL-LS in https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6029810/). On the other hand, the approach used doesn't seem to be substantially different than anything else typically used for policy gradient RL. The meta-learning approach is interesting, though again not too different from multi-task approaches (though these are perhaps less common in RL than in general deep learning).\n\nSignificance:\nLikely to be of practical utility in the inverse design space, specifically therapeutics, CRISPR guide RNA design, etc. Interesting to ICLR as an application area but probably not much theory/methods interest.\n\n\nOn balance I lean slightly against accepting and think this is a better fit to either a workshop or a more domain-specific venue (MLHC http://mucmd.org/ for example).", "I appreciate that you clearly addressed all comments and revised your manuscript! I have only a few remaining comments.\n\n# 1a. Hyper-parameter optimization\nI still believe that defining parameters of the neural network architectures in addition to optimization parameters is not a strong methodological contribution. This is rather common practice in reinforcement learning although often not described in detail in manuscripts. Methods for optimizing both discrete and continuous hyper-parameters have been described before, including Spearmint or Hyperopt. That said, I still believe that the paper is a strong application paper!\n\n# 2. 
[Training/Validation/Test split of the data sets].\nDo I understand you correctly that you proposed a ‘standard’ training, evaluation, and test set for Rfam-Learn, which does not exist for Eterna100 or Rfam-Taneda? This is useful if the split is well defined (e.g. if the distribution of certain sequence properties is equal in all three sets), but not a strong contribution. Is the dataset larger than existing datasets, more diverse, or does it include additional sequences? I suggest more clearly defining the differences in either the main text or appendix and more clearly motivating why Rfam-Learn is superior to existing datasets.\n\n# 3. Hyperparameter optimization\nPlease highlight in the main text that hyperparameters were only optimized for LEARNA and that other methods might also benefit by rigorously optimizing both model as well as optimization hyperparameters.\n\n# 7. How does the accuracy and runtime scale depending on the sequence (structure) length?\nThanks for the additional runtime analysis. Does each dot correspond to one target structure in the test set? Why are you showing the ‘minimum solution time’? Does the runtime vary over multiple runs? If so, it is fairer to show the average run time.\n\n# 8. How sensitive is the model performance depending on the context size \kappa for representing the current state? \nGiven that sequences can be hundreds of nucleotides long, I agree that RNNs would be slow and sensitive to exploding/vanishing gradients. You can consider non-recurrent models such as dilated CNNs or transformers in the future.\n\nThanks for clarifying that \kappa corresponds to the ‘state_radius’ in Appendix I. For consistency, I suggest changing the x-axis title to ‘state_radius \kappa’ or ‘\kappa’.\n\n# 9. Local improvement step\nThanks for clarifying the local improvement step (LIS). Figure 9 indicates LIS clearly boosts performance, which is an important finding. Can you highlight this in the main text? Are other methods also likely to benefit from a post-hoc LIS?", "We would like to thank all reviewers for their helpful comments! In response to them we performed additional analysis, updated the paper and now reply to all reviews at the same time in order to limit the overhead for the reviewers.\n\nSince these were comments several reviewers had, we would like to comment on (1) the aspect of being an application paper and (2) the novelty of our methods.\n\n(1) We are glad that several reviewers found the application we are tackling interesting. We would like to note that applications are specifically listed as relevant in the ICLR call for papers (https://iclr.cc/Conferences/2019/CallForPapers), including applications in computational biology and other fields. We believe that a strong application paper takes existing methods and applies them to an interesting and difficult problem of a certain significance. In the process, the formulation of the problem and technical details need to be adjusted to make it work. Additionally, a thorough evaluation comparing the method to other state-of-the-art approaches from the field and analyzing the importance of components (e.g. via ablation) is vital. We feel that we accomplished these in our work, and our reviews also indicate that the reviewers agree.\n\n(2) Having said that, we in fact also believe that our work is novel in many ways other than this application. 
While hyperparameter optimization is clearly standard in RL, to the best of our knowledge, our paper is the first case study on the joint optimization of the architecture of the policy network, the state representation, and the hyperparameters of an RL algorithm. In fact, we are not even aware of *any* other previous work on neural architecture search (NAS) for RL. Also, while there is of course a lot of work on NAS for CNNs and NAS for RNNs individually, we are not aware of any other previous NAS work that tackles a search space including both convolutions and recurrent units at the same time (i.e., with NAS choosing the best combination of the two). Finally, we are not aware of any previous work on NAS for meta-learning (other than learning a cell architecture and transferring that cell to a different dataset). We do believe that these are clear points in favor of our paper’s novelty, and we agree with the reviewers that we should have made these much clearer in the submitted version of our paper; we are thankful to the reviewers’ comments and have fixed this now.\n\nWe would also like to note that, e.g., the popular population-based-training (PBT) method for tuning RL hyperparameters, is limited to optimizing hyperparameters that can be adapted during the optimization trajectory, while our approach also handles the tuning of other choices, such as the neural architecture and the state representation. As such, our paper can be viewed as an important step towards “automated reinforcement learning”, applied to a real-world problem (which we also believe to be novel).\n\n\nWe made the following changes to the paper in response to the reviews:\n\n* Relating to (2) above, we clarified the novelty of our joint and automated architecture and hyperparameter search and added subsections to distinguish between the search space and the search procedure in the corresponding section.\n\n* We added a parameter importance analysis to Section 6 (experiments) which supports the importance of the joint optimization of the policy network’s architecture, the environment parameters and the training hyperparameters.\n\n* We explained our experimental protocol better, including more details on the used datasets from the literature and the dataset we compiled ourselves.\n\n* We split our previous background section into two distinct sections, one for explaining the RNA-Design problem and one for discussing related work.\n\n* We restructured the appendix, included plots that compare the performance of all approaches across different sequence lengths (Appendix J) and show the strong scaling of our approaches with sequence length, and added more analysis regarding our joint architecture and hyperparameter optimization.\n\n* We incorporated clarification and discussion where indicated by the reviewers. We detail these changes in our responses to the individual reviewers.", "Thanks for your helpful comments and questions. Thanks also for your positive feedback on our work in general, our experiments, the significance of our approach for therapeutics and other practical use cases and for characterizing our work as interesting to ICLR as an application area. We would like to comment on your suggestions, comments and questions in the following.\n\n\n“1. 
I was a bit confused by Table 1 until reading the prose at the bottom of page 7 indicated Table 1 is presenting percentages, not integer quantities.”\n\n--> Reviewing Table 1, we agree that it could be confusing -- its caption did mention that all entries represent percentages and not total values, but this was unnecessarily indirect, and we now reworked the tables to include a percentage symbol to make it clearer.\n\n\n“2. The local improvement step is not very clearly explained. Are all combos tried across all mismatched positions, or do we try each mismatched position independently holding the others to their predicted values? What value of \\xi did you end up using? It seems like this is essential to getting good performance.”\n\n--> Thanks, we agree that the local improvement step should be described more clearly and that it is an important part of our approach (as the empirical evidence in our ablation study suggests). We have since reworked the corresponding paragraph and included pseudocode (Appendix A). It works as follows: we exhaustively try all possible nucleotide assignments for the mismatched positions which takes at most 4^|differing_sites| additional folds. The value of \\xi we used was 5, i.e., we used the local improvement step if the number of differing sites was at most 4. This was set early on based on runtime considerations and preliminary experiments and was not part of our hyperparameter optimization; thank you for the detailed reading of our paper and pointing out this missing value, we have added it now.\n\n\n“3. It is completely unclear to me what the 'restart option' does.”\n\n--> Thanks for pointing out this missing information. Since RL algorithms are prone to getting stuck in local minima, we decided to employ occasional restarts (i.e., reinitialization) in our strategies. We now describe this in the revised version in Section 5. For LEARNA and for Meta-LEARNA-Adapt, this makes a difference, whereas for Meta-LEARNA it does not since Meta-LEARNA is directly sampling from the model without updating it (which is equivalent to restarting at each step)\n\n\n“4. Using RL in this specific application setting seems relatively new (though also explored by RL-LS in https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6029810/).” \n\n--> Thanks for this comment! Indeed, the reinforcement learning guided local search (RL-LS) was developed in parallel and independently from LEARNA (as mentioned in our discussion on RL-LS in Section 2.2 of our initial submission; now discussed in Section 3). However, the two approaches differ a lot: although both approaches employ RL to RNA Design, Eastman et al. follows the common approach of using a local search strategy for solving the RNA Design problem, while we try to tackle the problem with a generative model.\n\n\n“5. On the other hand, the approach used doesn't seem to be substantially different than anything else typically used for policy gradient RL. The meta-learning approach is interesting, though again not too different from multi-task approaches (though these are perhaps less common in RL than in general deep learning).”\n\n--> We agree that the policy gradient approach we use is standard, but that using meta-learning in this context is already less common. We would also like to repeat the point concerning novelty of our joint optimization we made in the general reply to all reviewers. 
We copied this here for convenience:\n\n<“To the best of our knowledge, our paper is the first case study on the joint optimization of the architecture of the policy network (including both recurrent connections and convolutions in a single search space), the state representation, and the hyperparameters of an RL algorithm. In fact, we are not even aware of *any* other previous work on neural architecture search (NAS) for RL. Also, while there is of course a lot of work on NAS for CNNs and NAS for RNNs individually, we are not aware of any other previous NAS work that tackles a search space including both convolutions and recurrent units at the same time (i.e., with NAS choosing the best combination of the two). Finally, we are not aware of any previous work on NAS for meta-learning (other than learning a cell architecture and transferring that cell to a different dataset). We do believe that these are clear points in favor of our paper’s novelty, and we should have made these clearer in the submitted version of our paper; we’ve fixed this now in Section 5 and in the introduction.”>\n\n\nThanks again for your comments! If we cleared up some of your concerns, we would kindly ask you to update your assessment.\n", "Thanks for the suggested improvements, the insightful comments and questions! Thanks also for the positive feedback on the text of the paper, references and motivation. In the following we provide detailed replies:\n\n\n“1. What is the star (*) superscript for? Was expecting the length of the RNA sequence instead.”\n\n--> Thank you for pointing out this undefined and potentially confusing use of notation. The Kleene Operator (*) applied to a set M yields a set of all finite-length sequences based on M, and we used it since RNA Structures have variable length. But we do agree that this can be confusing and made changes to talk about a specific structure w and then use N^|w| as you suggested. \n\n\n“2. Same on p4, when introducing the notation of your decision process $ D_w $, explicitly introduce all the ingredients.”\n\n--> We agree with you and revised the definition of the undiscounted decision process. We now explicitly name the components of the quadruple D_w and also refer to the specifics in the paragraphs following the definition of D_w.\n\n\n“3. in Equation (2) on p4, maybe clarify the notation with '.', '(' and ')' for example as the reader could really struggle.”\n\n--> We have looked at this again and changed the equation, making it easier to parse for the reader. We have also included a verbatim “dot” and “opening bracket” to not confuse the reader by the notation.\n\n\n“4. I didn't really understand the message in Section 4, not being an expert in the field. Could you clarify your contribution here?”\n\n--> Thanks for asking about this! As detailed in our general reply to all reviewers, this section breaks novel ground concerning the joint optimization of neural architectures and hyperparameters, joint search over combinations of recurrent and convolutional layers in the same search space, neural architecture search for RL, and neural architecture search for meta-learning. In the interest of brevity, we refer to the detailed reply to all reviewers above.\n\n\n“5. 
your 'Ablation study' in Section 5.2; does it correspond to true uncertainty/noise that could be observed in real data?”\n\n--> In our ablation study, we disable one functional component of our approach at a time in order to study its influence; incorporating ablations in empirically evaluated work is important to find out whether all proposed components are necessary and contribute to the final performance. Our ablation study is performed on the test split of our introduced dataset, which as we point out in the heading of Section 5 of our initial submission, has been generated from sequences observed in living organisms as listed in the Rfam 13.0 database; it is not used to optimize hyperparameters but is a post hoc evaluation.\n\n\n“6. why a new benchmark data set, when there exist good ones to compare your method to, e.g. in competitions like CASP for proteins?”\n\n--> We report our results on two widely used benchmarks which were also used in the work we compare to but unfortunately only provide test sets (no training/validation/test split). To the best of our knowledge, we introduce the first benchmark with an explicit training/validation/test split. The reviewer is right in that there exist other and good data sources, but to the best of our knowledge not in the form of competitions. To mention two databases by name:\n\n* the STRAND database (http://www.rnasoft.ca/strand/) that currently holds 4666 known RNA secondary structures\n* the FRABASE 2.0 database (https://bmcbioinformatics.biomedcentral.com/articles/10.1186/1471-2105-11-231) with 2753 entries of fragments of secondary structures\n\nBoth databases have not been used by the publications we compare to and cannot satisfy the size and sequence diversity requirements for our meta-learning approach and future research (especially for methods needing a large training set). The Rfam 13.0 database we use here for generating our new training-, validation- and test set is large enough to yield three distinct datasets of meaningful sizes and diversity.\n\n\n“7. do you make your implementation available?”\n\n--> Thanks for the question, indeed, we strongly believe in sharing code (as well as data) to reproduce scientific findings. To stand by this opinion, we had included a note in the conclusion of our initial submission that we will make all of our code and data available upon acceptance of our paper.\n\n\n“8. quite like the clarification of the relationship of your work to that of Eastman et al. 2018. Could you also include discussions to other papers, e.g. Chuai et al. 2018 Genome Biol and Shi et al. 2018 SentRNA on arXiv”\n\n--> Thanks for the positive feedback regarding our discussion of the relationship of our work to that of Eastman et al. 2018, and for bringing the related work to our attention. We included discussions in our related work section.\n\n\nThanks again for all your comments! If we cleared up some of your concerns, we would kindly ask you to update your assessment.", "Thanks for your positive feedback regarding our motivation and general writing, the characterization of our paper as a good application paper and for your comments, questions and helpful suggestions. In the following we reply to your comments and clarify some of the points:\n\n\n“1a. The methodological contributions are limited. Performing hyper-parameter optimization is in my eyes not novel, but common practice in the field.”\n\n--> We agree that hyperparameter optimization is clearly standard in RL, but our work goes much further than that. 
A joint optimization over neural architectures and hyperparameters, to the best of our knowledge, is novel in the field of RL (and is also not common in supervised learning). We would also like to repeat point (2) from our general reply to all reviewers concerning novelty, copied here for convenience:\n\n<“To the best of our knowledge, our paper is the first case study on the joint optimization of the architecture of the policy network (including both recurrent connections and convolutions in a single search space), the state representation, and the hyperparameters of an RL algorithm. In fact, we are not even aware of *any* other previous work on neural architecture search (NAS) for RL. Also, while there is of course a lot of work on NAS for CNNs and NAS for RNNs individually, we are not aware of any other previous NAS work that tackles a search space including both convolutions and recurrent units at the same time (i.e., with NAS choosing the best combination of the two). Finally, we are not aware of any previous work on NAS for meta-learning (other than learning a cell architecture and transferring that cell to a different dataset). We do believe that these are clear points in favor of our paper’s novelty, and we should have made these clearer in the submitted version of our paper; we’ve fixed this now in Section 5 and in the introduction.”>\n\n\n“1b. Related work; It would be more informative if the authors compared reinforcement learning to other approaches for (conditional) sequence generation, e.g. RNNs, autoregressive models, VAEs, or GANs, which have been previously reported for biological sequence generation (e.g. http://arxiv.org/abs/1804.01694).”\n\n--> Thanks for the helpful comment on the interesting work in the fields of protein design and biological sequence generation. In our revised related work section we did include a discussion of the general field of matter engineering and reference a very recent review on generative approaches for this field. We did not experiment with VAEs or GANs (with appendix, our paper is already 30 pages...) but consider that future work. However, concerning RNNs, as described in Section 5, these were in fact part of our design space and were selected by the joint optimization process for two out of three final configurations used in our experiments (see Table 4 in Appendix A of our initial submission; in the revised version this is Table 5 in Appendix E).\n\n\n“2. [Training/Validation/Test split of the data sets]” and \n“12. The authors should more clearly motivate in the introduction why they created a new dataset.”\n\n--> The benchmarks used in the recent RNA Design literature, Eterna100 (100 datapoints) and Rfam-Taneda (29 datapoints), do not have a train/validation/test split associated with them. (As ML researchers, we were surprised about this, too...) Hence, the need for a training and validation set of adequate size and diversity motivated us to introduce Rfam-Learn, which to the best of our knowledge is the first RNA Design benchmark with an explicit training/validation/test split.\n\nWe optimized each of our approaches using only our own validation set (Rfam-Learn-Validation) and for our meta-learning approach only used our own training set (Rfam-Learn-Train). 
To measure the final performance, as well as the transferability of the found architecture, hyperparameters, and the trained policy (Meta-LEARNA), the best configuration of each of our methods was then tested on Eterna100, Rfam-Taneda and Rfam-Learn-Test, and they achieve state-of-the-art results on all of them.\n\nWe incorporated changes to clarify the above points and we thank you for the suggestion to use a table to display benchmark information as it indeed conveys the information more clearly.", "“3. Hyperparameter optimization of other methods; Did the authors also optimize the most important hyperparameters of RL-LS and other methods? Otherwise it is unclear if the performance gain is due to hyperparameter optimization or the method itself.”\n\n--> We assess the performance of all methods on three test sets, where our method was trained and optimized using a single designated dataset for training and validation. The other methods we compare to either do not have clear/exposed hyperparameters (RNAinverse), were optimized by the original authors either also on a subset of the Rfam database (AntaRNA, and MCTS), or optimized on a non-disclosed dataset (RL-LS).\nAdditionally, the authors of RL-LS, state in their paper: ”A more rigorous hyperparameter search might improve our results somewhat, but would probably not dramatically change the model's performance.”.\n\nOur empirical evaluation focuses more on generalization rather than optimizing the hyperparameters to every dataset. That is why we optimized each of our approaches (LEARNA and Meta-LEARNA) using only our own validation set. For our meta-learning approaches (Meta-LEARNA, Meta-LEARNA-Adapt) the single best configuration was then evaluated on the three test sets without modification and still surpassed the state-of-the-art. Potentially, all methods could be improved by further optimization on each type of dataset, but this was not our focus.\n\n\n“4. The time measurement (x-axis figure 3) is unclear. Is it the time that methods were given to solve a particular target structure and does figure 3 show the average number of solved structures in the test for the time shown on the x-axis?” and\n“6. The term ‘run’ (“unreliable outcomes in single runs”, section 4) is unclear. Is it a single sample from the model (one rollout), a particular hyperparameter configuration, or training the model once for a single target structure? This must be clarified for understanding the evaluation.”\n\n--> Thanks, you are right to point out that these two points were unclear. We believe this was due to an inconsistent usage of the term “run”. In Section 4 of our initial submission (joint architecture and hyperparameter optimization) we referred with “run” to a full optimization of the policy and in Section 5 of our initial submission (experiments) we referred with “run” to an “evaluation run” which consists of evaluating a given method once on each target structure in the corresponding benchmark. An evaluation run can be visualized by plotting the number of solved target structures across the time spent on each particular target structure. Existing benchmarks for RNA Design consider a number of evaluation runs and use the total number of target structures that were solved in at least one of these evaluation runs as the objective. 
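To make the distinction concrete, here is a minimal sketch of how these aggregates can be computed from the raw evaluation outcomes — this is our own illustration of the protocol just described, not code from the paper, and all variable names are ours:

```python
import numpy as np

def aggregate_evaluation_runs(solved: np.ndarray, times: np.ndarray):
    """solved[r, t] is True iff evaluation run r solved target structure t
    within the time budget; times[r, t] is the corresponding solution time
    (np.inf when unsolved)."""
    # Benchmark objective: number of targets solved in at least one run.
    solved_in_any_run = int(solved.any(axis=0).sum())
    # Average performance: mean number of solved targets per run.
    avg_solved_per_run = float(solved.sum(axis=1).mean())
    # Per-target minimum solution time over runs; under the benchmark
    # objective, a target counts as solved at time T if this minimum is <= T.
    min_solution_time = times.min(axis=0)
    return solved_in_any_run, avg_solved_per_run, min_solution_time
```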
Hence, Figure 3 visualizes aggregates of all evaluation runs: On the left side of Figure 3 we plot the total number of target structures that were solved in at least one evaluation run across time spent on each particular target structure, and similarly, the right side of Figure 3 shows the average number of solved target structures. Thank you very much for pointing out this issue, we disambiguated the terms and worked on clarity.\n\n\n“5. Were all methods compared on the same hardware (section 5; 20 cores; Broadwell E5-2630v4 2.2 GHz CPUs) and can they be parallelized over multiple CPU or GPU cores? This is essential for a fair runtime comparison.”\n\n--> We agree that this is essential for a fair comparison and as we noted in the header of our experiments section in our initial submission all computations were done on the same listed CPU model. As mentioned in our initial submission, the training stage of Meta-LEARNA uses 20 cores (we use parallel PPO), but at validation/test time all methods were only allowed a single core (using core binding).\n\n\n“7. How does the accuracy and runtime scale depending on the sequence (structure) length?”\n\n--> Thank you for asking this important question. We have now included plots for solution times across sequence lengths (Appendix J), which clearly indicate that our approaches scale very well and are not affected a lot by increasing sequence length.", "“8. How sensitive is the model performance depending on the context size \\kappa for representing the current state? Did the authors try to encode the entire target structure with, e.g. recurrent models, instead of using a window centered on the current position?”\n\n--> Thanks for the suggestion. An RNN is already included in our search space, and was indeed selected by our joint architecture search and hyperparameter optimization. We have not yet experimented with encoding the entire target structure with an RNN, since having to backpropagate through that RNN at each time step of our agent would lead to a substantial increase of computational cost, be harder to train and increase the number of hyperparameters. Having said that, we do think this is a good idea if it can be made computationally efficient, e.g., by learning the embedding offline (although the training signal for that would need to be defined first); since this is not straightforward we leave it to future work.\n\nIn terms of the importance of the context size, our new hyperparameter importance in Appendix I indicates that the context size (state space radius) \\kappa does not appear to be very important.\n\n\n“9. The authors should more clearly describe the local optimization step (section 3.1; reward). Were all nucleotides that differ mutated independently, or enumerated exhaustively? The latter would have a high runtime of O(3^d), where d is the number of nucleotides that differ. When do the authors start with the local optimization?”\n\n--> We agree that the local improvement step should be described more clearly: we revised the reward paragraph and included pseudocode for computing the reward using the local improvement step (Appendix A). It works as follows: After the policy rollout we fold the candidate solution and compare it to the target structure, if less than \\xi sites differ we perform this local improvement step in order to compute the reward. 
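For concreteness, a minimal sketch of this exhaustive refinement — an illustration under our own naming, where `fold` stands in for the (Zuker-style) folding routine and the distance is the Hamming distance used throughout:

```python
from itertools import product

def hamming(a: str, b: str) -> int:
    return sum(x != y for x, y in zip(a, b))

def local_improvement(candidate: str, target: str, fold, xi: int = 5):
    # Sites where the folded candidate disagrees with the target structure.
    diff = [i for i, (s, t) in enumerate(zip(fold(candidate), target)) if s != t]
    best_seq, best_dist = candidate, len(diff)
    # Refine only when fewer than xi sites differ: at most 4**(xi - 1) folds.
    if 0 < len(diff) < xi:
        for assignment in product("ACGU", repeat=len(diff)):
            seq = list(candidate)
            for pos, nt in zip(diff, assignment):
                seq[pos] = nt
            seq = "".join(seq)
            dist = hamming(fold(seq), target)
            if dist < best_dist:
                best_seq, best_dist = seq, dist
    return best_seq, best_dist  # the reward is then a function of best_dist
```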
The value of \\xi is not part of the hyperparameter optimization and based on the runtime costs and preliminary experiments we set xi=5, i.e., we used the local improvement step if the number of differing sites was at most 4. Keeping this number low was indeed important because of the computational complexity mentioned by the reviewer (it’s actually O(4^d), with d<=4). \n\n\n“10. The authors should replace ‘450x’ faster in the abstract by ‘clearly’ faster since the evaluation does not show that LEARNA is 450x faster than all other methods.”\n\n--> Thank you for the comment, we changed the abstract to say that our approach achieves new state-of-the-art performance on all benchmarks while also being orders of magnitudes faster in reaching the previous state-of-the-art performance. We note that these speedups (including the 450x one on the Eterna100 benchmark, Figure 3 (top)) can clearly be seen in the evaluation plots.\n\n\n“11. Does “At its most basic form” (introduction) mean that alternative RNA nucleotides exist? If so, this should be cited.”\n\n--> Thanks for this question. With “At its most basic form” we refer to the most basic structural form of RNA, which is a sequence of nucleotides. We have since clarified the phrasing to “At its most basic structural form”.\n\n\n“13. The authors should mention in section 2.1 that the dot-bracket notation is not the only notation for representing RNA structures (https://www.tbi.univie.ac.at/RNA/ViennaRNA/doc/html/rna_structure_notations.html).” and\n“14a. The authors should define the hamming distance (section 2.1).”\n\n--> We have included references, thank you for your comments.\n\n\n“14b. Do other distance metrics [than the hamming distance] exist?”\n\n--> While not formally metrics, we have experimented with the paired-unpaired-ratio and derivatives of the hamming distance. While also not a metric, the GC-content (which is the ratio of G and C nucleotides to the U and A nucleotides) has been used in the RNA Design literature (e.g. by antaRNA) as an additional objective.\n\n\n“15. For the Traveling Salesman Problem (section 2.2) should the reward be the *negative* tour length?”\n\n--> You are of course right, thank you for reading our paper carefully and bringing this to our attention; we fixed it.\n\n\n“16. The authors should more clearly describe the embedding layer (section 4). Are nucleotides one-hot encoded or represented as integers (0, 1 for ‘(‘ and ‘.’)?”\n\n--> Thank you for this comment; we agree and have included a clearer description. For representing nucleotides, our automated reinforcement learning approach includes the choice between: 1) a binary encoding differentiating between paired and unpaired sites, and 2) a learned embedding layer whose dimension is a hyperparameter (only active if the learned embedding is selected). \n\n\nThanks again for all your comments! If we cleared up some of your concerns, we would kindly ask you to update your assessment.\n", "This work tackles the difficult RNA design problem, i.e. that of finding a RNA primary sequence that is going to fold into a secondary/tertiary structure able to perform a desired biological function. More specifically, it used Reinforcement Learning (RL) to find the best sequence that will fold into a target secondary structure, using the Zuker algorithm and designing a new primary sequence 'from scratch'. A new benchmark data set is also introduced in the paper along .\n\nQuestions/remarks:\n - I struggle with your notations as soon as section 2.1. 
What is the star (*) superscript for? I was expecting the length of the RNA sequence instead. Same on p4, when introducing the notation of your decision process $ D_w $, explicitly introduce all the ingredients.\n - in Equation (2) on p4, maybe clarify the notation with '.', '(' and ')' for example as the reader could really struggle.\n - I didn't really understand the message in Section 4, not being an expert in the field. Could you clarify your contribution here?\n - your 'Ablation study' in Section 5.2; does it correspond to true uncertainty/noise that could be observed in real data?\n - why a new benchmark data set, when there exist good ones to compare your method to, e.g. in competitions like CASP for proteins?\n - do you make your implementation available?\n - quite like the clarification of the relationship of your work to that of Eastman et al. 2018. Could you also include discussions of other papers, e.g. Chuai et al. 2018 Genome Biol and Shi et al. 2018 SentRNA on arXiv?\n\nAltogether the paper reads well, seems to have adequate references, motivates and proposes 3 variations of a new algorithm for a difficult learning problem. Not being an expert in the field, I just can't judge the novelty of the approach." ]
[ -1, -1, -1, -1, 8, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ -1, -1, -1, -1, 4, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 1 ]
[ "Syl-dLf4yN", "rJxME7y-yE", "BkxvU5i6Cm", "BkAgSNekN", "iclr_2019_ByfyHh05tQ", "Bygfptj6A7", "HylyHJLcCQ", "H1eWdO9lTQ", "HylyHJLcCQ", "iclr_2019_ByfyHh05tQ", "SkgXlBJLRX", "iclr_2019_ByfyHh05tQ", "H1eWdO9lTQ", "rkxet6Pa27", "HJgr7SfxnX", "HJgr7SfxnX", "HJgr7SfxnX", "iclr_2019_ByfyHh05tQ" ]
iclr_2019_Byg0DsCqYQ
Robust Conditional Generative Adversarial Networks
Conditional generative adversarial networks (cGAN) have led to large improvements in the task of conditional image generation, which lies at the heart of computer vision. The major focus so far has been on performance improvement, while there has been little effort in making cGAN more robust to noise. The regression (of the generator) might lead to arbitrarily large errors in the output, which makes cGAN unreliable for real-world applications. In this work, we introduce a novel conditional GAN model, called RoCGAN, which leverages structure in the target space of the model to address the issue. Our model augments the generator with an unsupervised pathway, which promotes the outputs of the generator to span the target manifold even in the presence of intense noise. We prove that RoCGAN shares similar theoretical properties with GANs and experimentally verify that our model outperforms existing state-of-the-art cGAN architectures by a large margin in a variety of domains including images from natural scenes and faces.
accepted-poster-papers
The proposed method suggests a way to do robust conditional image generation with GANs. The premise is to make the image-to-image translation model resilient to noise by leveraging structure in the output space, with an unsupervised "pathway". In general, the qualitative results seem reasonable on a number of datasets, including those suggested by reviewers. The method appears simple, novel and easy to try. The main concern seems to be that the idea is maybe too simple, but I'm not particularly bothered by that. The authors showed it working well on a variety of tasks (synthetic and natural), provide SSIM numbers that look compelling (despite SSIM's shortcomings) and otherwise give compelling arguments for the technical soundness of the approach. Thus, I recommend acceptance.
train
[ "Bkln1phBJV", "rJgrCJ8rkE", "B1x_dKfthX", "SygbgUr527", "HkehuKRZJ4", "S1gdUGDQAm", "HklhCxV2a7", "rJev0VNha7", "BJgoRH1vaQ", "SJxl7bVhpQ", "Hkl9noywp7", "HkewAY1Pam", "SyeeXquTh7" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "Dear reviewer,\n\nWe are thankful to the reviewer for their vote for acceptance after our revision.\n\nWe are happy to answer any further question, to address any potential remaining concern. We thank the reviewer for the feedback which has helped improve the quality of our manuscript.", "Dear reviewer,\n\nthanks once again for the effort to review our manuscript and reply to our revision. We strongly believe that the revisions have improved our manuscript.\n\nBased on the reply, we hold that we addressed the comments the reviewer raised. If not, it would be beneficial for us to understand any further concerns.\n\nThe main issues the reviewer raised were twofold: the comparisons in the experiments and the contribution of our work. For the former, we trained a method that quantifies the representation power of each network (i.e. upper bounds the compared methods). We addressed the novelty in the revised text explicitly (also elaborated in the general comment). Are there any additional arguments or experiments that would persuade the reviewer for the significance of robustness in dense regression tasks? \n\nIn addition to the aforementioned revisions, we have included additional experiments (imagenet) and a new metric (see general comment). Based on our revision, reviewers 2 and 3 posted a positive response and increased their ratings.\n\nWe would therefore ask the reviewer whether there are any additional concerns that we could address in our manuscript. This is significant, especially given that we have made our effort to conduct all the requested experiments and even extend our manuscript (e.g. devise bounds for our method). ", "Authors propose to augment a conditional GAN model with an unsupervised branch for spanning target manifold and show better performance than the conditional GAN in natural scene generation and face generation.\n\nHowever the novelty is limited and not well explained.\n1.Similar idea of using an autoencoder as another branch to help image generation has been proposed in Ma et al.’s work. \nLiqian Ma, Qianru Sun, Stamatios Georgoulis, Luc Van Gool, Bernt Schiele, Mario Fritz. Disentangled Person Image Generation, CVPR 2018.\n\n2. In the paper authors claim that skip connection makes it harder to train the longer path, which is kind of contradictory to what is commonly done in tasks of image classification, semantic segmentation and depth estimation. Can authors explain this claim?\nIn addition, it is not clear why maximizing the variance can address the challenge of training longer path.\n\n3. In Table 1, the improvement over baselines is small in case of sparse inpaint setting.\n\n4. In Figure 4, the fourth row is more blurry than the third row although with less artifacts like black dots.\n\n\n%%%%%%%% After rebuttal %%%%%%%%\n\nI appreciate authors' efforts to address my comments and am satisfied with their response. I will change decision from rejection to acceptance.", "General:\nIn general, this is a well-written paper. This work focuses on the robustness of conditional GAN(RoC-GAN) when facing the noise. The authors claim the generator of RoC-Gan will span the target manifold, even in the presence of large amounts of noise. The main contribution of the paper is to introduce a two-pathway model, where one of them is used to perform regression as ordinary GAN while the other one helps the whole model span the target domain.\n\nStrength:\n1. The idea is simple and straightforward. 
The authors provide necessary theoretical analysis and empirical validation for their model. \n2. The proposed method seems technically correct to me. i.e. Although I am not very sure how well it works in practice, the idea is fine.\n\nPossible Improvements:\n1. I agree adding another auto-encoder as a helper may give better generation results by spanning the whole target space, but I don't think this constraint is strong enough in practice. \n2. In section 3.3, the time complexity of computing 'L_deconv' seems extremely large. From the perspective of numerical optimization, optimizing such a matrix will cause trouble if the dimension of weight matrices are large. i.e. optimizing the high-dimensional covariance matrix seems a problem to me.\n3. The experiments looks good. The experiments could be more convincing if using more complex data sets(e.g. CIFAR10, ImageNet) besides CelebA. My concern for using such data sets(the resolution of images is low and the distribution is simple) is that: although the noise seems to corrupt most of the image, the distribution of the image is not complex, so the generative model can recover it easily. Since this is a more empirical paper, the experiments should be more convincing.\n\nConclusion:\nThe author(s) are thoughtful and they put lots of work on this paper. The proposed method is simple. For novelty and significance, I think the idea is not very fancy to me. I am not very convinced by the method proposed in the paper. Although the paper demonstrates the robustness of their model with different experiments, most of them were not performed on deep neural networks and complicated data sets. As a conclusion, I vote for weak rejection.\n\nMinor suggestion:\nIncrease the resolution of the figures.\n\n------------------------------- After Rebuttal ---------------------------------\nI am very satisfied with the authors' response, so I will change my vote from rejection to acceptance.", "Hello, \n\nthanks once again for the time and effort to review our manuscript. Given that the rebuttal period is closing soon, we would appreciate any additional feedback and re-assessment of our submission. \n\nThe main concerns of the reviewer were twofold: i) experimenting on a more established dataset and ii) the novelty. To address the former, we have included experiments on Imagenet. The same outcomes as with the rest of the experiments are observed, i.e. RoC-GAN model outperforms cGAN counterpart in all cases. Regarding the latter (i.e. novelty), we have included bullet-point contributions in the revised text. In addition, we have written a detailed report in the general comments above elaborating on the novelty. \n\nIn addition to those two improvements, we have also included several other experimental improvements. For instance, we include a new comparison method as upper bound of both networks and new evaluation metrics (please check the general comments for details). Those changes have made our manuscript even stronger; as the reviewer recognized we have made an effort to include several experiments already. \n\nWe believe that the comments of the reviewer have made the manuscript stronger, therefore we would welcome any additional feedback. Especially, if the reviewer has any new concerns that have not been addressed before. ", "Thanks for the response. The points raised in my previous review are responded accordingly. 
I stand with my previous assessment and decision on the paper.", "2b) 'In addition, it is not clear why maximizing the variance can address the challenge of training longer path.': \n\nReducing the correlations of the weights has several advantages that are well-studied in both computer vision and machine learning. Several methods for decorrelating the weights has been used in deep networks, for instance [6-10]. \n\nAn intuitive idea about why we have included this loss: By reducing the correlations we encourage our method to explore different 'principal' directions in different layers, which is beneficial for training the network. Similar observations and experiments for the benefit of exploring different directions during training have been explored in [4-6].\n\n\n3) 'Covariance is computed for decov loss but it is not clear which layer’s representation is used to compute covariance.':\n\nWe have actually included the requested information in the original submission (sec. 4, page 7 in the revised manuscript).\n\n\n4) 'In Table 1, the improvement over baselines is small in case of sparse inpainting setting.': \n\nWe appreciate the comment; we have performed a similar analysis in sec D.2 (appendix). In short: the improvement in the additional noise experiments (sparse inpainting task) is not marginal but quite significant (up to 15%). Furthermore, we believe the experiments that we have conducted cover several cases and the results are always consistent, i.e. RoC-GAN *always* improve the baseline, while in the regions of more extreme noise or adversarial perturbations the difference is substantial. \n\n\n5) 'In Figure 4, the fourth row is more blurry than the third row although with less artifacts like black dots.': \n\nWe argue that the black dots the reviewer mentions make the images unrealistic. Such irregularities can have detrimental effect for higher level tasks accepting those images as input. Nevertheless, to demonstrate with quantitative metrics the difference, we have added a new metric for the experiments of faces. The metric is focused on the similarity of the identities of the facial images. We have measured the distance between the methods' outputs and the target images and prepared the cumulative plot. Please find the complete metric analysis in section D.4 in the appendix. \n\n\n[4] Jia, Kui et al. \"Improving training of deep neural networks via singular value bounding\", CVPR 2017.\n[5] Miyato, Takeru et al. \"Spectral normalization for generative adversarial networks\", ICLR 2018. \n[6] Bansal, Nitin et al. \"Can We Gain More from Orthogonality Regularizations in Training Deep CNNs?\", Arxiv.\n[7] Cohen, Taco and Welling, Max \"Group equivariant convolutional networks\", ICML 2016.\n[8] Cogswell, Michael, et al. \"Reducing overfitting in deep networks by decorrelating representations.\", ICLR 2016.\n[9] Rodriguez, Pau et al. \"Regularizing cnns with locally constrained decorrelations\", Arxiv.\n[10] Ozay, Mete and Okatani, Takayuki \"Optimization on Submanifolds of Convolution Kernels in CNNs\", Arxiv.\n", "One of the crucial issues raised by the reviewers is the novelty of our approach. In this comment we address this general question and also outline the additional revisions (see also general response for the first part):\n\nWe argue that there is considerable novelty in our work; to our knowledge there has *not* been a study of robustness in dense regression. 
\n\nMachine learning is entering an era that is widely used in many diverse applications ranging from particle physics [1] to cyber-security [2] and from medical applications [3] to molecule generation [4]. In several applications and domains safety/robustness is critical, however the majority of the dense regression networks just report the best results ignoring the out of manifold or noise of real applications. Hence, we argue that robustness analysis should be introduced and performed in dense regression tasks. \n\nIn this context we restate our contributions: \n\n* We introduce RoC-GAN that performs conditional image generation by leveraging structure in the target space. Neither the model has emerged before, nor the context of our analysis.\n\n* We perform a robustness analysis through a series of experiments. We scrutinize the performance of the original cGAN and our model under different types of noise. We extend the adversarial perturbations in dense regression tasks. \n\n* We experimentally demonstrate how our method can be used with different architectures and tasks. We additionally show that RoC-GAN can be beneficial in semi-supervised learning task or how it performs with lateral connections from encoder to decoder.\n\nTo address the request of reviewer 2 for an alternative metric to SSIM, we compute a cumulative plot that measures the similarities of the identities in the face experiment. We employ the well-studied (and robust) recognition embeddings of FaceNet ([5]) to evaluate the similarity of the target image with the outputs of compared methods. For each pair of output and corresponding target image, we compute the cosine distance of their embeddings; the cumulative distribution of those distances is plotted. The plots illustrate that indeed RoC-GAN outputs are closer to the target images with respect to the identities. Please find the complete metric analysis in section D.4 in the appendix.\n\nWe consider that our work has become significantly stronger with the revised experimental results, thus we request the reviewers to reconsider their rating.\n\n\n[1] de Oliveira, Luke et al. \"Learning particle physics by example: location-aware generative adversarial networks for physics synthesis\", Computing and Software for Big Science 2017.\n[2] Ye, Guixin et al. \"Yet Another Text Captcha Solver: A Generative Adversarial Network Based Approach\", ACM SIGSAC Conference on Computer and Communications Security 2018.\n[3] Wei, Wen et al. \"Learning Myelin Content in Multiple Sclerosis from Multimodal MRI through Adversarial Training\", arxiv 2018.\n[4] De Cao, Nicola and Kipf, Thomas \"MolGAN: An implicit generative model for small molecular graphs\", arxiv 2018.\n[5] Schroff, Florian et al. \"Facenet: A unified embedding for face recognition and clustering\", CVPR 2015.", "We appreciate the constructive feedback of the reviewers. In response to their reviews, we have updated several sections in the paper:\n\n* We add an experiment on the ImageNet dataset, in both sparse inpainting and denoising (appendix, sec. D1).\n* We include a paragraph with the contributions in the introduction.\n* We train a method to demonstrate the maximum representational power of each network (this can be thought of as the upper bound of each experiment). Specifically, we train an adversarial autoencoder (in the target space) and utilize the reconstructed images for evaluation. 
\n* We include a visual example to illustrate how a projection in a linear subspace can promote the output to span the target subspace (appendix, sec. B). \n\nIn summary, we believe that the aforementioned results strengthen our claims and improve the paper.", "We appreciate the constructive feedback of the reviewer; in addition to the general comments on top that summarize our revisions, we answer to the reviewer's points below:\n\n1) 'The novelty is limited and not well explained.':\n\nEven though we might not have emphasized the novelty enough, we believe that our paper makes several contributions. We have pointed them out in the revised text (please consult the general comment above). \nThe essence of our contributions is studying robustness of cGAN in dense regression tasks. To the authors' knowledge this has not been studied at all for cGAN, i.e. a widely used framework for dense regression. If the reviewer has noticed it *anywhere*, we are happy to reconsider the positioning of our manuscript. \n\n\n1) 'Similar idea of using an autoencoder as another branch to help image generation has been proposed in Ma et al.’s work ([1])':\n\nWe disagree with the reviewer; the two works differ significantly in their use of AE:\n a) Ma et al. utilize an autoencoder (AE) with a different goal than RoC-GAN. They use three specialized AE to obtain the latent representations (embeddings) and not to leverage structure in the target (e.g. image) space. \n b) They devise a well-thought and heavily engineered pipeline for the task of person image generation (Fig. 2, 3 of their paper). Several of their modules, e.g. Region of interest boxes, are task-specific. Our goal is the extension of *any* cGAN to a more robust model. \n c) The AE in [1] is learned separately (and then is fixed), while the AE pathway in our work is not fixed, but *jointly* optimized with the regression pathway.\n d) Their loss functions are different. In particular, in [1] the authors include different losses in each step of their two-stage pipeline, while ours is a generic loss that can differ per task. \nIn that sense the two works are orthogonal by the use of AE. Despite the differences we consider that some ideas can be used to extend RoC-GAN, e.g. using a discriminator to match the latent representations (as done for the embeddings in [1]). We have added this as future work in the manuscript. \n\n\n2a) 'In the paper authors claim that skip connection makes it harder to train the longer path, which is kind of contradictory to what is commonly done in tasks of image classification, semantic segmentation and depth estimation. Can authors explain this claim?':\n\nTo the best of our knowledge, there has not been much study of how the longer path is optimized in cGAN setting. However, in the broader community of deep learning, several papers report the issue with the longer path training, please check [2-3]. As widely reported, the skip connections might help convergence, but they also enable the network to trivially copy the representations of previous layers and might shatter the meaningful representation learning in the longer path. \n\nA more intuitive explanation in our case: a trivial solution for the network would be to copy the representation to the decoder. The option of copying the meaningful representations through the shortcut is more 'attractive' in our case due to the latent loss Llat. 
If we do not regularize the longer path representations, the network has less incentive to learn meaningful representations in the longer path, which defeats the concept of including the latent loss. On the contrary, by decorrelating the weights we encourage both pathways, including the longer one, to learn meaningful representations.\n\n[1] Ma, Liqian, et al. \"Disentangled Person Image Generation\", CVPR 2018.\n[2] Rasmus, Antti, et al. \"Semi-supervised learning with ladder networks.\", NIPS 2015.\n[3] Zhang, Yuting, et al. \"Augmenting supervised neural networks with unsupervised objectives for large-scale image classification\", ICML 2016.\n", "We thank the reviewer for recognizing the effort and contribution of our method. In addition to the general answer above, we answer each of the improvement points below: \n\n1) 'I agree adding another auto-encoder as a helper may give better generation results by spanning the whole target space, but I don't think this constraint is strong enough in practice. ': \n\nWe respectfully disagree with the reviewer; we have demonstrated in a series of experiments how this modification is beneficial. We provide the intuition, the synthetic experiment, a linear analogy analysis, and several experiments. \nIn the revision we add a visual example for the linear subspace (appendix, sec. B). We demonstrate how one corrupted and one clean image can have similar reconstructions from a PCA model. \n\n\n2) 'In section 3.3, the time complexity of computing '$L_deconv$' seems extremely large. From the perspective of numerical optimization, optimizing such a matrix will cause trouble if the dimension of weight matrices are large. i.e. optimizing the high-dimensional covariance matrix seems a problem to me.':\n\nIn the implementation details (section 4; page 7), we mention that we use $L_{deconv}$ in the output of the encoders. Those layers include tensors of size batch × 1 × 1 × channels, where the number of channels is typically up to 1024. In our experiments, channels=512 for the 4-layer network and channels=768 for the 5- and 6-layer networks. Cogswell et al. ([1]) include an analysis for deeper networks. In practice, we have not noticed a significant computational burden, but this can be further explored in the future. \n\n\n3) 'The experiments look good. The experiments could be more convincing if using more complex data sets(e.g. CIFAR10, ImageNet) besides CelebA. My concern for using such data sets(the resolution of images is low and the distribution is simple) is that: although the noise seems to corrupt most of the image, the distribution of the image is not complex, so the generative model can recover it easily. Since this is a more empirical paper, the experiments should be more convincing.': \n\nWe have conducted the requested experiments on ImageNet (appendix, sec. D.1). The results are in line with those reported in the main paper -- particularly the natural-scenes case -- and confirm the advantages of our method. That is, RoC-GAN outperforms the cGAN in both denoising and sparse inpainting, while the difference increases when evaluated with additional noise.
We note that the same hyper-parameters (as in the rest of the paper) are used; additional tuning per experiment might be beneficial, however, to avoid confusion all the hyper-parameters remain the same.\n\n\n4) 'Although the paper demonstrates the robustness of their model with different experiments, most of them were not performed on deep neural networks and complicated data sets.': \n\nIn the revised version we have included an experiment on ImageNet (appendix, sec. D.1). Our model is not architecture dependent. Specifically, RoC-GAN can be seen as a meta algorithm which can be used to augment any existing cGAN model to achieve additional robustness. We have made our best effort to demonstrate that with i) similar types of noise, ii) types of noise 'unseen' during training, iii) adversarial perturbations. \n\n5) 'Minor suggestion: Increase the resolution of the figures.':\n\nWe will add new figures reflecting the added AAE method and improve older ones over the next few days. We appreciate the proposal.\n\n[1] Cogswell, Michael, et al. \"Reducing overfitting in deep networks by decorrelating representations.\", ICLR 2016.", "In addition to the general answer above, we answer each of the points raised below:\n\n1) 'The idea is simple and seems to be working. The methodological novelties seem more-or-less limited, but the theoretical analysis and the intuitive (and well-motivated) modification over CGANs add merits to the paper. ': \n\nWe thank the reviewer for the recognition of the work. \n\n\n2) 'The theoretical analysis of the method relates RoC-GAN to the original GAN, rather than CGAN! What is the connection here? If RoC-GAN is very similar to CGAN from a theoretical point of view (which it seems to be), then all the analysis to relate it to traditional GAN seem useless.': \n\nOn the contrary, we prove that our RoC-GAN shares the same theoretical properties as GAN; this can be seen as a sanity check, confirming that our method shares some beneficial theoretical properties with well-studied methods. \nSimilar proofs are provided in other extensions to cGAN, such as Zhe et al. ([1]).\n\n\n3) 'The extensive experiments in the supplementary material are appreciated. But the authors only compare their method with one single previous work (i.e., Rick Chang et al. (2017)), while there are several similar related works (either based on adversarial training strategies or simple denoising AEs).': \n\nThe goal of this work is not to propose a state-of-the-art network per se, but rather to present a method that is more robust to additional sources of noise. Our model is not architecture dependent. Specifically, RoC-GAN can be seen as a meta algorithm which can be used to augment any existing cGAN model to achieve additional robustness.\nWe scrutinize the robustness under:\ni) similar types of noise, \nii) types of noise not encountered during training, \niii) adversarial perturbations. \nIn addition to those, we also note that our method performs favorably when tested with samples similar to the training distribution. We have included an external method to illustrate that even strong-performing networks can have difficulty in such tasks.\nHowever, if there are some specific works that the reviewers feel are particularly relevant, we are happy to evaluate their pre-trained models.\n\n\n4) 'Ablation studies can further show how each component of the model contributes to the final results. What if we were to only use the two-path generator without adversarial training?
Different components of the final loss function can be removed and analyzed one at a time!':\n\nWe appreciate the reviewer's proposal; indeed, in the synthetic experiment we optimize only the generators to simplify the problem; please see sec. 3.4. In addition, removing the losses one by one is performed in sec. E.2 (appendix). \nEven though our experiments are not exhaustive, we consider that we have covered a wide range of choices; those demonstrate the merits or trade-offs of our RoC-GAN.\n\n\n5) 'What are the conditions for mode-collapse for the proposed GAN? There are no discussions on this.': \n\nWe follow the same strategy as popular methods in cGAN ([2], [3]). We agree with the reviewer that mode collapse is significant, especially in original GAN training; however, there are other works tackling this issue, e.g. [4-6]. \n\n\n[1] Gan, Zhe, et al. \"Triangle generative adversarial networks.\", NIPS 2017.\n[2] Isola, Phillip et al. \"Image-to-Image Translation with Conditional Adversarial Networks\", CVPR 2017.\n[3] Zhu, Jun-Yan et al. \"Toward multimodal image-to-image translation\", NIPS 2017.\n[4] Che, Tong et al. \"Mode Regularized Generative Adversarial Networks\", ICLR 2017.\n[5] Anonymous, \"Generative Adversarial Network Training is a Continual Learning Problem\", under review ICLR 2019.\n[6] Anonymous, \"DISTRIBUTIONAL CONCAVITY REGULARIZATION FOR GANS\", under review ICLR 2019.", "This manuscript proposes a robust version of conditional GAN (named RoC-GAN) that leverages the intrinsic structure in the output space. To achieve robustness, the authors replace the single pathway in the generator with two different pathways that partially share weights. The authors study the theoretical properties of RoC-GAN and prove that it shares the same properties as the vanilla GAN. For quantitative evaluations, the authors use two datasets of natural scenes and faces and evaluate denoising and sparse inpainting using the SSIM metric.\n-\tThe idea is simple and seems to be working. The methodological novelties seem more-or-less limited, but the theoretical analysis and the intuitive (and well-motivated) modification over CGANs add merits to the paper. \n-\tThe theoretical analysis of the method relates RoC-GAN to the original GAN, rather than CGAN! What is the connection here? If RoC-GAN is very similar to CGAN from a theoretical point of view (which it seems to be), then all the analysis to relate it to traditional GAN seem useless.\n-\tThe extensive experiments in the supplementary material are appreciated. But the authors only compare their method with one single previous work (i.e., Rick Chang et al. (2017)), while there are several similar related works (either based on adversarial training strategies or simple denoising AEs).\n-\tAlso, ablation studies can further show how each component of the model contributes to the final results. What if we were to only use the two-path generator without adversarial training? Different components of the final loss function can be removed and analyzed one at a time!\n-\tWhat are the conditions for mode-collapse for the proposed GAN? There are no discussions on this.\n" ]
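The decorrelation term discussed in the exchange above follows the DeCov loss of Cogswell et al. (ICLR 2016, reference [1] in the rebuttal), which penalizes the off-diagonal entries of the covariance matrix of a layer's activations; applied at the encoder output, the matrix is only channels x channels, which is why the cost stays modest. Below is a minimal sketch of that published penalty, assuming PyTorch; it is not the authors' exact code.

```python
import torch

def decov_loss(acts: torch.Tensor) -> torch.Tensor:
    # acts: (batch, channels) activations, e.g. the squeezed encoder output
    centered = acts - acts.mean(dim=0, keepdim=True)
    cov = centered.t() @ centered / acts.shape[0]   # (channels, channels)
    # 0.5 * (||C||_F^2 - ||diag(C)||_2^2): penalize only off-diagonal terms
    return 0.5 * (cov.pow(2).sum() - cov.diagonal().pow(2).sum())

acts = torch.randn(32, 512)   # batch x 1 x 1 x channels, squeezed to 2-D
print(decov_loss(acts))
```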
[ -1, -1, 6, 6, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ -1, -1, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "HklhCxV2a7", "S1gdUGDQAm", "iclr_2019_Byg0DsCqYQ", "iclr_2019_Byg0DsCqYQ", "Hkl9noywp7", "HkewAY1Pam", "B1x_dKfthX", "iclr_2019_Byg0DsCqYQ", "iclr_2019_Byg0DsCqYQ", "B1x_dKfthX", "SygbgUr527", "SyeeXquTh7", "iclr_2019_Byg0DsCqYQ" ]
iclr_2019_Byg5QhR5FQ
Top-Down Neural Model For Formulae
We present a simple neural model that given a formula and a property tries to answer the question whether the formula has the given property, for example whether a propositional formula is always true. The structure of the formula is captured by a feedforward neural network recursively built for the given formula in a top-down manner. The results of this network are then processed by two recurrent neural networks. One of the interesting aspects of our model is how propositional atoms are treated. For example, the model is insensitive to their names, it only matters whether they are the same or distinct.
accepted-poster-papers
This paper presents a method for building representations of logical formulae not by propagating information upwards from leaves to root and making decisions (e.g. as to whether one formula entails another) based on the root representation, but rather by propagating information down from root to leaves. It is a somewhat curious approach, and it is interesting to see that it works so well, especially on the "massive" train/test split of Evans et al. (2018). This paper certainly piques my interest, and I was disappointed to see a complete absence of discussion from reviewers during the rebuttal period despite author responses. The reviewer scores are all middle-of-the-road scores lightly leaning towards accepting, so the paper is rather borderline. It would have been most helpful to hear what the reviewers thought of the rebuttal and revisions made to the paper. Having read through the paper myself, and through the reviews and rebuttal, I am hesitantly casting an extra vote in favour of acceptance: the sort of work discussed in this paper is important and under-represented in the conference, and the results are convincing. I, however, share the concerns outlined by the reviewers in their first (and only) set of comments, and invite the authors to take particular heed of the points made by AnonReviewer3, although all make excellent points. There needs to be some further analysis and explanation of these results. If not in this paper, then at least in follow-up work. For now, I will recommend with medium confidence that the paper be accepted.
train
[ "B1lJNSp7JV", "BJexSD_PnQ", "SJlJykpRAQ", "H1gP6yjq0m", "rkxMYQi9A7", "HyeZl-j507", "H1xTZE693X", "HkeTjz3wnQ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "I've read the new version of the paper and the comments of other reviewers and I've decided to increase my score.", "In this paper the authors propose a neural model that, given a logical formula as input, predicts whether the formula is a tautology or not. Showing that a formula is a tautology is important because if we can classify a formula A -> B as a tautology then we can say that B is a logical consequence of A. The structure of the formula is a feedforward neural network built in a top-down manner. The leaves of this network are vectors (each of them represents a particular occurrence of an atom) which, after the construction of the formula, are processed by some recurrent neural networks.\n\nThe proposed approach seems interesting. However, my main doubt concerns the model. It seems to outperform the state-of-the-art, but the authors do not give any explanations why. There is no theoretical or intuitive explanation of why the model works. Why we need RNNs and not feedforward NNs? I think this is an big issue.\nIn conclusion, I think that the paper is a bit borderline. The model should be better explained. However, I think that the approach is compelling and, after a minor revision, the paper could be considered for acceptance.\n\n[Minor comments]\nPage 4. \n“The dataset contains train (99876 pairs)”, pairs of what?\n\nPage 5. \nWhat is the measure of the values reported in Table 1? Precision? \n", "Thank you, reviewer 1, for your review. I appreciate and understand your position regarding the lack of explanation for the model's performance. However, our field is primarily empirical, and it is common for engineering-oriented papers to produce such results which will only be properly understood and explained by further work. The literature is rife with examples, from GANs to regularization tricks for RNNs. You must ask yourself: are the results sufficiently believable? is the study conducted rigorously? and have the authors attempted to explain and discuss them to a reasonable extent? Please read the author response, revisions to the paper, and be prepared to reconsider your assessment or provide further justification as to why you stand by your current score, if that is what you choose to do.", "Thank you for your comments. Indeed, the question why and when a top-down model outperforms a bottom-up model is crucial. However, as you have pointed out, it is likely a difficult question to answer. A new Section 3.1 was added to a revised version of the paper, where the inner working of the model is briefly analyzed. A top-down model was also tested on formulae from another dataset. Although the results are hard to compare directly, it seems that the model does not exploit just one particular dataset. Similarly, we can reformulate a TAUT-problem as a SAT-problem by taking the negation of formula. The results remain similar on the dataset from Evans et al., however, this is hardly surprising, because the problem remains essentially the same from the point of view of a top-down approach.\n\nAll your minor comments were incorporated into a revised version of the paper.", "Thank you for your comments. You are right that the inner working of the model is unclear. A brief Section 3.1 was added to a revised version of the paper, where the produced models are shortly analyzed. Hopefully, it sheds some light on the model.\n\nConcerning your second point, RNNs are used because they fit nicely into the model. 
It makes it possible to have a potentially unlimited number of occurrences of an atom and of distinct atoms in a formula, which is a nice feature of the model. Hence the model can evaluate formulae that contain more atoms than the formulae used for training. It is possible to use feedforward NNs, but it seems that we would then mimic the unfolding of RNNs.\n\nBoth your minor comments were incorporated into a revised version of the paper.", "Thank you for your comments. A new Section 3.1 was added to a revised version of the paper, where the inner working of the model is briefly discussed. Although it is definitely far from being conclusive, it, hopefully, sheds some light on the model.\n\nYour description (point 7) of how the model can possibly work corresponds to the idea behind the model as described in Section 2 and discussed in the new Section 3.1. An interesting point in your text is that values may change their positions in lists of truth values. In fact, something like that can actually happen, but so far, it is really unclear how to do this, because such changes have to be (almost) consistent through the whole model. Moreover, to make things even more complicated, different atoms occur at different levels (their depth) in a formula.\n\nYou are right (point 2) that the model, in its current form, cannot produce suitable vector encodings of propositions. For example, the model is invariant to the renaming of atoms. However, for formulae where this is no longer an issue, e.g., sentences in FOL, it is possible to imagine such interpretations even using a top-down approach.", "In this paper, the authors provide a new neural-net model of logical formulae. The key feature of the model is that it gathers information about a given formula by traversing its parse tree top-down. One neural net of the model traverses the parse tree of the formula from the root all the way down toward the leaves, and generates vectors for the leaves of the tree. Then, another RNN-based neural net collects these generated vectors, and answers a query asked about the formula, such as logical entailment. When tested on Evans et al.'s data set of logical entailment queries, the authors' model outperforms existing models that encode formulae by traversing their parse trees bottom-up.\n\nI found the idea of traversing a parse tree of a formula top-down and converting it to a vector very interesting. It is also good to know that the idea leads to a competitive model for at least one dataset. \n\nHowever, I am hesitant to be a strong supporter of this paper. I feel that the pros and cons of the model and its design decisions are not fully analyzed or explained in the paper; when reading this paper, I wanted to learn a rule of thumb for deciding when (and why, if so) a top-down model of logical formulae works better than a bottom-up model. I understand that what I ask for is very difficult to answer, but experiments with more datasets and different types of queries (such as satisfiability) might have made me happier.\n\nHere are some minor comments.\n\n* Abstract: I couldn't quite understand your point about atoms. According to Figure 1, there is a neural net for each propositional symbol, and this means that your model tracks information about which occurrences of propositional symbols are about the same one. Is your point about the insensitivity of your model to a specific name given to each symbol?
\n\n* p1: this future ===> this feature\n\n* p2: these constrains ===> these constraints\n\n* p2: recursively build model ===> recursively built model\n\n* p2: Change the font of R in the codomain of ci.\n\n* p3: p1 at the position of ===> p1 is at the position of\n", "Cons\n\n1.\tThere is no study of the representations developed by the model, which is unfortunate because this is a conference on learning representations and because there is little light shed on how the network achieves its rather high level of performance.\n2.\tIt seems less generally useful to have such a special-purpose network for computing global properties like tautologicality than to have a network that produces actual vector encodings of propositions, as typical of the bottom-up tree-structured models.\n\nPros\n\n3.\tThe paper is quite clear.\n4.\tThe problem is important.\n5.\tThe paper pursues the familiar path of a tree-structured network isomorphic to the parse tree of a propositional-calculus formula, but with the original twist of passing information top-down rather than bottom-up.\n6.\tThe results are impressively strong. In particular, it improves by 10% absolute over the special-purpose and highly performant PossibleWorldNet on the most difficult category of problems, the ‘massive’ category, achieving 83.6% accuracy.\n\nPro/Con mix\n\n7.\tAlthough the paper did not provide much insight into what was going on in the network to allow it to perform well (point 1 in ‘Cons’), I was able to convince myself I could understand a way the architecture *could* succeed (whether this possible approach matches the actual processing in the model I have no way of assessing). In brief, the vector that is passed down the network can be thought of as a list of truth values across multiple possible worlds of the tree node at which the vector resides. To search for a counterexample to tautologicalhood, the original input vector to the root node could be the zero (false) vector. If the kth value in the vector at a parent node labeled ‘or’ is 0 (the disjunction is false in world k) then in the two children the kth value must also be 0. If the kth value of the vector at an XOR node is 0, the kth value of the two children must both be 0 or both be 1; actually these values need not reside in position k so the children could both have value 0 at some position i and both have value 1 at another position j. Then in the RNN-Var component of the network, which checks for consistency across multiple tokens of the same proposition variable, each position k in all vectors for the same variable can be checked for equality, producing a value 1 in the output vector if all have value 1, producing 0 if all have value 0, and producing value -1 if the values do not all agree. Then RNN-All checks across all vectors for proposition variable types to see if there’s a position k in which no value -1 occurs; if so, the values of the variable vectors at position k give the truth values for all variables such that the overall proposition has the desired value 0: a counterexample exists. If no such position k exists, the proposition is a tautology. This seems roughly right, at least." ]
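The mechanism hypothesized in point 7 above has a simple symbolic analogue: push a target truth value down the parse tree and check whether the atoms can be assigned consistently, which is exactly a top-down counterexample search. The sketch below is an illustrative, non-neural rendering of that reading; the tuple encoding of formulae is an assumption, not the paper's representation.

```python
from itertools import product

TABLES = {
    'or':  lambda a, b: a or b,
    'and': lambda a, b: a and b,
    'xor': lambda a, b: a != b,
}

def realizable(node, target, env):
    """Yield atom assignments extending env under which node evaluates to target."""
    op = node[0]
    if op == 'var':
        if node[1] in env:
            if env[node[1]] == target:
                yield env
        else:
            yield {**env, node[1]: target}
    elif op == 'not':
        yield from realizable(node[1], not target, env)
    else:
        # propagate every pair of child targets consistent with the parent value
        for a, b in product((True, False), repeat=2):
            if TABLES[op](a, b) == target:
                for e in realizable(node[1], a, env):
                    yield from realizable(node[2], b, e)

def is_tautology(formula):
    # a formula is a tautology iff no assignment realizes the value False
    return next(realizable(formula, False, {}), None) is None

p, q = ('var', 'p'), ('var', 'q')
assert is_tautology(('or', p, ('not', p)))   # p or not p
assert not is_tautology(('or', p, q))        # falsified by p = q = False
```

A neural model would have to approximate this search with fixed-width vectors, which is where the review's intuition of one candidate truth value per vector position, one possible world each, comes in.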
[ -1, 6, -1, -1, -1, -1, 6, 6 ]
[ -1, 2, -1, -1, -1, -1, 3, 4 ]
[ "SJlJykpRAQ", "iclr_2019_Byg5QhR5FQ", "BJexSD_PnQ", "H1xTZE693X", "BJexSD_PnQ", "HkeTjz3wnQ", "iclr_2019_Byg5QhR5FQ", "iclr_2019_Byg5QhR5FQ" ]
iclr_2019_BygANhA9tQ
Cost-Sensitive Robustness against Adversarial Examples
Several recent works have developed methods for training classifiers that are certifiably robust against norm-bounded adversarial perturbations. These methods assume that all the adversarial transformations are equally important, which is seldom the case in real-world applications. We advocate for cost-sensitive robustness as the criteria for measuring the classifier's performance for tasks where some adversarial transformations are more important than others. We encode the potential harm of each adversarial transformation in a cost matrix, and propose a general objective function to adapt the robust training method of Wong & Kolter (2018) to optimize for cost-sensitive robustness. Our experiments on simple MNIST and CIFAR10 models with a variety of cost matrices show that the proposed approach can produce models with substantially reduced cost-sensitive robust error, while maintaining classification accuracy.
accepted-poster-papers
This paper studies the notion of certified cost-sensitive robustness against adversarial examples, by building from the recent [Wong & Kolter'18]. Its main contribution is to adapt the robust classification objective to a 'cost-sensitive' objective that weights labelling errors according to their potential damage. This paper received mixed reviews, with a clear champion and two skeptical reviewers. On the one hand, they all highlighted the clarity of the presentation and the relevance of the topic as strengths; on the other hand, they noted the relatively little novelty of the paper relative to [W & K'18]. Reviewers also acknowledged the diligence of the authors during the response phase. The AC mostly agrees with these assessments, and taking them all into consideration, he/she concludes that the potential practical benefits of cost-sensitive certified robustness outweigh the limited scientific novelty. Therefore, he/she recommends acceptance as a poster.
train
[ "HJg7nLR0n7", "Byx0aQAY0X", "ByxW5QAYCX", "B1xUE7Rt0m", "HJxSIK7VRX", "rJgLQKrMRX", "ryxEPIvKhm", "ByeXwZZzCm", "HJlIr-JG0X", "SygPPR_l0X", "B1e-WxMi6Q", "HklCJkMs6X", "SklzwA-oaQ", "H1llJV-z6X", "B1lNPh9c3Q" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "The authors define the notion of cost-sensitive robustness, which measures the seriousness of adversarial attack with a cost matrix. The authors then plug the costs of adversarial attack into the objective of optimization to get a model that is (cost-sensitively) robust against adversarial attacks.\n\nThe initiative is novel and interesting. Considering the long history of cost-sensitive learning, the proposed model is rather ad-hoc for two reasons:\n\n(1) It is not clear why the objective should take the form of (3.1). In particular, if using the logistic function as a surrogate for 0-1 loss, shouldn't the sum of cost be in front of \"log\"? If using the probability estimated from the network in a Meta-Cost guided sense, shouldn't the cost be multiplied by the probability estimate (like 1/(1+exp(...))) instead of the exp itself? The mysterious design of (3.1) makes no physical sense to me, or at least other designs used in previous cost-sensitive neural network models like\n\nChung et al., Cost-aware pre-training for multiclass cost-sensitive deep learning, IJCAI 2016\nZhou and Liu, Training cost-sensitive neural networks with methods addressing the class imbalance problem, TKDE 2006 (which is cited by the authors)\n\nare not discussed nor compared.\n\nUpdate: I thank the authors for providing updated information in the Appendix discussing about other alternatives. While I still think it worth comparing with other approaches (as it is still not clear whether Khan's approach is regarded as state-of-the-art for *general* cost-sensitive deep learning), I think the authors have sufficiently justified their choice.\n\n(2) It is not clear why the perturbed example should take the cost-sensitive form, while the original examples shouldn't (as the original examples follow the original loss). Or alternatively, if we optimize the original examples by the cost-sensitive loss, would it naturally achieve some cost-sensitive robustness (as the model would naturally make it harder to make high-cost mistakes)? Those issues are yet to be studied.\n\nUpdate: I thank the authors for providing additional experiments on this part.\n", "Thank you for your consideration, we have included our discussions on the choices of cost matrices in Appendix D. ", "Madry et al., (2018) is based on robust training against adversarially generated images devised via PGD attacks, which is not targeted for certifiable robustness. Thus, investigation on how to make PGD-based robust training cost-sensitive is beyond the scope of our work. \n\nTo the best of our knowledge, before the submission of our paper there are only two proposed certifiable robust training methods: one is Wong & Kolter (2018) and the other one is [1]. Compared with Wong & Kolter (2018), [1] is only applicable to neural networks with two layers, thus we focus our experiments on Wong & Kolter (2018) that is more general. Recently, [2] extends the method of [1] to arbitrary number of neural network layers, thus it would be interesting to study whether our approach is applicable to the robust model developed in [2]. \n\nReference:\n[1] Raghunathan, et al., Certified Defenses against Adversarial Examples. https://arxiv.org/abs/1801.09344\n[2] Raghunathan, et al., Semidefinite relaxations for certifying robustness to adversarial examples. https://arxiv.org/abs/1811.01057\n", "1. Comparison with other alternatives\n(a) We have added an equivalent form of the cost-sensitive CE loss for standard classification as in (B.1) in Appendix. 
The derivation of (B.1) simply follows the definition of the cross entropy loss and the modified softmax outputs y_n as defined in (11) of [1]. Our robust classifier basically applies the same techniques to the guaranteed robust bound to induce cost-sensitivity in the adversarial setting.\n\n(b) Indeed, [1] introduces other cost-sensitive losses, including the MSE loss and the SVM hinge loss, besides the cost-sensitive CE loss. However, they only evaluated the cost-sensitive CE loss in their experiments, as [1] argues that CE loss usually performs best among the three loss functions for multiclass image classification. Thus, we consider cost-sensitive robust optimization based on CE loss the most promising approach.\n\n(c) We have cited [1] in Section 3.2 of the main paper in the revised pdf.\n\n(d) Please refer to Appendix C for the discussion of existing related work on cost-sensitive learning with neural networks, and explanations of why we choose to incorporate cost information into the cross entropy loss, instead of other loss functions.\n\n(e) The proposed robust training objective adapts the cost-sensitive CE loss to the adversarial setting, and the cost-sensitive CE loss is aligned with the idea of minimizing the Bayes risks (see equation (1) in MetaCost). More specifically, it is proved in Lemma 10 of [1] that the cost-sensitive CE loss is c-calibrated, or more concretely, there exists an inverse relationship between the optimal CNN output and the Bayes cost of the t-th class.\nTherefore, minimization of the cost-sensitive CE loss will lead to a classifier whose risks are closer to the optimal Bayes risks.\n\n2. As requested, we have added Appendix C to survey related works on cost-sensitive learning for non-adversarial settings and explain the reasoning behind the techniques we choose.\n\n3. Quoting from the reviewer, “What happens if the original examples are also evaluated/optimized cost-sensitively”, if you are referring to the robustness of a standard cost-sensitive learning method, this is what the experiment in Appendix B.3 tests. The results show naive cost-sensitive learning does not lead to cost-sensitive robustness.\n\nReference\n[1]. Khan, et al., Cost-Sensitive Learning of Deep Feature Representations from Imbalanced Data. https://arxiv.org/abs/1508.03422", "Thank you for providing the feedback to clarify my concerns on novelty and experiments. There is no denying that cost-sensitive adversarial learning is an interesting topic worth exploring. I appreciate the efforts the authors put into introducing and evaluating a cost-sensitive extension of the robust DL model of Wong & Kolter (2018). However, what I found most frustrating about this work is its strong A-plus-B flavor, which I still don’t think can stand for a novel scientific paper. Moreover, the adversarial learning part of the current method builds largely on (Wong & Kolter 2018). This looks somewhat narrow given the fairly broad range of the paper title and claims. I suggest including one or two additional certified robust learning models (e.g., PGD by Madry et al., 2018) into the proposed framework to better justify the importance of cost-sensitive robust learning. ", "Thanks to the authors for clarifying.\n\nFor point 1, I understand why the authors believe that their (3.1) is not ad hoc. Nevertheless, the authors' answers actually justify that (3.1) is perhaps not well-compared with other alternatives.
\n(a) (3.1) does not look the same as the cost-sensitive CE loss in [1], so why (3.1) is used instead of [1] is a question mark.\n(b) [1] contains more losses than the cost-sensitive CE loss, and why a variant of the cost-sensitive CE loss is used is another question mark.\n(c) Even if (3.1) is a variant of the cost-sensitive CE loss in [1], it hasn't been cited in this paper anyway?\n(d) Quoting the authors, \"transformations that induce larger cost will receive larger penalization by minimizing the cost-sensitive CE loss\", but there are many different functions that achieve the property, including many discussed in other papers. Why or why not choose (3.1)?\n(e) Maybe it is because the derivation in Section 3 is way too short. But I fail to see how the authors follow MetaCost to \"multiply the probability estimates by the cost, but the result vector has to be normalized before plugging into the cross entropy loss\" and get (3.1). More detailed derivations are needed.\n\nI respectfully disagree with the authors' point on 2, as (IMHO) the authors are not using a state-of-the-art cost-sensitive objective (MetaCost is clearly outdated as evidenced by dozens of papers, and even [1] is just for the imbalanced setting, not for the general cost-sensitivity that the authors want to achieve). So the current paper is like \"adversarial learning + some cost-sensitive objective\" gets better performance in the cost-sensitive setting. But cost-sensitive learning is a field that has been studied for more than 20 years. Why should we stick to \"some cost-sensitive objective\" but not a \"good/state-of-the-art cost-sensitive objective\" when introducing cost-sensitivity to the adversarial setting? At least I demand to see a complete literature review on the cost-sensitive side (for the non-adversarial setting) and see the reasoning of the authors on the techniques that they choose to introduce to the adversarial learning field.\n\nI can accept the authors' explanations on points 3 and 4, but still feel that it would be good to see what happens if the original examples are also evaluated/optimized cost-sensitively.\n\n\n", "** review score incremented following discussion below **\n\nStrengths:\n\nWell written and clear paper\nIntuition is strong: not all source-target class pairs are as beneficial to find adversarial examples for \n\nWeaknesses:\n\nCost matrix choices feel a bit arbitrary in the experiments\nCIFAR experiments still use very small norm-balls\n\nThe submission builds on seminal work by Dalvi et al. (2004), which studied cost-sensitive adversaries in the context of spam detection. In particular, it extends the approach to certifiable robustness introduced by Wong and Kolter with a cost matrix that specifies for each pair of source-target classes whether the model should be robust to adversarial examples that are able to take an input from the source class to the target (or conversely whether these adversarial examples are of interest to an adversary).\n\nWhile the presentation of the paper is overall of great quality, some elements from the certified robustness literature could be recalled to ensure that the paper is self-contained. For instance, it is unclear how the guaranteed lower bound is derived without reading prior work. Adding this information to the present submission would make it easier for the reader to follow not only Sections 3.1 and 3.2 but also the computations behind Figure 1.b.
\n\nThe experimental results are clearly presented, but some of the details of the experimental setup are not always justified. If you are able to clarify the following choices in your rebuttal, this would help revise my review. First, the choice of cost matrices feels a bit arbitrary and somewhat cyclical. For instance, binary cost matrices for MNIST are chosen according to results found in Figure 1.b, but then later the same bounds are used to evaluate the performance of the approach. Yet, adversarial incentives may not be directly correlated with the “hardness” of a source-target class pair as measured in Figure 1.b. The real-valued cost matrices are better justified in that respect. Second, would you be able to provide additional justification or analysis of the choice of the epsilon parameter for CIFAR-10? For MNIST, you were able to improve the epsilon parameter from epsilon=0.1 to epsilon=0.2, but for CIFAR-10 the epsilon parameter is identical to Wong et al. Does that indicate that the results presented in this paper do not scale beyond simple datasets like MNIST?\n\nMinor comments:\n\n\nP2: The definition of adversarial examples given in Section 2.2 is a bit too restrictive, and in particular only applies to the vision domain. Adversarial examples are usually described as any test input manipulated by an adversary to force a model to mispredict.\nP3: typo in “optimzation” \nP5: trade off -> trade-off \nP8: the font used in Figure 2 is small and hard to read when printed.\n", "Thank you, the reduced effectiveness of the approach in settings where adversarial incentives do not align with class-pair hardness is the limitation I was concerned about in my previous comment. It would be great to add some elements of this discussion to the paper because your comments would help readers better understand cost matrices, as well as their applicability. \n\nI will increase my review score by one to take into account the outcome of this discussion.", "We don’t see any intrinsic reason why the class transformation difficulty is correlated with adversarial value, but the actual value and difficulty should depend on the application. The results in Table 1 show that the cost-sensitive robustness can harden both “easy” (4->9, robust error reduces from 10.08% to 1.02%) and “hard” (0->2, robust error reduces from 0.92% to 0.38%) transformations - the improvement is bigger for the “easy” transformation, but even after the cost-sensitive robustness hardening, it remains slightly “easier” than the “hard” transformation in the overall robust model. \n\nFor the MNIST classes, there is no correlation between the adversarial value (in the toy check fraud motivation) and transformation difficulty, since adversarial value is directional and semantically different digits can look more similar than far-apart ones. For a more realistic security application, it would be desirable to define the classes in such a way that the valuable adversarial transformations are also the hardest ones to achieve.\n", "Thank you for taking the time to write a response to my review. \n\nRegarding 1., the explanation does provide some useful context for the choice of cost matrices. Do you have an intuition as to whether adversarial incentives will always correlate with transformation hardness? In other words, could there exist settings where the adversary would benefit more from a change in class that is relatively easy to make (and hard to defend against) compared to other class pairs?
\n\nThank you for providing additional experimental results regarding 2. ", "We’ve added an Appendix B.3 to the revised paper that addresses the question you raised about whether standard cost-sensitive loss trained on original examples would improve cost-sensitive robustness. The results from our experiments show that standard cost-sensitive loss does not result in a classifier with cost-sensitive robustness.", "We hope the following explanations address your questions:\n\n1. Regarding the choice of the cost matrices\nOur goal in the experiments was to evaluate how well a variety of different types of cost matrices can be supported. MNIST and CIFAR-10 are toy datasets, thus defining cost matrices corresponding to meaningful security applications for these datasets is difficult. Instead, we selected representative tasks and designed cost matrices to capture them. Our experimental results show the promise of the cost-sensitive training method works across a variety of different types of cost matrices, so we believe it can be generalized to other cost matrix scenarios that would be found in realistic applications.\n\nIt is a good point that the cost matrices that were selected based on the robust error rates in Fig 1B are somewhat cyclical, but it does not invalidate our evaluation. We use the “hardness” of adversarial transformation between classes only for choosing representative cost matrices, and the robust error results on the overall-robustness trained model as a measure for transformation hardness. Further, the transformation hardness implied by the robust error heatmap is generally consistent with intuitions about the MNIST digit classes (e.g., “9” and “4” look similar so are harder to make robust to transformation), as well as with the visualization results produced by dimensional reduction techniques, such as t-SNE [1]. \n\n2. Regarding the choice of epsilon for CIFAR-10\nIn our CIFAR-10 experiments, we set epsilon=2/255, the same experimental setup as in [2]. Our proposed cost-sensitive robust classifier can be applied to larger epsilon for CIFAR-10 dataset, and similar improvements have been observed for different epsilon settings. In particular, we have run experiments on CIFAR-10 with epsilon varying from {2/255, 4/255, 6/255} for the single seed task. The comparison results are reported in Figure 5(b), added to the revised PDF. These results support the generalizability of our method to larger epsilon settings.\n\n[1] Maaten and Hinton, Visualizing Data using t-SNE. http://www.jmlr.org/papers/v9/vandermaaten08a.html\n[2] Wong, et al., Scaling Provable Adversarial Defenses. https://arxiv.org/abs/1805.12514\n", "Thank you for your review. Please see our responses below.\n\n1. Concern regarding the novelty\nThe review correctly notes that the method we use to achieve cost-sensitive robustness is a straightforward extension to the training procedure in Wong & Kolter (2018). The novelty of our paper lies in the introduction of cost-sensitive robustness as a more appropriate criteria to measure classifier’s performance, and in showing experimentally that the cost-sensitive robust training procedure is effective. Previous robustness training methods were designed for overall robustness, which does not capture well the goals of adversaries in most realistic scenarios. We consider it an advantage that our method enables cost-sensitive robustness to be achieved with straightforward modifications to overall robustness training.\n\n2. 
Limitation in data scale\nWe agree with the reviewer that certified robustness methods, including our work, are a long way from scaling to interesting models. All previous work on certified adversarial defenses has been limited to simple models on small or medium sized datasets (e.g., [1-3] below), but there is growing awareness that non-certified defenses are unlikely to resist adaptive adversaries and strong interest in scaling these methods. The method we propose and evaluate for incorporating cost-sensitivity in robustness training is generic enough that we expect it will also work with most improvements to certifiable robustness training. So, even though our implementation is not immediately practical today, we believe our results are of scientific interest, and the methods we propose are likely to become practical as rapid progress continues in scaling certifiable defenses. \n\n\n[1] Wong and Kolter, Provable defenses against adversarial examples via the convex outer adversarial polytope. https://arxiv.org/abs/1711.00851\n[2] Raghunathan, et al., Certified Defenses against Adversarial Examples. https://arxiv.org/abs/1801.09344\n[3] Wong, et al., Scaling Provable Adversarial Defenses. https://arxiv.org/abs/1805.12514\n", "Thank you for your review. Your comments about the model being ad hoc stem from a few misunderstandings, which we hope to clarify:\n\n1. Justification of training objective (3.1)\nThe design of (3.1) is not ad hoc, but follows from previous cost-sensitive learning work such as MetaCost, and is inspired by the cost-sensitive CE loss (see equation (10) of [1] for a detailed definition). To be specific, class probabilities for cost-sensitive CE loss are computed by multiplying the corresponding cost and then normalizing the result vector. As a result, transformations that induce larger cost will receive larger penalization by minimizing the cost-sensitive CE loss. We neglected to include this explanation in the paper, and will revise it to make this clear. \n\nFor the first question, moving the sum of cost in front of “log” is unreasonable because the loss for each seed example will not be a negative log-likelihood term as in the case of cross-entropy. We can check the sanity of the objective by examining whether it reduces to standard CE loss if we set C = 1*1^\\top-I. For the second question, we indeed multiply the probability estimates by the cost, but the result vector has to be normalized before plugging into the cross entropy loss. Thus, the sum of cost will appear in front of the “exp” term.\n\n2. Comparison with other alternative designs\nThe cost-sensitive neural network models you mentioned are only demonstrated to be effective in the non-adversarial settings, whereas we show that our proposed classifier is effective in the adversarial setting. Thus, comparing our method with theirs is not appropriate, since it is unclear whether such alternative cost-sensitive models can be adapted and remain effective in the adversarial setting. Even if they can be adapted, it is still not the main focus of our paper, as our main goal is to show that our proposed classifier achieves significant improvements in cost-sensitive robustness in comparison with models trained for overall robustness.\n\n3. 
Why are the original examples not in cost-sensitive form?\nThe training objective (3.1) is constructed for maximizing both cost-sensitive robustness and standard classification accuracy, and allows us to use the alpha hyperparameter to control the weighting between these goals. Thus, the first term in (3.1) doesn't involve cost-sensitivity. We regard the standard classification accuracy as an important criterion for measuring classifier performance. Besides, the cost matrix for misclassification of original examples might be different from the cost matrix of adversarial transformations. For instance, misclassifying a benign program as malicious may still induce some cost in the non-adversarial setting, whereas the adversary may only benefit from transforming a malicious program into a benign one. In a scenario where the model is cost-sensitive regardless of adversaries, it could make sense to incorporate a cost-sensitive loss function as the first term also, but we have not explored this and are focused on the adversarial setting where cost-sensitivity is with respect to adversarial goals.\n\n4. What if we only optimize the original examples by cost-sensitive loss\nGiven the vulnerability of deep learning classifiers against adversarial examples, we highly doubt that optimizing only the original training examples with the cost-sensitive loss would achieve significant cost-sensitive robustness (this expectation is based on how poorly models trained with the goal of overall accuracy do at achieving overall robustness). To be more convincing, we are running an experiment to test the robustness of a standard cost-sensitive classifier and will post the results soon.\n\nReference\n[1]. Khan, et al., Cost-Sensitive Learning of Deep Feature Representations from Imbalanced Data. https://arxiv.org/abs/1508.03422\n", "The paper introduces a new concept of certified cost-sensitive robustness against adversarial attacks. A cost-sensitive robust optimization formulation is then proposed for deep adversarial learning. Experimental results on two benchmark datasets (MNIST, CIFAR-10) are reported to show the superiority of the proposed method over the overall robustness method, with both binary and real-valued cost matrices. \n\nThe idea of cost-sensitive adversarial deep learning is well motivated. The proposed method is clearly presented and the results are easy to assess. My main concern is about the novelty of the approach, which looks mostly incremental as a rather direct extension of the robust model (Wong & Kolter 2018) to the cost-sensitive setting. Particularly, the duality lower-bound based loss function and its related training procedure are almost identical to those from (Wong & Kolter 2018), up to certain trivial modifications to respect the pre-specified misclassification costs. The numerical results show some promise. However, as a practical paper, the current empirical study appears limited in data scale: I believe additional evaluation on more challenging data sets can be useful to better support the importance of the approach. \n\nPros: \n\n- The concept of certified cost-sensitive robustness is well motivated and clearly presented.\n\nCons:\n\n- The novelty of the method is mostly incremental given the prior work of (Wong & Kolter 2018).\n- Numerical results show some promise of cost-sensitive adversarial learning in the considered settings, but are still not supportive enough of the importance of the approach.\n\n" ]
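For concreteness, the cost-weighted cross-entropy debated in this thread admits the following reading, consistent with the rebuttal's sanity check that C = 1*1^T - I recovers the standard CE loss: each wrong-class term of the softmax denominator is weighted by its misclassification cost. This is a plausible reconstruction, assuming PyTorch; it is not the paper's verbatim objective (3.1), which applies the same weighting to the guaranteed robust bound rather than to clean logits.

```python
import torch

def cost_sensitive_ce(logits, targets, C):
    # logits: (B, K); targets: (B,) long; C: (K, K) cost matrix, zero diagonal
    z_y = logits.gather(1, targets.unsqueeze(1))      # z_y for each example
    weighted = C[targets] * torch.exp(logits - z_y)   # c_{y,j} * e^{z_j - z_y}
    # per-example loss: log(1 + sum_{j != y} c_{y,j} * e^{z_j - z_y})
    return torch.log1p(weighted.sum(dim=1)).mean()

K = 10
logits, targets = torch.randn(4, K), torch.tensor([0, 1, 2, 3])
C = torch.ones(K, K) - torch.eye(K)                   # the sanity-check cost matrix
standard = torch.nn.functional.cross_entropy(logits, targets)
assert torch.allclose(cost_sensitive_ce(logits, targets, C), standard, atol=1e-5)
```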
[ 5, -1, -1, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ 4, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2019_BygANhA9tQ", "ByeXwZZzCm", "HJxSIK7VRX", "rJgLQKrMRX", "SklzwA-oaQ", "H1llJV-z6X", "iclr_2019_BygANhA9tQ", "HJlIr-JG0X", "SygPPR_l0X", "HklCJkMs6X", "HJg7nLR0n7", "ryxEPIvKhm", "B1lNPh9c3Q", "HJg7nLR0n7", "iclr_2019_BygANhA9tQ" ]
iclr_2019_BygfghAcYX
The role of over-parametrization in generalization of neural networks
Despite existing work on ensuring generalization of neural networks in terms of scale sensitive complexity measures, such as norms, margin and sharpness, these complexity measures do not offer an explanation of why neural networks generalize better with over-parametrization. In this work we suggest a novel complexity measure based on unit-wise capacities resulting in a tighter generalization bound for two layer ReLU networks. Our capacity bound correlates with the behavior of test error with increasing network sizes (within the range reported in the experiments), and could partly explain the improvement in generalization with over-parametrization. We further present a matching lower bound for the Rademacher complexity that improves over previous capacity lower bounds for neural networks.
accepted-poster-papers
I agree with the reviewers that this is a strong contribution and provides new insights, even if it doesn't quite close the problem. p.s.: It seems that centering the weight matrices at initialization is a key idea. The authors note that Dziugaite and Roy used bounds that were based on the distance to initialization, but that their reported numerical generalization bounds also increase with increasing network size. Looking back at that work, they look at networks where the size increases by a very large factor (going from e.g. 400,000 parameters roughly to over 1.2 million, so a factor of 2.5); at the same time, the bound increases by a much smaller factor. The type of increase also seems much less severe than those pictured in Figures 3/5. Since Dziugaite and Roy's bounds involved optimization, perhaps the increase there is merely apparent.
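The per-unit quantities discussed in this thread, the distance of a hidden unit's incoming weights from initialization and the norm of its outgoing weights, are easy to probe empirically. Below is a toy sketch on synthetic data, assuming PyTorch; it illustrates the measurement only, not the paper's experiments.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
d, h, n = 20, 512, 256
X = torch.randn(n, d)
y = (X[:, 0] > 0).long()                       # a simple synthetic task

W1, W2 = torch.nn.Linear(d, h), torch.nn.Linear(h, 2)
U0 = W1.weight.detach().clone()                # u_i^0: rows at initialization
opt = torch.optim.SGD(list(W1.parameters()) + list(W2.parameters()), lr=0.1)
for _ in range(500):
    loss = F.cross_entropy(W2(torch.relu(W1(X))), y)
    opt.zero_grad(); loss.backward(); opt.step()

beta = (W1.weight.detach() - U0).norm(dim=1)   # ||u_i - u_i^0|| per hidden unit
alpha = W2.weight.detach().norm(dim=0)         # ||v_i||: outgoing weights of unit i
print(beta.mean().item(), alpha.mean().item())
```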
train
[ "Sygwq4iZC7", "rklmVdS_AX", "BJeOrBBOCm", "SkgLfnBiTm", "SkezHFcITm", "SJeStEOcTm", "HklzMNu9TX", "rkeQQzTK3m" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer" ]
[ "The authors aim to shed light on the role of over-parametrization in generalization error. They do so for the special case of 2 layer fully connected ReLU networks, a \"simple\" setting where one still sees empirically that the test error decreasing as over-parametrization increases.\n\nBased on empirical observations of norms (and norms relative to initialization) in trained overparametrized networks, the authors are led to the definition of a new norm-bounded class of neural networks. Write u_i for the vector of weights incoming to hidden node i. Write v_i for the weights outgoing from hidden node i. They study classes where the Euclidean norm of v_i is bounded by a constant alpha_i and where the Euclidean norm of u_i - u^0_i is bounded by beta_i, where u^0_i is the value of u_i after random initialization. Call this class F_{alpha,beta} where alpha,beta are specific vectors of bounds.\n\nThe main result is a bound on the empirical Rademacher complexity of F_{alpha,beta}. \nThe authors also given lower bounds on the empirical Rademacher complexity for carefully chosen data points, showing that the bounds are tight. These Rademacher bounds yield standard bounds on the ramp loss for fixed alpha,beta, and margin, and then a union bound argument extends the bound to data-dependent alpha,beta and margin.\n\nThe authors compare the bounds to existing norm-based bounds in the literature. The basic argument is that the terms in other bounds tend to grow as networks get much larger, while their terms shrink. Note that at no point are the bounds in this paper \"nonvacuous\", ie they are always larger than one.\n\nIn summary, I think this is a strong paper. The explanatory power of the results are still oversold in my opinion, even if they use hedged language like \"could explain the role...\". But the work is definitely pointing the way towards an explanation and deserves publication. The technical results in the appendix will be of interest to the learning theory community.\n\nissues:\n\n\"could explain role of over-parametrization\". Perhaps this work might point the way to an explanation, but it does not yet provide an explanation. It is a big improvement it seems.\n\n\"bound improves over the existing bounds\". From this statement and the discussion comparing the bounds, it is not clear whether this bound formally dominates existing bounds or merely does so empirically (or under empirical conditions). \n\ntypos: \n\nbigger than the Lipschitz CONSTANT of the network class\n\nH undefined\n\nRademacher defined for H but must be defined on loss class (or a generic function class, not H)\n\n\"we need to cover\" --> \"it suffices to\"\n\n\"the following two inequaliTIES hold by Lemma 8\"\n\nbibliography is a mess: half of the arxiv papers are published. typos everywhere, very sloppy.\n\n(This review was requested late in the process due to another reviewer dropping out of the process.)\n\n[UPDATE]. The authors addressed my concerns stated in my review above. I think the bibliography has improved and I recommend acceptance. ", "We thank all reviewers for their useful feedback. The final revision is uploaded . This version has addressed all reviewers' comments. We believe that the quality of our paper has improved in the discussion process. We again thank all reviewers for their time and effort.", "Thank you for your valuable feedback. We have uploaded a revision addressing all your comments. 
\n\nIn particular, we have made the following changes:\n\n1) Improved the bibliography significantly \n2) Toned down the claims of the paper \n3) Fixed typos. \n\nThanks again for pointing to these issues.", "Thank you for the quick reply, at this point I believe both of the major issues are properly addressed, and the proofs are rigorous. As promised, I would recommend accepting this paper. \n\nOne more minor typo in Lemma 10 - in the last equation block where we plug in the value of \\| alpha \\|_p, I believe you initially plugged in the value of p-th power of it. Instead I believe it should be \n beta D^{1/2 - 1/p} (1 + D/K)^{1/p}\nOnce again, this is a very minor issue, and I can see the rest of the results follow from this correction. ", "Let me start by apologizing for the delayed review - in fact I was asked today to replace an earlier assigned reviewer. Hopefully the clarifications I request won't be too time consuming to meet the deadline coming up. \n\n###\n\nFirst of all, the problem which the authors are attempting to answer is quite important: the effect of over-parametrization is not well understood on a theoretical level. As the paper illustrate, 2-layer networks are already capable of generalizing while being over-parameterized, therefore justifying their setting. \n\nNext this paper motivates the study of complexity quantities that tend to decrease with the number of parameters, in particular figure 3 motivates the conjecture that the complexity measure in Theorem 2 can control generalization error. The paper also does a great job comparing related work, motivating their results. \n\n###\n\nAt this point, I would like to request a couple of clarifications in the proofs. Perhaps it's due to the fact that I only spent a day reading, but at least I think we could improve on its readability. Regardless, I currently do not yet trust a couple of the proofs, and I believe the acceptance of this paper should be conditioned on confirming the correctness of these proofs.\n\n(1) Let's start with Lemma 10. In the middle equation block, we obtain a bound \n \\| alpha^prime \\|_p^p <= beta^p ( 1 + D/K )\nand the proof concludes alpha^prime is in Q. However this cannot be the case for all alpha^prime. \n\nConsider x=0 which is in S_{p, beta}^D, then we have alpha^prime = 0 as well. In the definition of Q, we require all the j's to sum up to K+D, which is not met here. \n\nAt the same time, the next claim \n \\| alpha \\|_2 <= D^{1/2 - 1/p} \\| alpha^prime \\|_p\ndoes not seem to follow from the above calculations. In particular, alpha^prime seems to be defined with respect to an x in S_{p, beta}, however in this case we did not specify such an x. Perhaps did you mean there exist such an alpha^prime?\n\n(2) In the proof of Theorem 3, there is an important inequality needed to complete the proof \n max{ <s, f_i> , <s, -f_i> } >= 1/2 * ( <s, [f_i]_+> + <s, [-f_i]_+> )\n\nPerhaps I am missing something obvious, but I believe this inequality fails when we choose s as a constant vector, and f_i to have the same number of positive and negative signs (which is possible in a Hadamard matrix). In this case, the left hand side should be equal to zero, where as the right hand side will be positive. \n\n###\n\nTo summarize, if these proofs can be confirmed, I believe this paper would have made significant contribution to the problem of over-parametrization in deep learning, and of course should be accepted. 
\n\n###\n\nI corrected several typos and found minor issues as I read; perhaps these will be useful for improving readability as well.\n\nPage 13, proof of Lemma 8\n - after the V_0 term is separated, there is a sup over \|V_0\|_F <= r in the expectation, which should be \|V-V_0\|_F <= r instead.\n\nPage 14, Lemma 9\n - the lemma did not define rho_{ij} in the statement\n\nPage 15, proof of Lemma 9\n - in equation (12), there is an x_y vector that should be x_t\n\nPage 15, proof of Theorem 1\n - while I eventually figured it out, it's unclear how Lemma 8 is applied here. Perhaps one more step identifying the exact matrices in the statement of Lemma 8 will be helpful to future readers, and maybe explain where the sqrt(2) factor comes from as well. \n\nPage 16, proof of Lemma 10\n - in the beginning of the proof, to stay consistent with the notation, we should replace S_{p, beta} with S_{p, beta}^D\n - I believe the cardinality of Q should be (K + D - 1) choose (D - 1), as we need to choose positive j's to sum up to (K+D) in the definition of Q. This reduces down to the problem of choosing natural numbers j's summing to K, which is (K+D-1) choose (D-1). Consider the Stack Exchange post here:\nhttps://math.stackexchange.com/questions/919676/the-number-of-integer-solutions-of-equations\n\nPage 16, proof and statement of Lemma 11\n - I believe in the first term, the factor should be m instead of sqrt(m). I think the mistake happened when applying the union bound, as it should only affect the term containing delta\n\nPage 17, Lemma 12\n - same as Lemma 11, we should have m instead of sqrt(m)\n\nPage 18, proof of Theorem 3\n - at the bottom, the statement \"F is orthogonal\" does not imply the norm is less than 1, but rather we should say \"F is orthonormal\"\n\nPage 19, proof of Theorem 3\n - at the top, \"we will omit the index epsilon\" should be \"xi\" instead\n - in the final equation block, we have the Rademacher complexity of F_{W_2}, instead it should be F_{W^prime}\n\n", "Thanks for your positive feedback and suggested improvements. \n\n1) We have not claimed in the paper that our bound decreases with the network size, but rather that it correlates with the test error, which is an empirical observation. To make this very clear, we have updated the abstract to emphasize that the correlation of the bound with the test error is for network sizes within the range reported in the experiments.\n \n2) Since the l_gamma loss is (\sqrt{2}/gamma)-Lipschitz, the Rademacher complexity of l_gamma o F is (\sqrt{2}/gamma) times the Rademacher complexity of F, so the important object for calculating the complexity measure is F, and our lower bound is given for F. We will clarify this confusion in the final version. \n", "Thanks a lot for reading our paper very carefully and helping us improve the readability and validity of the proofs with your suggestions. We are glad that you found our paper to be a significant contribution to the understanding of over-parameterization in deep learning. We have applied all your suggestions in the revision, which is uploaded on OpenReview. Here we clarify the two issues you raised regarding the proofs:\n\n1) Lemma 10: As you guessed, it is indeed the case that the precise way to state it is that “there exists such an \alpha’’”. This \alpha’’ can be constructed by simply increasing the value along the last dimension of \alpha’ to get the desired norm. We have updated the paper with the clarification.\n\n2) Theorem 3: You are right about the inequality in the proof of Theorem 3. 
This was a typo which can be fixed by replacing max{ <s, f_i> , <s, -f_i> } by max{ <s, [f_i]_+> , <s, [-f_i]_+> } on the left-hand side. And this is indeed the quantity we use in the later part of the proof. We have corrected this typo in the revision.\n\nGiven that we have resolved the two issues you raised, we respectfully ask you to increase the score to reflect the significance of this work on understanding the role of over-parameterization in neural networks. We thank you again for your valuable feedback.", "It is shown empirically that common algorithms used in supervised learning (SGD) yield networks for which such an upper bound decreases as the number of hidden units increases. This might explain why in some cases overparametrized models have better generalization properties.\n\nThis paper tackles the important question of why, in the context of supervised learning, overparametrized neural networks generalize better in practice. First, the concepts of \textit{capacity} and \textit{impact} of a hidden unit are introduced. Then, {\bf Theorem 1} provides an upper bound for the empirical Rademacher complexity of the class of 1-layer networks with hidden units of bounded \textit{capacity} and \textit{impact}. Next, {\bf Theorem 2}, which is the main result, presents a new upper bound for the generalization error of 1-layer networks. An empirical comparison with existing generalization bounds is made, and the presented bound is the only one that in practice decreases when the number of hidden units grows. Finally, {\bf Theorem 3} is presented, which provides a lower bound for the Rademacher complexity of a class of neural networks, and this bound is compared with existing lower bounds.\n\n## Strengths\n- The paper is theoretically sound, the statements of the theorems\n are clear, and the authors seem knowledgeable when bounding the\n generalization error via Rademacher complexity estimation.\n\n- The paper is readable and the notation is consistent throughout.\n\n- The experimental section is well described, provides enough empirical\n evidence for the claims made, and the plots are readable and well\n presented, although they are best viewed on a screen.\n\n- The appendix provides proofs for the theoretical claims in the\n paper. However, I cannot certify that they are correct.\n\n- The problem studied is not new, but to my knowledge the\n presented bounds are novel and the concepts of capacity and\n impact are new. Theorem 3 improves substantially over\n previous results.\n\n- The ideas presented in the paper might be useful for other researchers\n who could build upon them, and attempt to extend and generalize\n the results to different network architectures.\n\n- The authors acknowledge that there might be other reasons\n that could also explain the better generalization properties in the\n over-parameterized regime, and tone down their claims accordingly.\n\n## Weaknesses\n- The abstract reads \"Our capacity bound correlates with the behavior\n of test error with increasing network sizes ...\"; it should\n be pointed out that the actual bound increases with increasing\n network size (because of a sqrt(h/m) term), and that such a claim\n holds only in practice.\n\n- On page 8 (discussion following Theorem 3), the claim\n \"... all the previous capacity lower bounds for spectral\n norm bounded classes of neural networks (...) correspond to\n the Lipschitz constant of the network. Our lower bound strictly\n improves over this ...\", is not clear. 
Perhaps a more concise\n presentation of the argument is needed. In particular, it is not clear\n how a lower bound for the Rademacher complexity of F_W translates into a\n lower bound for the Rademacher complexity of l_\gamma F_W. This makes the claim of tightness of Theorem 1 unclear. It also makes\n the initial claim about the tightness of Theorem 2 unclear.\n" ]
[ 7, -1, -1, -1, 7, -1, -1, 7 ]
[ 3, -1, -1, -1, 5, -1, -1, 3 ]
[ "iclr_2019_BygfghAcYX", "iclr_2019_BygfghAcYX", "Sygwq4iZC7", "SkezHFcITm", "iclr_2019_BygfghAcYX", "rkeQQzTK3m", "SkezHFcITm", "iclr_2019_BygfghAcYX" ]
iclr_2019_BygqBiRcFQ
Diffusion Scattering Transforms on Graphs
Stability is a key aspect of data analysis. In many applications, the natural notion of stability is geometric, as illustrated for example in computer vision. Scattering transforms construct deep convolutional representations which are certified stable to input deformations. This stability to deformations can be interpreted as stability with respect to changes in the metric structure of the domain. In this work, we show that scattering transforms can be generalized to non-Euclidean domains using diffusion wavelets, while preserving a notion of stability with respect to metric changes in the domain, measured with diffusion maps. The resulting representation is stable to metric perturbations of the domain while being able to capture ''high-frequency'' information, akin to the Euclidean Scattering.
accepted-poster-papers
The paper gives an extension of the scattering transform to non-Euclidean domains by introducing scattering transforms on graphs using diffusion wavelet representations, and presents a stability analysis of such a representation under deformation of the underlying graph metric, defined in terms of graph diffusion. The reviewers' concerns are primarily with which type of graph is the primary consideration (small-world social networks or point-cloud submanifold samples) and with the experimental studies. Technical developments such as deformation in the proposed graph metric are motivated by sub-manifold scenarios in computer vision, and whether the development is well suited to the social networks in the experiments still needs further investigation. The authors give satisfactory answers to the reviewers’ questions. The reviewers unanimously recommend the paper for ICLR publication.
test
[ "Ske6Coy537", "HJllyFuFRm", "H1e0vqhd07", "SJlDUcn_R7", "B1glN93_0Q", "HkgFlmEW07", "Byl0YSBwaX", "HkgA_NSvaQ", "rJggb9NDaX", "HJx45ONA2X" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "The paper introduces an adaptation of the Scattering transform to signals defined on graphs\nby relying on multi-scale diffusion wavelets, and studies a notion of stability of this representation\nwith respect to changes in the graph structure with an appropriate diffusion metric.\n\nThe notion of stability in convolutional networks is an important one, and the proposed notion of stability\nwith respect to diffusion distances seems like an interesting and relevant way to extend this to signals on graphs.\nWith this goal in mind, the authors introduce a scattering transform on graphs by relying on diffusion wavelets,\nand provide an appropriate study of stability, which seems to highlight relevant properties of the graphs.\nThe proposed representation seems to provide benefits compared to the previous work of Zou & Lerman,\nparticularly regarding computational efficiency, as well as stability with respect to a metric that is perhaps more\nuseful, though there is a dependence on the graph topology through the spectral gaps.\nIn addition, the experiments on author attribution and source localization suggest that the\nresulting representation remains discriminative, in addition to providing stability to changes in graph structure.\n\nI find that these contributions provide an interesting advance in theoretical understanding of graph convolutional networks\nfrom a stability perspective, in addition to introducing a useful non-learned representation,\nand am thus in favor of acceptance.\n\nNevertheless, some parts of paper would benefit from further discussions and more clarity:\n\n- other than empirically, one aspect that's missing compared to the original study of the scattering transform is energy preservation. The authors could at least provide a discussion of whether such a property can be obtained here as well (does it depend on the spectral gap through C(beta)?)\n\n- what is the role of the spectral gap in the stability bounds? is this a drawback of the diffusion metric / choice of wavelets?\n\n- Section 3.2 suggests that metric stability is a good way to characterize stability, by seeing deformations in Euclidian domains as a change to the ground metric. Yet, in Euclidian scattering, the same representation is applied to a deformed signal and the original signal, and stability is measured with the Euclidian metric.\nCan the link be made more precise, by explaining what a deformation of a signal would be on a graph, or by applying arguments from the proposed construction to the Euclidian case?\n\n- the paper is heavy on terminology from wavelets and harmonic analysis, a more detailed presentation of diffusion wavelets and related concepts such as localization would be beneficial. Also, it seems like the chosen wavelets in the construction favor spatial over frequential localization - is this due to a trade-off? if so, can it be avoided?\n\n\nSome more detailed comments:\n- Section 2, 'generally far weaker': what is meant by 'weaker'?\n- Section 3.3:\n * 'calculus on T': T is used before being defined\n * clarify what norm is used (I assume operator norm?)\n * 'defines a distance', 'stronger than .. 
GH': this should probably be justified\n- Section 4:\n * 'optimal spatial localization', 'temporal difference', 'favoring spatial over frequential localization': these could be clarified\n * 'amplify the signal': what does this mean?\n * the sentence about the choice of the appropriate J is not clear and should be further clarified\n- Section 5.1:\n * the sentence about the choice pi/pi* = 1 should be clarified. Also, where is this assumption used?\n * epsilon_psi, epsilon_U should be defined\n * 'given that [..] by definition': this doesn't seem to be defined elsewhere\n * (16): isn't a factor m missing in the first term?", "We have uploaded the revised version of the manuscript.\n\nMajor changes include:\n\n- New discussion on the role of the spectral gap in the obtained bounds;\n- New discussion on the choice of diffusion metrics to measure support deformations;\n- Rewriting of the experimental section to highlight its objective of illustrating discriminative power;\n- New experiments testing the sensitivity of the representation difference to the spectral gap;\n- New discussion on the manifold/expander types of graphs;\n- Fixing of several typos in the proofs;\n- Addressing of all the comments raised by the reviewers.\n\nWe would like to thank, once more, all reviewers for taking the time to provide valuable feedback that has certainly improved the quality of the manuscript.", "REVIEW: “Overall, the recommendation is still in favor of acceptance. Finally, the reviewer would like to raise the following questions:\n\n- As the result is presented as an extension of the Euclidean scattering transform, will the stability result recover the traditional one in Mallat et al. when the underlying graph is a regular grid (though the definition of deformation differs in appearance)?”\n\nRESPONSE: Thank you very much for bringing up this very interesting aspect. The simple answer is no, because we currently do not have a notion of smoothness in the deformation field. In our case, we only see the deformation through its impact on the metric of the domain. In Euclidean scattering, on the other hand, one can relate the wavelet acting on signals at a specific scale to the regularity of the deformation field at that scale, and use a sophisticated result from harmonic analysis (the Cotlar-Stein quasi-orthogonality lemma) to bound the interactions across different scales. It is unclear to the authors whether such a result could be extended to non-Euclidean domains; this is currently part of our ongoing work on this subject.\n\nREVIEW: “- What about the computational efficiency, scalability and storage cost of the algorithm? It would also clarify the computational procedure by providing an algorithm box or release of code. However, assuming that the focus of the paper is on theory, computation is a relatively minor point.”\n\nRESPONSE: To compute any given coefficient, we have to compute U(rho Psi)^{k} x. Applying Psi is just weighting two neighborhoods, which, ignoring any kind of sparsity, entails a computational cost of JN^2. Applying U entails JN operations. The total cost is thus m(JN^2+JN), where m is the depth of the representation.\n\nREVIEW: “[1] Boscaini, D., Masci, J., Rodolà, E., & Bronstein, M. (2016). Learning shape correspondence with anisotropic convolutional neural networks. In Advances in Neural Information Processing Systems (pp. 3189-3197).”", "REVIEW: “- The deformation considered is proposed in terms of the graph metric perturbation, which appears to lack sufficient motivation. 
This is not apparent from the experimental results (How is the x-axis of the plots in Figure 1 related to \"perturbation\"?). Will such deformation reduce to the one considered for irregular meshes in computer vision applications, e.g. in [1]? The proposed model would be more convincing if the class of \"covered deformations\" could be clarified and the relevance to practices of non-Euclidean CNNs could be better addressed.”\n\nRESPONSE: Thank you for this important remark. The motivation behind the choice of graph metric perturbation is to define a notion that is intrinsic, i.e. not relying on any extrinsic Euclidean embedding of the domain. That said, the rationale behind the choice of diffusion metrics is primarily of a mathematical nature (tractability). With respect to the simulations, we intended to present graph perturbations that would be rooted in practical situations (edge failures -unfriending- in the case of the social network graph, and faulty graph modeling -limited available data to build the supporting graph- in the case of authorship attribution) that still illustrate the stability property of the proposed scattering transform. These perturbations are related to diffusion metrics in the sense that the possible diffusion paths are altered between the different underlying graphs that support the data.\n\nWith regard to the relationship between our diffusion-based deformation model and those in computer vision/graphics, we emphasize two points. (i) Our notion is purely intrinsic, so it captures the isometric invariances enjoyed by intrinsic shape/manifold representations (including some non-Euclidean CNN constructions). (ii) Our notion does not leverage the Euclidean ambient space to define smoothness of a deformation field, which could be used in irregular meshes/manifolds to control the impact of the deformation on scattering coefficients specifically at each scale and thus improve the stability bound.\n\nREVIEW: “- The experimental section lacks performance and comparison in a controlled environment, e.g., on synthetic data with more samples to show statistical significance. There also appears to be a gap between theory and experiment: can the dependence on spectral gap be empirically supported, even only qualitatively, e.g., by a comparison on small-world graphs vs. others?”\n\nRESPONSE: We would like to thank the reviewer for the suggestion. We have added to the simulation section a first experiment where we create synthetic small-world graphs and compute the scattering representation of random signals supported on these graphs. More specifically, we generate graphs with the same parameters p_SW (edge probability) and q_SW (rewiring probability) in order to fix a spectral gap (which is related to p_SW), and then compute the difference between the scattering representations of the same signal over these graphs. Then, we change p_SW to change the spectral gap. We observe that, indeed, as the spectral gap gets smaller (beta gets closer to 1), the representation difference grows larger, showing that graphs with a smaller spectral gap give rise to less stable representations. Please see this section for more details.", "REVIEW: “The paper introduces scattering transforms on graphs by adopting diffusion wavelet constructions, and gives an extension of the scattering transforms to non-Euclidean domains. 
The main result consists of a stability analysis of the non-adaptive representation under deformation of the underlying graph metric, also defined in terms of graph diffusion.”\n\nRESPONSE: We would like to thank the reviewer for taking the time to review this manuscript and providing valuable feedback.\n\nREVIEW: “Pros: \n\nThe study addresses the important problem of representation stability of non-Euclidean CNNs, which is a timely topic. The theoretical analysis builds an interesting connection between the diffusion graph geometry and the analysis of deep networks.\n\nCons: \n\n- It is unclear what type of graph is the primary consideration, either (a) expander/small-world with large spectral gap, e.g. social network, or (b) irregular mesh embedded in a Euclidean space or on an intrinsic manifold - as originally considered in diffusion wavelets, and in the Euclidean CNN/scattering transform theory - which often fails to present a spectral gap. The formula suggests (b) while the analysis and experiments point to (a). A coverage of both cases is unlikely.”\n\nRESPONSE: We thank the reviewer for bringing up this very fundamental point. We acknowledge that the provided stability results depend on the spectral gap, which makes their application dependent on the particular graph. As the reviewer points out, there are graphs that exhibit a large spectral gap while others exhibit a small spectral gap, thus making our stability bounds less effective.\n\nWe would like to remark that the only connected graphs that present no spectral gap are those that contain a non-trivial bipartite component (Chung, 1997). But since we are considering lazy diffusion (i.e., we use as a generator T = (I + A)/2), these degenerate cases are avoided. In particular, for regular graphs with self-loops, it is known that the spectral gap is lower bounded by (dn)^(-2), where d is the degree. That said, it is true that this gap is not uniform in the size of the graph, and the rate n^(-2) reflects the fact that as the domain grows, so does the number of effective scales. We emphasize that this situation reflects a limitation of our current bound, and that it is likely that one can improve it (in particular, in eq. 23) by leveraging regular degree structures (as in grids). We will work in this direction and keep the reviewer updated. \n\nThe distinction between these two extremal regimes (expanders vs. manifolds) is also fundamental when it comes to our domain deformation model. Whereas in the former the natural notion of deformation is necessarily intrinsic (and we chose to use diffusion distances, but other intrinsic notions are possible), in the latter the deformations can be ‘inherited’ from the extrinsic space, and typically one can get a finer analysis using Euclidean deformations. For instance, in our framework we have not defined the ‘smoothness’ of a deformation, which plays a crucial role in Euclidean scattering to avoid a deformation bound that degrades with the bandwidth of the signal. This is an important direction for future work.\n\nWe have added relevant comments after eq. (3) and before Proposition 4.1.\n", "The paper introduces scattering transforms on graphs by adopting diffusion wavelet constructions, and gives an extension of the scattering transforms to non-Euclidean domains. 
The main result consists of a stability analysis of the non-adaptive representation under deformation of the underlying graph metric, also defined in terms of graph diffusion.\n\nPros: \n\nThe study addresses the important problem of representation stability of non-Euclidean CNNs, which is a timely topic. The theoretical analysis builds an interesting connection between the diffusion graph geometry and the analysis of deep networks.\n\nCons: \n\n- It is unclear what type of graph is the primary consideration, either (a) expander/small-world with large spectral gap, e.g. social network, or (b) irregular mesh embedded in a Euclidean space or on an intrinsic manifold - as originally considered in diffusion wavelets, and in the Euclidean CNN/scattering transform theory - which often fails to present a spectral gap. The formula suggests (b) while the analysis and experiments point to (a). A coverage of both cases is unlikely. \n\n- The deformation considered is proposed in terms of the graph metric perturbation, which appears to lack sufficient motivation. This is not apparent from the experimental results (How is the x-axis of the plots in Figure 1 related to \"perturbation\"?). Will such deformation reduce to the one considered for irregular meshes in computer vision applications, e.g. in [1]? The proposed model would be more convincing if the class of \"covered deformations\" could be clarified and the relevance to practices of non-Euclidean CNNs could be better addressed.\n\n- The experimental section lacks performance and comparison in a controlled environment, e.g., on synthetic data with more samples to show statistical significance. There also appears to be a gap between theory and experiment: can the dependence on spectral gap be empirically supported, even only qualitatively, e.g., by a comparison on small-world graphs vs. others?\n\nOverall, the recommendation is still in favor of acceptance. Finally, the reviewer would like to raise the following questions:\n\n- As the result is presented as an extension of the Euclidean scattering transform, will the stability result recover the traditional one in Mallat et al. when the underlying graph is a regular grid (though the definition of deformation differs in appearance)?\n\n- What about the computational efficiency, scalability and storage cost of the algorithm? It would also clarify the computational procedure by providing an algorithm box or release of code. However, assuming that the focus of the paper is on theory, computation is a relatively minor point.\n\n[1] Boscaini, D., Masci, J., Rodolà, E., & Bronstein, M. (2016). Learning shape correspondence with anisotropic convolutional neural networks. In Advances in Neural Information Processing Systems (pp. 3189-3197).\n", "REVIEW: “- Section 3.2 suggests that metric stability is a good way to characterize stability, by seeing deformations in Euclidean domains as a change to the ground metric. Yet, in Euclidean scattering, the same representation is applied to a deformed signal and the original signal, and stability is measured with the Euclidean metric. Can the link be made more precise, by explaining what a deformation of a signal would be on a graph, or by applying arguments from the proposed construction to the Euclidean case?”\n\nRESPONSE: Thank you very much for raising this very interesting point. The setting of the papers by Mallat (2012) and Bruna and Mallat (2013) is a continuous one, where an image is described as a map from R^2 to R. 
In this sense, a deformation can be understood as a change of variables, or as a change of the underlying support. Nevertheless, the definition of deformation in that work does not intrinsically alter R^2 because it is still a continuous space. As an example, we can easily think of a deformation of a continuous image, but if we think of a digital image, we need to go through a process of resampling to model the deformed signal. This is due exclusively to the regular nature of the image grid and the existence of a sampling theorem between the continuous and the discrete spaces.\n\nIn the case of graphs, such a resampling method is not unique, and therefore we cannot use the same notion of continuous deformation that is used in Bruna and Mallat (2013). We therefore opt to define a deformation in terms of the underlying discrete support of the signal under study. In the case of an image, this means that deforming an image essentially alters the distance between pixels (determined by the sampling mechanism) and therefore alters the underlying grid structure itself. More precisely, if in the original image all pixels are at a distance of “1” (normalized by the resolution of the image), then in a deformed version of the image we are actually altering this distance, which is analogous to altering the edges of the underlying grid. This understanding of deformation as a change in the underlying support is what motivates our definition of deformation of a graph.\n\nIn other words, in the original Euclidean scattering, the data domain is fixed (hence the wavelet decomposition is fixed) and deformation is studied by measuring how the wavelet decomposition of x’(u) = x(u - tau(u)) varies with tau. Observe, however, that \nx’ * psi(u) = int x’(v) psi(u-v) dv = int x(v - tau(v)) psi(u-v) dv = int x(v’) psi(u-v’+beta(v’)) |det(I - Dtau(beta(v’)))|^(-1) dv’, which is a perturbed convolution of the same signal x on a deformed metric domain. \n\nWe have added a comment on this intrinsic difference between deformations in the Euclidean space and in general graphs at the beginning of section 3.3 to motivate our choice of graph deformation.\n\nREVIEW: “- the paper is heavy on terminology from wavelets and harmonic analysis, a more detailed presentation of diffusion wavelets and related concepts such as localization would be beneficial. Also, it seems like the chosen wavelets in the construction favor spatial over frequential localization - is this due to a trade-off? if so, can it be avoided?”\n\nRESPONSE: We apologize for the lack of adequate coverage of signal-processing-related concepts such as space-frequency duality. We have reworded the presentation of diffusion wavelets in the hope of making it clearer. In general, one would require as many filter taps as the size of the graph to be able to perfectly filter each individual frequency component. Nevertheless, as with the Euclidean case, this leads to instability (very narrow frequency filters imply that a small change in the frequency content leads to a huge change in the filtering output).\n\nThe diffusion wavelets, indeed, favor spatial localization over frequency localization. They can be constructed with just two spatial coefficients. 
This can indeed be avoided with more general graph wavelets, and this is the subject of our current work, where we obtained stability bounds with respect to the number of filter taps (spatial localization) used to implement the graph wavelets.\n\nREVIEW: “Some more detailed comments (...)”\n\nRESPONSE: We have carefully addressed all these comments in the revised manuscript.\n\nOnce again, we would like to thank the reviewer for the time and effort spent in providing detailed feedback that has surely helped improve the quality of our work.", "REVIEW: “The paper introduces an adaptation of the Scattering transform to signals defined on graphs by relying on multi-scale diffusion wavelets, and studies a notion of stability of this representation with respect to changes in the graph structure with an appropriate diffusion metric.\n\nThe notion of stability in convolutional networks is an important one, and the proposed notion of stability with respect to diffusion distances seems like an interesting and relevant way to extend this to signals on graphs. With this goal in mind, the authors introduce a scattering transform on graphs by relying on diffusion wavelets, and provide an appropriate study of stability, which seems to highlight relevant properties of the graphs. The proposed representation seems to provide benefits compared to the previous work of Zou & Lerman, particularly regarding computational efficiency, as well as stability with respect to a metric that is perhaps more useful, though there is a dependence on the graph topology through the spectral gaps.\n\nIn addition, the experiments on author attribution and source localization suggest that the resulting representation remains discriminative, in addition to providing stability to changes in graph structure.\n\nI find that these contributions provide an interesting advance in theoretical understanding of graph convolutional networks from a stability perspective, in addition to introducing a useful non-learned representation, and am thus in favor of acceptance.”\n\nRESPONSE: We thank the reviewer for the time spent in reviewing this manuscript and offering valuable insights to improve our work.\n\n*Due to the character limitation, and for a clear and detailed point-by-point answer to all the very interesting comments raised by the reviewer, we split these responses into two comments. Thank you for your understanding.*\n\nREVIEW: “Nevertheless, some parts of the paper would benefit from further discussions and more clarity:\n\n- other than empirically, one aspect that's missing compared to the original study of the scattering transform is energy preservation. The authors could at least provide a discussion of whether such a property can be obtained here as well (does it depend on the spectral gap through C(beta)?)”\n\nRESPONSE: Thank you for bringing up this important point. Energy preservation in scattering representations requires a unitary wavelet decomposition. In the original scattering paper (Mallat, 2012), it was proved for a restricted class of unitary wavelets in R^d, and later extended to more general families of analytic wavelets (Wiatowski, Grohs and Bölcskei, 2017). Extending energy preservation to graph scattering thus amounts to constructing graph wavelets that are unitary and analytic. 
Whereas constructing unitary wavelets on graphs is easily achieved (for instance, designing the wavelets in the spectral domain), the analytic property in general graphs requires a notion of frequency ‘pairings’ (analogous to how one associates sine and cosine in a Euclidean domain). This is a fascinating direction of future research. We note that our diffusion wavelets are neither unitary nor analytic. However, as you correctly observed, the frame bounds do depend on the domain via the spectral gap. The lower bound is (1-beta)^2, so the smaller 1-beta is, the less “unitary” our diffusion wavelets are. \n\nWe have mentioned this issue after Proposition 4.1, which establishes bounds on the energy change when applying the graph diffusion wavelet bank.\n\nREVIEW: “- what is the role of the spectral gap in the stability bounds? is this a drawback of the diffusion metric / choice of wavelets?”\n\nRESPONSE: The spectral gap is tightly linked with diffusion processes on graphs, and thus it does emerge from the choice of a diffusion metric. Graphs with values of beta closer to 1 exhibit weaker diffusion paths, and thus a small perturbation on the edges of these graphs would lead to a larger diffusion distance. The converse holds as well. In other words, the tolerance of the graph to edge perturbations (i.e., d(G,G’) being small) depends on the spectral gap of the graph. Another interpretation of the spectral gap is as the ‘bandwidth’ of the domain. The larger the gap (thus the smaller beta), the faster the domain diffusion blurs an arbitrary input signal, therefore limiting our ability to discriminate high-frequency information. A byproduct of this reduced bandwidth (small beta) is improved stability bounds.\n\nWe have added a remark about this aspect at the end of section 5.\n", "REVIEW: “The paper presents an interesting and new analysis of stability of scattering transforms on graphs, when the domain (graph) is perturbed by deformations. It combines key ingredients of scattering transforms (extended here to graphs through graph diffusion wavelets), deformation of graphs (based on graph diffusion distances) and a theoretical stability analysis. Similarly to the Euclidean domain, it is shown that linear filters cannot provide representations that are simultaneously rich and stable.”\n\nRESPONSE: Thank you very much for the time spent in reviewing this manuscript and the valuable insights provided.\n\nREVIEW: “Generally, the paper is pretty complete, interesting and sufficiently well presented. One might wonder if the choice of the diffusion framework for both the representation construction and the deformation analysis is a simplistic choice, and how similar ideas could extend to different domain deformations, for example.”\n\nRESPONSE: We thank the reviewer for bringing up this point. We would like to let the reviewer know that the choice of the diffusion distance and, consequently, diffusion wavelets is rooted in mathematical tractability. Nevertheless, the general framework for the proof is extensible to any other domain or deformation measure that offers bounds similar to those in Lemmas 5.1 and 5.2. In fact, we are currently working on extensions to general wavelet graph filters and Gromov-Hausdorff distances, as well as continuous domains.\n\nWe have added comments on this after Def. 3.1, where the diffusion distance is introduced, and in the conclusions to highlight future work.\n\nREVIEW: “The experiments and comparisons are also very minimal, and hard to interpret. 
Comparing things only with GFT and SVM is probably an 'easy' choice, with the advent of a plethora of new graph convnet architectures (GFT is probably not a 'graph baseline', as mentioned in the conclusion).”\n\nRESPONSE: First and foremost, we would like to apologize for the statements included in the numerical section that might have accidentally misrepresented the claims made in that section. We consider this work to be of theoretical value, showing stability of deep learning constructions on graphs. In this sense, the objective of the simulation section is also to illustrate the ability of the graph scattering transforms to obtain meaningful representations. By no means is it our intention to claim that we perform better than the state of the art, and we apologize for any such misunderstanding that might have arisen from our writing in that section.\n\nThe choice of comparison methods is guided by the following rationale. We know that linear methods are unstable under deformations, and we propose an alternative representation that we prove is stable. Therefore, for the numerical sections, we chose two widely used linear models: one data-based (SVM) and one structure-based (GFT), with the aim of showing that there is no loss in representation capability from using a stable representation. In other words, our main goal in this section is to show that we can do as well as linear models, while guaranteeing stability. As a byproduct, we observed that the proposed representation outperforms these linear models, but this by no means implies that this is the best possible solution for these problems.\n\nWith respect to comparing with graph convnets, we find that the methods are conceptually different since graph scattering representations do not involve any training. In this respect, we could compare with graph convnets with a minimal training set of one (or zero), in which case scattering transforms would do better, or we could compare with graph convnets for the case in which we have a large training set, in which case graph convnets will surely fare better. Therefore, we believe these methods are not comparable. Finally, we note that the stability analysis that we carried out for diffusion scattering also applies to graph convnets (see corollary 5.3). In this case, our stability bounds depend on the trainable parameters, and are generally not tight. Similarly to the case of images, and as illustrated by adversarial examples, there is an underlying tradeoff between discriminability and stability.\n\nAll in all, we have completely rewritten the simulation section placing emphasis on the illustrative nature of these examples, and have carefully worded every claim to avoid any misunderstanding.\n\nREVIEW: “This is, however, an interesting work that will likely generate exciting discussions at ICLR.”\n\nRESPONSE: We would like to thank the reviewer again for their time and effort spent in providing valuable feedback on our work.", "The paper presents an interesting and new analysis of stability of scattering transforms on graphs, when the domain (graph) is perturbed by deformations. It combines key ingredients of scattering transforms (extended here to graphs through graph diffusion wavelets), deformation of graphs (based on graph diffusion distances) and a theoretical stability analysis. Similarly to the Euclidean domain, it is shown that linear filters cannot provide representations that are simultaneously rich and stable. 
\n\nGenerally, the paper is pretty complete, interesting and sufficiently well presented. One might wonder if the choice of the diffusion framework for both the representation construction and the deformation analysis is a simplistic choice, and how similar ideas could extend to different domain deformations, for example. The experiments and comparisons are also very minimal, and hard to interpret. Comparing things only with GFT and SVM is probably an 'easy' choice, with the advent of a plethora of new graph convnet architectures (GFT is probably not a 'graph baseline', as mentioned in the conclusion). \n\nThis is, however, an interesting work that will likely generate exciting discussions at ICLR. " ]
[ 7, -1, -1, -1, -1, 6, -1, -1, -1, 9 ]
[ 3, -1, -1, -1, -1, 4, -1, -1, -1, 5 ]
[ "iclr_2019_BygqBiRcFQ", "iclr_2019_BygqBiRcFQ", "SJlDUcn_R7", "B1glN93_0Q", "HkgFlmEW07", "iclr_2019_BygqBiRcFQ", "HkgA_NSvaQ", "Ske6Coy537", "HJx45ONA2X", "iclr_2019_BygqBiRcFQ" ]
iclr_2019_Byl8BnRcYm
Capsule Graph Neural Network
The high-quality node embeddings learned from the Graph Neural Networks (GNNs) have been applied to a wide range of node-based applications and some of them have achieved state-of-the-art (SOTA) performance. However, when applying node embeddings learned from GNNs to generate graph embeddings, the scalar node representation may not suffice to preserve the node/graph properties efficiently, resulting in sub-optimal graph embeddings. Inspired by the Capsule Neural Network (CapsNet), we propose the Capsule Graph Neural Network (CapsGNN), which adopts the concept of capsules to address the weakness in existing GNN-based graph embedding algorithms. By extracting node features in the form of capsules, the routing mechanism can be utilized to capture important information at the graph level. As a result, our model generates multiple embeddings for each graph to capture graph properties from different aspects. The attention module incorporated in CapsGNN is used to tackle graphs of various sizes, which also enables the model to focus on critical parts of the graphs. Our extensive evaluations with 10 graph-structured datasets demonstrate that CapsGNN has a powerful mechanism that operates to capture macroscopic properties of the whole graph in a data-driven manner. It outperforms other SOTA techniques on several graph classification tasks, by virtue of the new instrument.
accepted-poster-papers
AR1 asks for a clear experimental evaluation showing that capsules and dynamic routing help in the GCN setting. After rebuttal, AR1 seems satisfied that routing in CapsGNN might help generate 'more representative graph embeddings from different aspects'. AC strongly encourages the authors to improve the discussion on these 'different aspects' as currently it feels vague. AR2 is initially concerned about experimental evaluations and whether the attention mechanism works as expected, though he/she is happy with the revised experiments. AR3 would like to see all biological datasets included in experiments. He/she is also concerned about CapsGNN's lack of ability to preserve fine structures. The authors leave this aspect of their approach for future work. On balance, all reviewers felt that this is a borderline paper. After going through all questions and responses, AC sees that many requests about aspects of the proposed method have not been clarified by the authors. However, reviewers note that the authors provided more evaluations/visualisations etc. The reviewers expressed hope (numerous times) that this initial attempt to introduce capsules into GCN will result in future developments and improvements. While AC thinks this is an overoptimistic view, AC will give the authors the benefit of the doubt and will advocate a weak accept. The authors are asked to incorporate all modifications requested by the reviewers. Moreover, 'Graph capsule convolutional neural networks' is not a mere arXiv work. It is an ICML workshop paper. Kindly check all arXiv references and update with the actual conference venues.
train
[ "ByxrZRED3m", "S1gMPXrcn7", "rJlwCtK5nm" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper fuses Capsule Networks with Graph Neural Networks. The idea seems technically correct and is well-written. With 13 pages the paper seems really long. Moreover, the experimental part seems to be too short. So, the theoretical and experimental part is not well-balanced.\n\nMinor concerns/ notes to the authors:\n1.\tPage 1: The abbreviation GNN is used before it is defined.\n2.\tPage 2: I guess there is a mistake in your indices. Capital N == n or?\n3.\tPage 4: What is \\mathbf{I}? I guess you mean the identity matrix.\n4.\tPage 4: Could you define/describe C_all?\n5.\tPage 5: Can you describe how you perform the coordinate addition or add a reference?\n6.\tPage 6: The idea to use reconstruction as regularization method is not new. May you can add a respective reference?\n7.\tPage 8: The abbreviations in your result tables are confusing. They are not aligned with the text. For example, what is Caps-CNN for a model?\n\nMy major concern is about your experimental evaluation. Under a first look the result tables looking great. But that’s due to fact, that you marked the two best values in bold type. To be more precise, the method WL is in the most cases better than your proposed method. This makes me wondering if there is a real improvement by your method. It would be easier to decide if you would present the training/inference times and the number of parameters. By having that, I could relate your results regarding an accuracy-complexity tradeoff. Moreover, your t-SNE and attention visualizations are not convincing. As you may know, the output of a t-SNE strongly dependents on the chosen hyper-parameters like the perplexity, etc. You not mentioned the setting of these values. Additionally, it is hard to decide if your embeddings are good or not because you are not presenting a baseline or referencing a respective work. You are complaining that this is due to the space restrictions. But you have unlimited capacity in the appendix. So please provide some clarifying plots. Finally, I’m also not convinced that your attention mechanism works as expected. It’s again due to missing baseline results and/or a reference. If it’s not possible to add one of them, you could perform an easy experiment where you freeze your fully-connected layers of the attention module to fixed values (maybe such that it performs just an averaging) and repeat your experiments. In case your attention module works as expected you should observe a real change in terms of accuracy and in your visualizations too.\nYou could also think about to publish your code or present further results/plots in a separate blog. \n\nUpdate:\nAccording to the revised version which addresses a lot of my concerns, I vote for marginally above acceptance threshold.\n", "The authors provide an architecture that applies recent advances in the field of capsule networks in the graph neural network domain. First, hierarchical node level capsules are extracted using GCN layers. Second, after weighting each capsule by the output of a proposed attention module, graph level capsules are computed by performing a global dynamic routing. These graph level capsules are used for training a capsule classifier using a margin loss and a reconstruction loss.\n\nThe general architecture seems to be a reasonable application of the capsule principle in the graph domain, following the proof of concept MNIST architecture proposed by Sabour et al.\n\nMy main concern is that I have problems grasping the motivation behind using capsules in the given scenario. 
Besides an imprecise motivation in the introduction, there is no clear reason why the routing mechanism helps with solving the given tasks. Capsule networks capture pose covariances by applying a linear, trainable transformation to pose vectors and computing the agreement of the resulting votes. It is not clear to me how discrete information like graph connectivity can be encoded in a pose vector so that linear transformations are able to match different \"connectivity poses\".\n\nIs there a more formal argument that explains why capsules should be able to capture more information about the input graph than other GCNNs?\n\nAlso, some design choices seem to be quite arbitrary. One example is using the last feature maps of the GCN as positions for coordinate addition. Is there a theoretical/intuitive motivation for this?\n\nResults for the given experiments show improvement on some graphs. However, the authors proposed several concepts: a global pooling method using dynamic routing, an attention mechanism, a novel reconstruction loss, interpreting deep node embeddings as spatial positions. It is not clear to what extent the individual aspects of the method contribute to the gains. The qualitative capsule embedding analysis is interesting. However, this part needs a comparison to standard global graph embeddings to see if there is a significant difference.\n\nIn my opinion, the paper needs:\n1) a clear experimental evaluation showing that capsules and the dynamic routing lead to improved results (i.e. by providing an ablation study to show which gains result from the attention-based global pooling mechanism, the reconstruction loss, the dynamic routing and from the coordinate addition), or\n2) a more precise motivation for the use of dynamic routing to capture correlation between pose vectors in graphs in general (i.e. formal arguments for why the method is stronger in capturing statistics or for what types of graphs it provides more discriminative power).\n\nOverall, the paper does not convince me that capsules and dynamic routing provide advantages if used like the authors propose. Therefore, I tend towards rejecting the paper as long as points 1) and 2) are not addressed properly.\n\n\nMinor remarks:\n\n- There are quite a lot of grammatical errors (especially missing articles).\n\n--------------------------\nUpdate:\nThe authors addressed some of the weak points mentioned above adequately. The experimental evaluation was significantly improved and the results are a nice contribution. However, the limited theoretical contribution and the poor motivation of capsules in the graph context remain weak points. I have updated my rating accordingly.", "This paper is written with good quality and clarity. The idea is original, and the experimental results show that the proposed CapsGNN is effective in large graph data analysis, particularly on graphs with macroscopic properties.\n\nPros:\n\n1) The paper makes a clear and detailed comparison between the proposed CapsGNN and the related models in section 3.2.\n\n2) The use of capsule nets and routing in CapsGNN is close to that in the original CapsNet, with the core characteristics (and potential advantages) of capsules and dynamic routing being preserved in the proposed CapsGNN to handle the targeted problem. 
\n\n3) The comparison and model analysis are thorough and comprehensive.\n\nCons or unclear points:\n\n1) Why does the paper not include all biological datasets (6 datasets in total; only 4 are used in this paper) presented in (Verma & Zhang, 2018) in the experiment section? The experiments in Verma & Zhang (2018) show that GCAPS-CNN achieved SOTA results on nearly all biological datasets. Does GCAPS-CNN outperform CapsGNN on biological datasets? It would be nice to have a comparison on more datasets and more analysis of CapsGNN versus GCAPS-CNN.\n\n2) Why is CapsGNN not suitable for preserving information about fine structures? Can the authors give more explanation and discussion?\n" ]
[ 6, 6, 6 ]
[ 4, 4, 4 ]
[ "iclr_2019_Byl8BnRcYm", "iclr_2019_Byl8BnRcYm", "iclr_2019_Byl8BnRcYm" ]
iclr_2019_BylBr3C9K7
Energy-Constrained Compression for Deep Neural Networks via Weighted Sparse Projection and Layer Input Masking
Deep Neural Networks (DNNs) are increasingly deployed in highly energy-constrained environments such as autonomous drones and wearable devices while at the same time must operate in real-time. Therefore, reducing the energy consumption has become a major design consideration in DNN training. This paper proposes the first end-to-end DNN training framework that provides quantitative energy consumption guarantees via weighted sparse projection and input masking. The key idea is to formulate the DNN training as an optimization problem in which the energy budget imposes a previously unconsidered optimization constraint. We integrate the quantitative DNN energy estimation into the DNN training process to assist the constrained optimization. We prove that an approximate algorithm can be used to efficiently solve the optimization problem. Compared to the best prior energy-saving techniques, our framework trains DNNs that provide higher accuracies under same or lower energy budgets.
accepted-poster-papers
All of the reviewers agree that this is a well-written paper with the novel perspective of minimizing energy consumption in neural networks, as opposed to maximizing sparsity, which does not always correlate with energy cost. There are a number of promised clarifications and additional results that have emerged from the discussion that should be put into the final draft: namely, describing the overhead of converting from sparse to dense representations, adding the ImageNet sparsity results, and adding the time taken to run the projection step.
val
[ "ByeTwU9O3m", "SkgP-w3aRQ", "r1xncHxi2X", "Sylap5jTAX", "ryxef2id07", "H1g_J-vO6m", "rJlG0gwdpQ", "H1em0nLOTm", "rygIrxvOpm", "ByxoQev_T7", "BJgr8CIOpX", "ByeuQTS92Q" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "The paper proposes a method for neural network training under a hard energy constraint (i.e. the method guarantees the energy consumption to be upper bounded). Based on a systolic array hardware architecture the authors model the energy consumption of transferring the weights and activations into different levels of memory (DRAM, Cache, register file) during inference. The energy consumption is therefore determined by the number of nonzero elements in the weight and activation tensors. To minimize the network loss under an energy constraint, the authors develop a training framework including a novel greedy algorithm to compute the projection of the weight tensors to the energy constraint.\n\nPros:\n\nThe proposed method allows to accurately impose an energy constraint (in terms of the proposed model), in contrast to previous methods, and also yields a higher accuracy than these on some data sets. The proposed solution seems sound (although I did not check the proofs in detail, and I am not very familiar with hardware energy consumption subtleties).\n\nQuestions:\n\nThe experiments in Sec. 6.2 suggest that the activation mask is mainly beneficial when the data is highly structured. How are the benefits (in terms of weight and activation sparsity) composed in the experiments on Imagenet? How does the weight sparsity of the the proposed method compare to the related methods in these experiments? Is weight sparsity in these cases a good proxy for energy consumption?\n\nHow does the activation sparsity (decay) parameter (\\delta) q affect the accuracy-energy consumption tradeoff for the two data sets?\n\nThe authors show that the weight projection problem can be solved efficiently. How does the guarantee translate into wall-clock time?\n\nFilter pruning methods [1,2] reduce both the size of the weight and activation tensors, while not requiring to solve a complicated projection problem or introducing activation masks. It would be good to compare to these methods, or at least comment on the gains to be expected under the proposed energy consumption model.\n\nKnowledge distillation has previously been observed to be quite helpful when constraining neural network weights to be quantized and/or sparse, see [3,4,5]. It might be worth mentioning this.\n\nMinor comments:\n- Sec. 3.4. 1st paragraph: subscript -> superscript\n- Sec. 6.2 first paragraph: pattens -> patterns, aliened -> aligned\n\n[1] He, Y., Zhang, X., & Sun, J. (2017). Channel pruning for accelerating very deep neural networks. ICCV 2017.\n[2] Li, H., Kadav, A., Durdanovic, I., Samet, H., & Graf, H. P. Pruning filters for efficient convnets. ICLR 2017.\n[3] Mishra, A., & Marr, D. Apprentice: Using knowledge distillation techniques to improve low-precision network accuracy. ICLR 2018.\n[4] Tschannen, M., Khanna, A., & Anandkumar, A. StrassenNets: Deep learning with a multiplication budget. ICML 2018.\n[5] Zhuang, B., Shen, C., Tan, M., Liu, L., & Reid, I. Towards effective low-bitwidth convolutional neural networks. CVPR 2018.", "Thank you for the detailed response. I think the paper became more convincing and I will adapt my rating.", "This paper describes a procedure for training neural networks via an explicit constraint on the energy budget, as opposed to pruning the model size as commonly done with standard compression methods. Comparative results are shown on a few data sets where the proposed method outperforms multiple different approaches. 
Overall, the concept is interesting and certainly could prove valuable in resource-constrained environments. Still, I retain some reservations as detailed below.\n\nMy first concern is that this paper exceeds the recommended 8 page limit for reasons that are seemingly quite unnecessary. There are no large, essential figures/tables, and nearly the first 6 pages are just introduction and background material. Likewise the paper consumes a considerable amount of space presenting technical results related to knapsack problems and various epsilon-accurate solutions, but this theoretical content seems somewhat irrelevant and distracting since it is not directly related to the greedy approximation strategy actually used for practical deployment. Much of this material could have been moved to the supplementary so as to adhere to the 8 page soft limit. Per the ICLR reviewer instructions, papers deemed unnecessarily long relative to this length should be judged more critically.\n\nAnother issue relates to the use of a mask for controlling the sparsity of network inputs. Although not acknowledged, similar techniques are already used to prune the activations of deep networks for compression. In particular, various forms of variational dropout essentially use multiplicative weights to remove the influence of activations and/or other network components similar to the mask M used in this work. Representative examples include Neklyudov et al., \"Structured Bayesian Pruning via Log-Normal Multiplicative Noise,\" NIPS 2017 and Louizos et al., \"Bayesian Compression for Deep Learning,\" NIPS 2017, but there are many other related alternatives using some form of trainable gate or mask, possibly stochastic, to effect pruning (the major ML and CV conferences over the past year have numerous related compression papers). So I don't consider this aspect of the paper to be new in any significant way.\n\nMoreover, for the empirical comparisons it would be better to compare against state-of-the-art compression methods as opposed to just the stated MP and SSL methods from 2015 and 2016 respectively. Despite claims to the contrary on page 9, I would not consider these to be state-of-the-art methods at this point.\n\nAnother comment I have regarding the experiments is that hyperparameters and the use of knowledge distillation were potentially tuned for the proposed method and then simultaneously applied to the competing algorithms for the sake of head-to-head comparison. But to me, if these enhancements are to be included at all, tuning must be done carefully and independently for each algorithm. Was this actually done? Moreover, it would have been nice to see results without the confounding influence of distillation to isolate sources of improvement, but no ablation studies were presented.\n\nFinally, regarding the content in Section 5, the paper carefully presents an explicit bound on energy that ultimately leads to a constraint that is NP-hard just to project onto, although approximate solutions exist that depend on some error tolerance. However, even this requires an algorithm that is dismissed as \"complicated.\" Instead, a greedy alternative is derived in the Appendix which presumably serves as the final endorsed approach. But at this point it is no longer clear to me exactly what performance guarantees remain with respect to the energy bound. Theorem 3 presents a fairly inscrutable bound, and it is not at all transparent how to interpret this in any practical sense.
Note that after Theorem 3, conditions are described whereby an optimal projection can be obtained, but these seem highly nuanced, and unlikely to apply in most cases.\n\nAdditionally, it would appear that crude bounds on the energy could also be introduced by simply penalizing/constraining the sparsity on each layer, which leads to a much simpler projection step. For example, a simple affine function of the L0 norm would be much easier to optimize and could serve as a loose bound on the energy, given that the latter should be a non-decreasing function of the L0 norm. Any idea how such a bound compares to those presented given all the approximations and greedy steps that must be included?\n\n\nOther comments:\n- As an implementation heuristic, the proposed Algorithm 1 gradually decays the parameter q, which controls the sparsity of the mask M. But this will certainly alter the energy budget, and I wonder how important it is to employ a complex energy constraint if minimization requires this type of heuristic.\n\n- I did not see where the quantity L(M,W) embedded in eq. (17) was formally defined, although I can guess what it is.\n\n- In general it is somewhat troublesome that, on top of a complex, non-convex deep network energy function, just the small subproblem required for projecting onto the energy constraint is NP-hard. Even if approximations are possible, I wonder if this extra complexity is always worth it relative to simple sparsity-based compression methods which can be efficiently implemented with exactly closed-form projections.\n\n- In Table 1, the proposed method is highlighted as having the smallest accuracy drop on SqueezeNet. But this is not true: EAP is lower. Likewise on AlexNet, NetAdapt has an equally optimal energy.", "The authors provided reasonable clarifications, so I will bump up my score.", "I would like to thank the authors for the response that clarifies my questions. I would suggest adding several lines describing the overhead of packing and unpacking the sparse representation in the final revision of the paper. I agree with the authors that methods from Louizos et al., NIPS'17 and Neklyudov et al., NIPS'17 are quite orthogonal to the method considered in the paper. Nevertheless, these methods are strong baselines and improving on them is a good indicator of the significance of the proposed method of pruning input channels.", "We very much appreciate your careful review. We clarify the questions point by point below and plan to clear up some confusion in our revision to improve the clarity.\n\n> “My first concern is that this paper exceeds the recommended 8 page limit…”\n\nWe think you are referring to Section 3.\nIn Section 3, we show how the energy of a DNN inference is analytically modelled. We want to include these details in the paper because they form the final energy constraint proposed in problem (18).
In the revised version, we will take your suggestion to reduce the number of pages to 8 by condensing this section and moving the details of energy estimation into the Appendix.\n\n\n> “Likewise the paper consumes a considerable amount of space presenting technical results related to knapsack problems and various epsilon-accurate solutions, but this theoretical content seems somewhat irrelevant and distracting since it is not directly related to the greedy approximation strategy actually used for practical deployment. Much of this material could have been moved to the supplementary so as to adhere to the 8 page soft limit.”\n\nThanks for your suggestion on how to reduce the length to 8 pages. Please allow us to clarify the logic of our theorems first. All three algorithms are related to how to solve the key projection step in (22). Theorem 1 shows that it is NP-hard in general to find the exact optimal solution of the projection problem in (22), since it is equivalent to a 0/1 knapsack problem. Theorem 2 shows the optimal computational complexity of finding an epsilon-*approximate* solution by utilizing the structure of the projection problem. Theorem 3 shows that the proposed greedy algorithm (weighted projection algorithm) can achieve a reasonable precision efficiently. We feel that these theorems are useful in that they help understand the difficulty and the complexity of solving (22). We will consider moving Theorem 2 to the supplement as the first priority to shrink the length of this paper.\n\n\n> “Prior work also uses a mask for controlling the sparsity of network inputs, such as \"Structured Bayesian Pruning via Log-Normal Multiplicative Noise,\" NIPS 2017 and Louizos et al., \"Bayesian Compression for Deep Learning,\" NIPS 2017. How do you compare with them?”\n\nWe agree that there is prior work that uses a mask to prune the network activations (i.e., inputs). But we want to emphasize two key differences of our work. First, the two papers you mentioned use the mask (structured sparsity) to remove unnecessary channels, whereas our work uses the mask to filter unimportant elements within each channel, motivated by the observation that many areas in the input image do not really contribute to the recognition task, such as the corners of the input image in digit recognition.
These two mask techniques are orthogonal, and can even be combined.\n\nSecond, our mask model is integrated with an energy model to let us train energy-constrained DNNs, whereas these two papers purely aim at reducing the network parameters to get a speedup.\n\nWe use their released code to train energy-constrained DNNs on the MNIST dataset; the results are below:\n+--------------------------------------+----------------------+----------+-----------------------------+\n| Method | Accuracy Drop | Energy | Width of Each Layer |\n+--------------------------------------+----------------------+----------+-----------------------------+\n| [Louizos et al., NIPS'17] | 2.2% | 26% | 4-6-52-42 |\n+--------------------------------------+----------------------+----------+-----------------------------+\n| [Neklyudov et al., NIPS'17] | 1.5% | 22% | 3-10-23-28 |\n+--------------------------------------+----------------------+----------+-----------------------------+\nOur method has a 0.5% accuracy drop with a 17% energy cost, better than the two approaches.\n\n\n> “Comparison against methods newer than MP and SSL.”\n\nIn the experiment, we also compare against state-of-the-art pruning methods NetAdapt [Yang et al., ECCV 2018] and EAP [Yang et al., CVPR 2017] and show favorable results (Please refer to Table 1 and Table 2). SSL and MP are classic pruning techniques that represent a class of methods that use sparsity as the constraint (regularization). Indeed, EAP is a refined version of SSL and MP.", "> “Another comment I have regarding the experiments is that hyperparameters and the use of knowledge distillation were potentially tuned for the proposed method and then simultaneously applied to the competing algorithms for the sake of head-to-head comparison. But to me, if these enhancements are to be included at all, tuning must be done carefully and independently for each algorithm. Was this actually done? Moreover, it would have been nice to see results without the confounding influence of distillation to isolate sources of improvement, but no ablation studies were presented.”\n\nIn our early experiments, we did not use knowledge distillation in the other methods and found that their performance is significantly worse than ours. Therefore, we apply knowledge distillation in all the methods for fair comparison. Recent work (e.g., [Mishra et al., 2018]) also supports our observation that the knowledge distillation trick can significantly improve the test accuracy of other pruning methods. We verified that the performance was not very sensitive to the value of lambda as long as lambda is in a reasonable range. Therefore, we empirically choose lambda to be 0.5 universally for *all* datasets.\n\n[Mishra et al., 2018] Mishra, Asit, and Debbie Marr. \"Apprentice: Using knowledge distillation techniques to improve low-precision network accuracy.\" In ICLR 2018.\n\n\n> “Finally, regarding the content in Section 5, the paper carefully presents an explicit bound on energy that ultimately leads to a constraint that is NP-hard just to project onto, although approximate solutions exist that depend on some error tolerance. However, even this requires an algorithm that is dismissed as \"complicated.\" Instead, a greedy alternative is derived in the Appendix which presumably serves as the final endorsed approach. But at this point it is no longer clear to me exactly what performance guarantees remain with respect to the energy bound.
Theorem 3 presents a fairly inscrutable bound, and it is not at all transparent how to interpret this in any practical sense. Note that after Theorem 3, conditions are described whereby an optimal projection can be obtained, but these seem highly nuanced, and unlikely to apply in most cases.”\n\nThe conditions under Theorem 3 are sufficient conditions to obtain the exact optimal projection, i.e., the error bound is 0. However, we usually do not require such a rigorous result in practice. Because the number of parameters is very large in DNNs, the remaining budget R(W’’) is usually very small compared to E_budget. Therefore, the projection error bound is small enough in most cases.\nAnother practical aspect of Theorem 3 is quantifying the upper bound of the projection error in (27). In practice, we can exactly calculate this error bound and even choose to use the more accurate (but slower) algorithm in Theorem 2 when this error bound is not acceptable.\n\n\n> “Additionally, it would appear that crude bounds on the energy could also be introduced by simply penalizing/constraining the sparsity on each layer, which leads to a much simpler projection step. For example, a simple affine function of the L0 norm would be much easier to optimize and could serve as a loose bound on the energy, given that the latter should be a non-decreasing function of the L0 norm. Any idea how such a bound compares to those presented given all the approximations and greedy steps that must be included?”\n\nTo use a method based on a sparsity constraint for each layer, one must identify the sparsity bound for each of the DNN layers in a way that the whole model satisfies the energy budget while minimizing the loss. Even if an affine function of a layer’s sparsity bound can be used to estimate the layer’s energy, we still need to optimize these sparsity variables collectively across all layers for the whole model. Thus, the effectiveness of the layer-wise approach rests upon whether we can find the optimal sparsity combination for all the layers.\n\nNetAdapt [Yang et al., ECCV 2018] and AMC [He et al., ECCV 2018] already showed that it is non-trivial to find the optimal layer-wise sparsity bounds. NetAdapt proposed a heuristic-driven search algorithm. In our experiment, we compared against NetAdapt and show that we can achieve higher accuracy with lower or the same energy consumption (Please see Table 1 and Table 2).", "> “As an implementation heuristic, the proposed Algorithm 1 gradually decays the parameter q, which controls the sparsity of the mask M. But this will certainly alter the energy budget, and I wonder how important it is to employ a complex energy constraint if minimization requires this type of heuristic.”\n\nThe purpose of our proposed energy constraint is to exactly characterize the dependence between the sparsity of all parameters and the energy consumption, which provides us with an (almost) exact energy model and a clear goal to guide us to pursue an energy-efficient model. However, due to the nontrivial structure in the energy model, we have to involve some heuristics to solve it approximately. \n\n\n> “I did not see where the quantity L(M,W) embedded in eq. (17) was formally defined, although I can guess what it is.”\n\nThanks for pointing this out. L is the original loss, e.g., cross-entropy loss for classification.
We will clarify this in the revision.\n\n\n> “In general it is somewhat troublesome that, on top of a complex, non-convex deep network energy function, just the small subproblem required for projecting onto the energy constraint is NP-hard. Even if approximations are possible, I wonder if this extra complexity is always worth it relative to simple sparsity-based compression methods which can be efficiently implemented with exactly closed-form projections.”\n\nAlthough the energy-constrained problem is complex, our main contribution is to simplify it and propose an efficient method to solve it approximately. We measure the wall-clock time of the projection step, and across AlexNet, SqueezeNet, and MobileNetV2, the projection step can be solved extremely efficiently -- within 0.2 seconds to be exact. Please also see our response to the 3rd question from Reviewer 3.\n\nIn addition, at the technique level, using a simple sparsity-based compression method to train energy-constrained DNNs would require setting the sparsity threshold for each layer to satisfy the energy constraint while minimizing the loss. Such hyper-parameter tuning is not trivial. We compare against one such method (NetAdapt) and demonstrate higher accuracy with lower/same energy (Please see Table 1 and Table 2).\n\n\n> “In Table 1, the proposed method is highlighted as having the smallest accuracy drop on SqueezeNet. But this is not true: EAP is lower. Likewise on AlexNet, NetAdapt has an equally optimal energy.”\n\nIn Table 1, our evaluation methodology is to configure our method to have an energy that is *the same as or lower than the lowest energy of prior work*, and compare the accuracy drops. In the case of AlexNet, our approach has a lower accuracy drop compared to NetAdapt at the same energy consumption. In the case of SqueezeNet, we show that our approach has the lowest energy among all the methods with only 0.3% higher accuracy drop than EAP. In Figure 2, we perform a comprehensive study where we vary the energy consumption of our method. We show that our method can train a network that has lower energy and less accuracy drop (the rightmost solid blue square) compared to EAP.\n\nWe will clarify our writing in the revision.", "Thanks for your thoughtful comments. The posted questions are answered as follows.\n\n> “The experiments in Sec. 6.2 suggest that the activation mask is mainly beneficial when the data is highly structured. How are the benefits (in terms of weight and activation sparsity) composed in the experiments on Imagenet? How does the weight sparsity of the proposed method compare to the related methods in these experiments? Is weight sparsity in these cases a good proxy for energy consumption?”\n\nAs the reviewer pointed out, the activation mask applies to cases where the data is highly structured. It does not apply to data from ImageNet. We acknowledge at the beginning of Section 3.2 that “We do not claim that applying input mask is a general technique; rather, we demonstrate its effectiveness when applicable.”\n\nIn this work, sparsity is not the end goal. Rather, it is a byproduct of energy saving. In fact, we observe that weight sparsity is *not* a good proxy for energy consumption, as also confirmed by prior work EAP [Yang et al., CVPR 2017]. Our method achieves lower energy consumption despite having higher density. The sparsity result on ImageNet is shown as follows.
We will add the results in the revision.\n+-------------------------+--------------------------------+---------------------------------+------------------------+\n| DNNs | AlexNet | SqueezeNet | MobileNetV2 |\n+-------------------------+--------------------------------+---------------------------------+------------------------+\n| Methods | MP | SSL | EAP | Ours | MP | SSL | EAP | Ours | MP | SSL | Ours|\n+-------------------------+------+-------+------+--------+-------+-------+------+--------+-------+-------+-------+\n| Weights Sparsity | 8% | 35% | 9% | 31% | 34% | 61%| 28%| 48% | 52% | 63%| 63%|\n+-------------------------+------+-------+------+--------+-------+-------+------+--------+-------+-------+-------+\n\n\n> “How does the activation sparsity (decay) parameter (\\delta q) affect the accuracy-energy consumption tradeoff for the two data sets?”\n\nThe decay parameter $\\delta q$ is used to make the tradeoff between training time and accuracy. Smaller $\\delta q$ leads to better accuracy; however, we need to run more outer loops of Algorithm 1. As shown in Algorithm 1, the outer loop is time-consuming since it requires training of both W and M. Although smaller $\\delta q$ could improve the accuracy of our method, we simply set $\\delta q = 0.1|M|$ in all the experiments.", "> “The authors show that the weight projection problem can be solved efficiently. How does the guarantee translate into wall-clock time?”\n\nThe most time-consuming part of our proposed projection method is sorting the “profit density” in Algorithm 2. This sorting takes O(n log n) theoretical time complexity (n is the number of weights in the DNN), and can be efficiently computed on GPUs using dedicated CUDA libraries. \nWe measured the wall-clock time of our projection algorithm on a GPU server (CPU: Xeon E3 1231-v3, GPU: GTX 1080 Ti), and the result is (the time is averaged over 100 iterations):\n+-------------------------------------+------------+------------------+--------------------+\n| DNNs | AlexNet | SqueezeNet | MobileNetV2 |\n+-------------------------------------+------------+------------------+--------------------+\n| Projection Time (seconds) | 0.170 | 0.023 | 0.032 |\n+-------------------------------------+------------+------------------+--------------------+\nAs the data shows, the projection step can be solved very efficiently. We will include these results in the revision.\n\n\n\n> “Filter pruning methods [1,2] reduce both the size of the weight and activation tensors, while not requiring one to solve a complicated projection problem or introduce activation masks. It would be good to compare to these methods, or at least comment on the gains to be expected under the proposed energy consumption model.”\n\nFilter pruning methods [1,2] require a sparsity ratio to be set for each layer, and these sparsity hyper-parameters will determine the energy cost of the DNN. Manually setting all these hyper-parameters in energy-constrained compression is not trivial. NetAdapt [Yang et al., 2018] proposes a heuristic-driven approach to search for such sparsity ratios and uses filter pruning as proposed in [2] to train DNN models. In the paper, we directly compared against NetAdapt, and show that we can achieve higher accuracy with lower/same energy consumption. Please see Table 1 and Table 2.\n\n\n> “Knowledge distillation has previously been observed to be quite helpful when constraining neural network weights to be quantized and/or sparse, see [3,4,5].
It might be worth mentioning this.”\n\nThank you for pointing this out. We did notice several recent papers that use knowledge distillation for quantization and compression, and we will emphasize this with the suggested references in the revision.", "Thanks for your comments on our paper.\n\n> “‘Our energy modeling results are validated against the industry-strength DNN hardware simulator ScaleSim’. Could the authors please elaborate on this sentence?”\n\nScaleSim simulates the DNN hardware execution cycle by cycle, from which it derives the total execution time and energy consumption of executing a network on the hardware. In this paper, we model the energy consumption of a network analytically (Section 3, in particular Equation 16); we compare the energy consumption analytically derived by our approach with the energy consumption estimated from ScaleSim (which simulates the hardware executions), and found that the two matched.\n\n\n> “One of the main assumptions is the following. If the value of the data is zero, the hardware can skip accessing the data. As far as I know, this is a quite strong assumption that is not supported by many architectures.
How do the authors take into account the overhead of using sparse data formats in such hardware in their estimations? Is it possible to simulate such behavior in ScaleSim? Moreover, in many modern systems DRAM can only be read in chunks. Therefore it can decrease the number of DRAM accesses in (4).”\n\nIn much of today’s DNN hardware, the activations and weights are stored in a compressed form, and thus only non-zero values will be accessed. This is done in prior work [Chen et al., 2016; Parashar et al., 2017]. There is a negligible amount of overhead to “unpack” and “pack” compressed data, which we simply take away from the energy budget as a constant factor. This is also the same modeling assumption used by EAP [Yang et al., CVPR 2017].\n\nWe agree with the reviewer that DRAM is accessed in bursts, which we did account for in our modeling. In particular, the per-access energy eDRAM we used in the modeling is the amortized energy of each access across the entire burst. That is, instead of decreasing the number of DRAM accesses, we decrease the per-access energy. This is a standard modeling assumption widely used in the hardware architecture community and industry [Han et al., ISCA 2016; Yang et al., CVPR 2017].", "The paper is dedicated to energy-based compression of deep neural networks. While most works on compression are dedicated to decreasing the number of parameters or decreasing the number of operations to speed up inference or reduce the memory footprint, these approaches do not provide any guarantees on energy consumption. In this work the authors derived a loss for training NNs with energy constraints and provided an optimization algorithm for it. The authors showed that the proposed method achieves higher accuracy with lower energy consumption given the same energy budget. The experimental results are quite interesting and even include the highly optimized network MobileNetV2.\n\nSeveral questions and concerns.\n‘Our energy modeling results are validated against the industry-strength DNN hardware simulator ScaleSim’. Could the authors please elaborate on this sentence?\n\nOne of the main assumptions is the following. If the value of the data is zero, the hardware can skip accessing the data. As far as I know, this is a quite strong assumption that is not supported by many architectures. How do the authors take into account the overhead of using sparse data formats in such hardware in their estimations? Is it possible to simulate such behavior in ScaleSim? Moreover, in many modern systems DRAM can only be read in chunks. Therefore it can decrease the number of DRAM accesses in (4).\n\nSmall typos and other issues:\nPage 8. ‘There exists an algorithm that can find an an \\epsilon’\nPage 8. ‘But it is possible to fan approximate solution’\nPage 4. It is better to put the sentence ‘where s convolutional stride’ after (2).\nIn the formulation of Theorem 3, it is better to explicitly state that A contains rational numbers only since gcd is used.\nOverall, the paper is written clearly and organized well, and contains interesting experimental and theoretical results.\n" ]
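The responses above describe the projection step only informally: a 0/1-knapsack-like problem solved greedily by sorting a "profit density" (Algorithm 2 in the paper). The following is a minimal NumPy sketch of that idea, not the authors' actual algorithm: it assumes, as an illustration, that the energy of a layer is simply (per-nonzero cost) x (number of nonzeros), and it keeps the weights with the largest squared magnitude per unit of energy until the budget is spent. All names and the cost model are assumptions for illustration.

    import numpy as np

    def greedy_energy_projection(weights, costs, budget):
        """weights: list of 1-D arrays, one per layer;
        costs: assumed constant per-nonzero energy cost of each layer;
        budget: total energy allowed. Returns energy-feasible copies."""
        flat = np.concatenate(weights)
        cost = np.concatenate([np.full(w.size, c, dtype=float)
                               for w, c in zip(weights, costs)])
        # Profit density: squared weight magnitude kept per unit of energy.
        order = np.argsort(-(flat ** 2) / cost)   # the O(n log n) sort
        keep = np.zeros(flat.size, dtype=bool)
        spent = 0.0
        for i in order:                            # greedy knapsack fill
            if spent + cost[i] <= budget:
                keep[i] = True
                spent += cost[i]
        out, k = [], 0
        for w in weights:                          # split back into layers
            out.append(np.where(keep[k:k + w.size], w, 0.0))
            k += w.size
        return out

    # Example: two layers whose nonzeros cost different amounts of energy.
    layers = [np.random.randn(1000), np.random.randn(500)]
    pruned = greedy_energy_projection(layers, costs=[1.0, 4.0], budget=800.0)

As in the discussion above, the dominant cost is the single sort, which is why the measured projection times stay well under a second.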
[ 7, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, 7 ]
[ 4, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2019_BylBr3C9K7", "ByxoQev_T7", "iclr_2019_BylBr3C9K7", "H1em0nLOTm", "BJgr8CIOpX", "r1xncHxi2X", "r1xncHxi2X", "r1xncHxi2X", "ByeTwU9O3m", "ByeTwU9O3m", "ByeuQTS92Q", "iclr_2019_BylBr3C9K7" ]
iclr_2019_BylE1205Fm
Emerging Disentanglement in Auto-Encoder Based Unsupervised Image Content Transfer
We study the problem of learning to map, in an unsupervised way, between domains A and B, such that the samples b∈B contain all the information that exists in samples a∈A and some additional information. For example, ignoring occlusions, B can be people with glasses, A people without, and the glasses would be the added information. When mapping a sample a from the first domain to the other domain, the missing information is replicated from an independent reference sample b∈B. Thus, in the above example, we can create, for every person without glasses, a version with the glasses observed in any face image. Our solution employs a single two-pathway encoder and a single decoder for both domains. The common part of the two domains and the separate part are encoded as two vectors, and the separate part is fixed at zero for domain A. The loss terms are minimal and involve reconstruction losses for the two domains and a domain confusion term. Our analysis shows that under mild assumptions, this architecture, which is much simpler than the literature guided-translation methods, is enough to ensure disentanglement between the two domains. We present convincing results in a few visual domains, such as no-glasses to glasses, adding facial hair based on a reference image, etc.
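The architecture this abstract describes can be summarized in a short PyTorch sketch. This is a hypothetical minimal rendering, not the authors' implementation: the layer sizes, names, and use of simple fully connected blocks are assumptions; only the structure (one two-pathway encoder, one shared decoder, separate code fixed at zero for domain A) follows the text.

    import torch
    import torch.nn as nn

    class TwoPathwayAE(nn.Module):
        def __init__(self, dim_in=784, dim_common=64, dim_sep=16):
            super().__init__()
            self.dim_sep = dim_sep
            # Pathway 1: information shared by both domains.
            self.enc_common = nn.Sequential(
                nn.Linear(dim_in, 256), nn.ReLU(), nn.Linear(256, dim_common))
            # Pathway 2: the content that exists only in domain B.
            self.enc_sep = nn.Sequential(
                nn.Linear(dim_in, 256), nn.ReLU(), nn.Linear(256, dim_sep))
            # A single decoder serves both domains.
            self.dec = nn.Sequential(
                nn.Linear(dim_common + dim_sep, 256), nn.ReLU(),
                nn.Linear(256, dim_in))

        def encode(self, x, in_domain_b):
            c = self.enc_common(x)
            if in_domain_b:
                s = self.enc_sep(x)
            else:  # the separate part is fixed at zero for domain A
                s = torch.zeros(x.size(0), self.dim_sep, device=x.device)
            return c, s

        def forward(self, x, in_domain_b):
            c, s = self.encode(x, in_domain_b)
            return self.dec(torch.cat([c, s], dim=1))

Guided translation then amounts to decoding the common code of a sample a∈A together with the separate code of a reference b∈B; the reconstruction and domain-confusion losses the abstract mentions are trained on top of these codes.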
accepted-poster-papers
1. Describe the strengths of the paper. As pointed out by the reviewers and based on your expert opinion.\nThe proposed method performed well on 3 visual content transfer problems.\n\n2. Describe the weaknesses of the paper. As pointed out by the reviewers and based on your expert opinion. Be sure to indicate which weaknesses are seen as salient for the decision (i.e., potential critical flaws), as opposed to weaknesses that the authors can likely fix in a revision.\n- The paper is hard to follow at times\n- The problem being addressed is technically interesting but not well-motivated. That is, the question "why is this of interest to the ICLR community" was not well-answered.\n\n3. Discuss any major points of contention. As raised by the authors or reviewers in the discussion, and how these might have influenced the decision. If the authors provide a rebuttal to a potential reviewer concern, it’s a good idea to acknowledge this and note whether it influenced the final decision or not. This makes sure that author responses are addressed adequately.\nThere were no major points of contention.\n\n4. If consensus was reached, say so. Otherwise, explain what the source of reviewer disagreement was and why the decision on the paper aligns with one set of reviewers or another.\nThe reviewers reached a consensus that the paper should be accepted.
test
[ "Hyeo8WP92Q", "Hkg9D83rTm", "S1xt9pjBpX", "rylZpTorpQ", "rygvXKzET7", "rJxq6Tr627", "BylwDt7yaQ" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author" ]
[ "This paper proposes an unsupervised style transfer method uses two-pathway encoder and a decoder for both domains. The loss function can be written using reconstruction losses and the confusion term. Experimental results are very promising comparing to state of the art methods. \n\nThe methodology presented in this paper is simple yet powerful according to the experimental results. However I do have a few concerns: \n\n1. The writing can certainly be improved. I had a difficult time understanding Section 2. For example the function Q is upper cased but later the f and g are all lower cased. Why domains A and B are defined using the space and the probability measure? \"our framework assumes that the distribution of persons with sunglasses and that of persons without them is the same,\" The \"distribution of persons\" is not a rigorous definition and is hard to infer what does it actually mean. \"f\" does not appear in the loss terms although it appears under \"min\". \n\n2. I like the simplicity of the objective function, but it is hard for me to understand that why the algorithm does not pick up spurious differences between A and B. For example, what if there are lighting differences and glasses/no-glasses differences between A and B? See 3rd row of figure 2 for an example. \n\n3. Given the huge differences in performance between the proposed method and MUNIT and DRIT, some analysis/discussion on the reason of success/failure should be given.\n\n--------------------------------------------------------\n\nI have read authors' response. ", "Dear reviewers,\n\nWe have revised the manuscript to accommodate for the comments by all reviewers. Specifically, following the comment of AnonReviewer4, a new experiment comparing to the Fader networks was added, as well as run-time statistics. We have also clarified the text where requested.\n\nAll new content is marked in red.\n\nWe would like to thank the reviewing team again and to add a request. We believe that the sentiment of the reviews is positive and that we were able to respond to all concerns with the type of results that would satisfy the reviewers. With the CVPR deadline approaching, we would appreciate an early indication by AnonReviewer3 and AnonReviewer4 on the appropriateness of our response. \n\nThank you,\n\nThe authors\n", "Thank you very much for your supportive comments. Below, we address your comments one by one. Please let us know if this does not clarify your concerns.\n\nSpecific task: the task we tackle is widely applicable. However, we tested it on images due to the availability of accessible data. Other examples in which the method can be applied include music datasets, where a musical instrument is added, computer design, where one wishes to add elements to a blueprint, the addition of certain style elements to text, and so on.\n\nThe same argument used by the authors of Fader networks (NIPS, 2017) applies here: “A key advantage of our method compared to many recent models is that it generates realistic images of high resolution without needing to apply a GAN to the decoder output. As a result, it could easily be extended to other domains like speech, or text, where the backpropagation through the decoder can be really challenging because of the non-differentiable text generation process for instance.“\n\nNote that we are at least as general as the Fader networks. While they subtract content or add generic content, we allow the addition of specific ones. 
In order to demonstrate this, we have added to the revised version an experiment where we employ our method to remove content. \n\nMore specifically, Fader networks cannot employ guidance to add glasses (or other features). Therefore, the task on which we compare with Fader networks is that of removing glasses. In our case, we use the trained networks and follow a straightforward pipeline: we embed a picture of a person with glasses, zero the part that corresponds to glasses, and decode the obtained representation. The revised version contains, in a new part of Sec. 5, qualitative results as well as a quantitative evaluation and a user study.\n\nThis experiment can be seen as a specific instance of “semi-supervised source separation”, which is a researched problem by itself [Smaragdis et al., \"Supervised and semi-supervised separation of sounds from single-channel mixtures.\" In Int. Conf. on ICA and Signal Separation, 2007]. \n\nTo show the applicability to music, we present here an initial result in which the two domains are Jazz music with and without percussion instruments. In the preliminary example below we show how we add the drums, as they appear in one music segment, to a musical segment without drums. This is done in the spectral domain and the quality is therefore limited.\n\nReference source with no percussion: https://instaud.io/2Uk5 \nGuide (jazz music with percussion): https://instaud.io/2UjY \nDrums from the second sample added to the first: https://instaud.io/2Uk7 \n\nReviewer: \nIn several places the paper claims that the proposed approach is considerably simpler. Some parts hint at criteria for the ‘complexity’ comparison, such as Table 1 or a few sentences (e.g. “this allows us to train with many less parameters and without the need to applying excessive tuning”). It would be more convincing to have a dedicated discussion of the practical advantages of the simplicity claimed by this method, discussing e.g. training/testing time, memory footprint of the models, convergence properties, stability, etc. \n\nAnswer:\nTab. 1 compares the various methods with respect to the number of networks and the type of representation sharing used. More networks and less sharing lead to a much more complex optimization problem.\n\nFollowing the review, we added Tab. 2 in the revised version, which directly compares the runtime and memory footprint of each method. In addition to the results in the table, we note that we use the same hyperparameters throughout all experiments (code is publicly available), which leads us to believe that the simplicity of our method results in added robustness.\n\nReviewer: \nThe chosen baselines, i.e., MUNIT and DRIT, are experimentally shown to perform poorly on the considered task. Yet although these methods were also developed for guided image translation, they were designed for a rather different application: style transfer. I am not sure these comparisons bring much insight on the performance of the method. Experiments are conducted for a very specific task, on a single dataset. Would the method have broader application?\n\nAnswer:\nWe compare to MUNIT and DRIT since these are the closest methods in the literature. We have added above experiments comparing aspects of our method with the Fader networks.\n\nReviewer:\nI understand that such an approach is difficult to evaluate quantitatively but I am not sure what there is to learn from experiments reported in Table 3, as there is no point of comparison on this task. This could be clarified.
\n\nAnswer: \nThis experiment was added to further complement the user study in the previous table of the original manuscript. Since there is no method that can be used as a point of comparison (Fader networks do not use a guided image), and following the review, the table was removed in the revised version. \n\n(continued below)", "Reviewer: \nThe paper relies on the assumption that the distribution of persons with sunglasses and that of persons without them is the same, except for the sunglasses. This sounds like a strong requirement for the data used to train the network; it would be interesting to discuss the practical impact of this assumption, especially on the data requirement for the method to perform well\n\nAnswer: \nWe did not try to enforce this requirement in any way and use all benchmarks as is. \n\nThe assumption is required for the theoretical analysis. Without it, we would need an additional term that reflects the divergence between the distributions.\n\nReviewer:\nI got confused with some of the claims in section 4.2. More generally, I found the technical part hard to follow.\n\nAnswer: \nTo increase the readability of Sec. 4.2, we presented each result both informally and formally. In the revised version, following the review, we have added additional clarifications. While the new text is technical, we hope that the arguments it contains are easy to follow.\n\nReviewer:\nThe user study seems small: only 10 pairs of images are considered. How were those pairs chosen? Is the set representative?\n\nAnswer: \nWe considered 10 random pairs of images for each of the three transformations. The reason that we did not use more is that we wanted all users to see all the pairs (following the protocol used by CycleGAN) and did not want to make the user studies longer than they already are.\nIn the new user study (comparing to Fader networks), each user received a random subset of the test set.\n", "This paper tackles the task of content transfer. For a given type of images (frontal face shots), the goal is to transfer a particular localized property (e.g. glasses or facial hair) extracted from one image to another image of the same type (different face). This is also known as the problem of guided image-to-image translation. \nThe problem is formalized as the one of learning to map two different domains, one domain being composed of images with the property/attribute of interest, the other one containing images without it. The problem is said to be ‘unsupervised’, i.e. there are no pairwise correspondences between images of the two domains (with/without attributes).\nThe novelty of the approach lies in\n-\tthe loss, which is composed of three terms: two reconstruction losses and a domain confusion loss\n-\tthe overall architecture and in particular the fact that images are represented as a combination of the output of two encoders: one encodes the face and the other encodes the property (e.g. glasses).\n\nOverall comments:\n+ a theoretical part discusses generalization bounds and the emergence of disentangled representations\n+ visual results are appealing, showing the suitability of the method to the considered task\n- the discussion of the advantages of the proposed method could be improved\n- the motivation for some of the experimental results is unclear (choice of experimental protocol and baselines). \n- the scope of the method seems limited\n\nDetailed comments:\n\nI personally like the described model.
The disentanglement mechanism is intuitive to understand, and seems well suited for this particular task, as qualitative evidence suggests. I am not sure if this approach would be applicable beyond the very specific scenario considered in the paper. \n\nThe paper emphasizes that the strength of the method lies in its simplicity w.r.t. competitors, and its better results. These two aspects could be better discussed. \n\nSimplicity: \nIn several places the paper claims that the proposed approach is considerably simpler. Some parts hint at criteria for the ‘complexity’ comparison, such as Table 1 or a few sentences (e.g. “this allows us to train with many less parameters and without the need to applying excessive tuning”). It would be more convincing to have a dedicated discussion of the practical advantages of the simplicity claimed by this method, discussing e.g. training/testing time, memory footprint of the models, convergence properties, stability, etc. \n\nComparison: \nThe chosen baselines, i.e., MUNIT and DRIT, are experimentally shown to perform poorly on the considered task. Yet although these methods were also developed for guided image translation, they were designed for a rather different application: style transfer. I am not sure these comparisons bring much insight on the performance of the method.\nExperiments are conducted for a very specific task, on a single dataset. Would the method have broader application?\n\nExperimental protocol:\nI understand that such an approach is difficult to evaluate quantitatively but I am not sure what there is to learn from experiments reported in Table 3, as there is no point of comparison on this task. This could be clarified. \n\nAdditional comments:\n-\tThe paper relies on the assumption that the distribution of persons with sunglasses and that of persons without them is the same, except for the sunglasses. This sounds like a strong requirement for the data used to train the network; it would be interesting to discuss the practical impact of this assumption, especially on the data requirement for the method to perform well\n-\tI found Figure 1 quite useful. A visual representation of the architecture and its associated description help follow the technical part. \n-\tI got confused with some of the claims in section 4.2. More generally, I found the technical part hard to follow.\n-\tThe user study seems small: only 10 pairs of images are considered. How were those pairs chosen? Is the set representative?\n", "The paper proposes an unsupervised approach for mapping two sets of objects, A and B, such that set B contains all the information that is in set A and some additional information. The paper learns a latent space which encodes: (a) information which is shared in both sets, and (b) the additional content present in B. This is done by employing a two-pathway encoder and a decoder for both the sets. Experiments on problems such as adding glasses or facial hair to faces show that the proposed method performs better than existing disentanglement approaches. ", "Thank you very much for your comments and for supporting our results and framework. Below, we address the raised concerns one by one. Please let us know if you are not satisfied with our replies.\n\nConcern #1:\n\nWe went to great lengths to adhere to the math style recommendations provided this year by the ICLR program chairs and have employed the suggested conventions from math_commands.tex.\n\nReviewer: The function Q is uppercased but later f and g are all lower-cased.
\n\nAnswer: We wanted to create a clear distinction between real networks that are learned (f,g) and an unknown underlying representation Q. We would be happy to change this. \n\nReviewer: Why are domains A and B defined using the space and the probability measure? \n\nAnswer: This is the conventional formal approach for defining a domain from which the samples are being selected i.i.d. in some sample space. This is the common assumption in machine learning, e.g., Vapnik (2000), Bousquet and Elisseeff (2001), and it is required for the theoretical results.\n\nReviewer: \"our framework assumes that the distribution of persons with sunglasses and that of persons without them is the same,\" The \"distribution of persons\" is not a rigorous definition, and it is hard to infer what it actually means.\n\nAnswer: We should have been more careful and discussed “distribution of images of persons” and not “distribution of persons.” This sentence is an example given after the rigorous definition. Specifically, we wanted to explain Eq. 2 using our running example. The entire paragraph reads “Note that within Eq. 2, there is an assumption on the underlying distributions D_A and D_B. Using the concrete example, our framework assumes that the distribution of persons” etc.\n\nReviewer: \"f\" does not appear in the loss terms although it appears under \"min\".\n\nAnswer: f is defined 2 lines above (see Eq. 5) as f(a,b) = (e_1(a),e_2(b)) and the loss terms include e_1 and e_2.\n\nConcern #2:\n\nReviewer: ...why the algorithm does not pick up spurious differences between A and B. For example, what if there are lighting differences and glasses/no-glasses differences between A and B? \n\nAnswer: We divide our answer into two parts: common to many methods and specific to our method. It should be noted that while guided mapping occurs for individual images, it is based on a preliminary training on unlabeled and unmatched images from the two domains.\n\n(i) Similarly to many other A to B mapping methods in the literature, the algorithm learns what differs between the domains based on the examples of the training set. Given a large enough sample size, the spurious differences are not as consistent as the target difference. In other words, using the concrete example given in the question, differences in lighting appear in both images with glasses and images without, and are therefore encoded in the common part of the representation.\n\n(ii) In our method, this effect is amplified. The representations of A and B are asymmetric and the network, by design, assigns to images in B content that is not present in A. When this content is removed, a loss (Eq. 9) ensures that we obtain images that are indistinguishable from images in A.\n\nConcern #3:\n\nReviewer: Given the huge differences in performance between the proposed method and MUNIT and DRIT, some analysis/discussion on the reasons for success/failure should be given.\n\nAnswer: We explicitly mention in the paper: “The type of guiding that is obtained from the target domain in MUNIT is referred to as style, while in our case, the guidance provides content. Therefore, MUNIT, as can be seen in our experiments, cannot add specific glasses, when shifting from the no-glasses domain to the faces with eyewear domain.”\n\nIn the next version, we will make sure to elaborate on this. The MUNIT and DRIT architectures lead the methods to focus on conditional style, i.e., global changes to the picture, while our method focuses on local changes (content).
Therefore, when given two images, MUNIT and DRIT look at the reference picture and pick up global “style” characteristics, such as background or lighting, while we are able to capture the added content.\n\nMUNIT and DRIT both use two different types of encoders that enforce a separation of the latent space representations into either style or content vectors. For example, the style encoder, unlike the content encoder, employs spatial pooling. It also results in a smaller representation than the content one. This is important, in the context of these methods, in order to ensure that the two representations encode different aspects of the image. If MUNIT/DRIT were to use the same type of encoder twice, then one encoder could capture all the information and the image-based guiding (mixing representations from two images) would become moot.\n\nIn contrast, our method (i) does not separate style and content, and (ii) as mentioned above, has a representation that is geared toward capturing the additional content." ]
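For concreteness, the loss structure these responses refer to (two reconstruction terms plus a domain confusion term, with the separate code zeroed for domain A) can be written schematically as below. The weight λ, the choice of norm, and the exact form of the confusion term are assumptions; only the overall shape follows the abstract and the discussion above.

    \min_{e_1, e_2, D} \;
      \mathbb{E}_{a \sim A}\big\| D\big(e_1(a), \mathbf{0}\big) - a \big\|
    + \mathbb{E}_{b \sim B}\big\| D\big(e_1(b), e_2(b)\big) - b \big\|
    + \lambda\, \mathcal{L}_{\mathrm{conf}}\big(e_1(A),\, e_1(B)\big)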
[ 6, -1, -1, -1, 6, 6, -1 ]
[ 2, -1, -1, -1, 3, 1, -1 ]
[ "iclr_2019_BylE1205Fm", "iclr_2019_BylE1205Fm", "rygvXKzET7", "S1xt9pjBpX", "iclr_2019_BylE1205Fm", "iclr_2019_BylE1205Fm", "Hyeo8WP92Q" ]
iclr_2019_BylIciRcYQ
SGD Converges to Global Minimum in Deep Learning via Star-convex Path
Stochastic gradient descent (SGD) has been found to be surprisingly effective in training a variety of deep neural networks. However, there is still a lack of understanding of how and why SGD can train these complex networks towards a global minimum. In this study, we establish the convergence of SGD to a global minimum for nonconvex optimization problems that are commonly encountered in neural network training. Our argument exploits the following two important properties: 1) the training loss can achieve zero value (approximately), which has been widely observed in deep learning; 2) SGD follows a star-convex path, which is verified by various experiments in this paper. In such a context, our analysis shows that SGD, although it has long been considered a randomized algorithm, converges in an intrinsically deterministic manner to a global minimum.
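The "star-convex path" property the abstract refers to can be checked numerically: along an optimization path x_0, x_1, ..., with a reference point x* (here the final iterate, standing in for the output of SGD), one verifies <grad l(x_k), x_k - x*> >= l(x_k) - l(x*), which is a rearrangement of l(x*) >= l(x_k) + <grad l(x_k), x* - x_k>. The sketch below runs this check on a toy quadratic loss with plain gradient descent; the toy loss and all names are illustrative assumptions, not the paper's experimental setup.

    import numpy as np

    def loss(x):            # toy loss with global minimum at the origin
        return 0.5 * np.sum(x ** 2)

    def grad(x):
        return x

    x = np.array([3.0, -2.0])
    path = [x.copy()]
    for _ in range(50):                     # plain gradient descent
        x = x - 0.1 * grad(x)
        path.append(x.copy())

    x_star = path[-1]                       # reference point: last iterate
    for k, x_k in enumerate(path[:-1]):
        lhs = grad(x_k) @ (x_k - x_star)    # <grad l(x_k), x_k - x*>
        rhs = loss(x_k) - loss(x_star)      # l(x_k) - l(x*)
        assert lhs >= rhs - 1e-9, f"star-convexity violated at step {k}"
    print("star-convexity holds along the whole path")

For this convex toy the check passes at every step; the reviews below debate precisely whether it remains meaningful for cross-entropy losses, whose minimum lies at infinity.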
accepted-poster-papers
The proposed notion of star convexity is interesting and the empirical work done to provide evidence that it is indeed present in real-world neural network training is appreciated. The reviewers raise a number of concerns. The authors were able to convince some of the reviewers with new experiments under MSE loss and experiments showing how robust the method was to the reference point. The most serious concerns relate to novelty and the assumption that individual functions share a global minimum with respect to which the path of iterates generated by SGD satisfies the star-convexity property. I'm inclined to accept the authors' rebuttal, although it would have been nicer had the reviewer re-engaged. Overall, the paper is on the borderline.
val
[ "H1eq2Hdo07", "B1xsEN2d37", "H1g6MIMIAm", "Hyg7evSKnQ", "HkgubIm-Am", "B1equOGWCm", "SJxGnBGW07", "HyxZqj0O27" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "While our submission is under review, a few related but very different studies [1,2,3] were posted on Arxiv very recently. We would like to briefly clarify our difference from these results here, which we will further include into the future version of this paper. \n\nThe studies [1,2,3] proved the optimization convergence results of gradient-based algorithms in over-parameterized deep learning based on various technical assumptions about overparameterization and algorithm parameters, which are still subject to validations in deep learning practice in the future. As a comparison, our proof of convergence is based on the star-convexity property, which we have verified in training a variety of practical neural networks on real datasets. The star-convexity property by itself is a new finding and can be of independent interest. Furthermore, our result is on the convergence of parameters, which in nature is different from the results in [1,2,3] on the convergence of the loss function value.\n\n[1]``Gradient Descent Finds Global Minima of Deep Neural Networks’’, Simon S. Du, Jason D. Lee, Haochuan Li, Liwei Wang, Xiyu Zhai\n\n[2] ``A Convergence Theory for Deep Learning via Over-Parameterization’’, Zeyuan Allen-Zhu, Yuanzhi Li, Zhao Song\n\n[3] ``Stochastic Gradient Descent Optimizes Over-parameterized Deep ReLU Networks’’,\nDifan Zou, Yuan Cao, Dongruo Zhou, Quanquan Gu.\n", "This paper analyzed the global convergence property of SGD in deep learning based on the star-convexity assumption. The claims seem correct and validated empirically with some observations in deep learning. The writing is good and easy to follow.\n\nMy understanding of the analysis is that all the claims seem to be valid when the solution is in a wide valley of the loss surface where the star-convexity holds, in general. This has been observed empirically in previous work, and the experiments on cifar10 in Fig. 2 support my hypothesis. My questions are:\n\n1. How to guarantee the star-convexity will be valid in deep learning?\n2. What network or data properties can lead to such assumption?\n\nAlso, this is a missing related work from the algorithmic perspective to explore the global optimization in deep learning: \n\nZhang et. al. CVPR'18. \"BPGrad: Towards Global Optimality in Deep Learning via Branch and Pruning\".\n", "I thank authors for carefully addressing my concerns and I raise my score accordingly. \n\nAll the additional experiments are very interesting. The experiments of different reference point in Fig.5 suggests that the phenomenon of star-convexity is quite stable as long as the iterate are in a region \"closed\" to the minimum. The experiments on the norm of the iterate somehow supports this observation, since the change in the norm becomes less significant after some iteration, i.e. stableness of the last iterates. \n\nMoreover, according to Fig.5, it seems like a phase transition occurs after some iterations. In particular, the behavior of the first iterations are quite different from the latest ones, hence it would be interesting to develop some further characterization of such phase transition. \n", "The paper proposes a new approach to explain the effective behavior of SGD in training deep neural networks by introducing the notion of star-convexity. A function h is star-convex if its global minimum lies on or above any plane tangent to the function, namely h* >= h(x) + < h'(x), x*-x> for any x. 
Under such a condition, the paper shows that the empirical loss goes to zero and the iterates generated by SGD converge to a global minimum. Extensive experiments have been conducted to empirically validate the assumption. \n\nThe paper is very well organized and is easy to follow. The star-convexity assumption is very interesting and provides new insights into the landscape of the loss function and the trajectory of SGD. It is in general difficult to theoretically check this condition, so several empirical verifications have been proposed. My main concern is about these empirical verifications.\n\n1) The minimum of the cross entropy loss lies at infinity \nThe experiments are performed with respect to the cross entropy loss. However, the cross entropy loss violates Fact 1 since, for any finite weight, the cross entropy loss is always strictly positive. Thus zero is never attained and the global minimum always lies at infinity. As a result, the star-convexity inequality h* >= h(x) + < h'(x), x*-x> hardly makes sense since x* is at infinity, and neither do the theorems that follow. \nIn this case, a plot of the norm of xk is highly suggested since it is a sanity check to see whether the iterates go to infinity. \n\n2) The phenomenon may depend on the reference point, i.e., the last iterate\nSince the minimum is never attained, the empirical check of the star-convexity may be biased. More precisely, it might be possible that the behavior of the observed phenomenon depends on the reference point, i.e., the last iterate. Therefore, it will be interesting to see if the observed phenomenon still holds when varying the stopping time, for instance plotting the star-convexity check using the iterates at 60, 80, 100, 120 epochs as reference points. \n\nIn fact, the experiments shown in Figure 4 implicitly support that the behavior may change dramatically with respect to different reference points. The reason is that the loss in these experiments is far away from 0, meaning that we are far from the minimum; thus checking the star-convexity does not make sense because the star-convexity is only defined with respect to the minimum. \n\nOverall, the paper provides an interesting idea, but the empirical results may be biased due to an ill-posed problem ", "We thank the reviewer for the valuable feedback. \n\nThis paper aims at reporting an interesting star-convex property of the SGD optimization path that has been observed in training a variety of DL models, including MLP, CNN, residual networks and RNN (verified recently). Moreover, our theory is motivated by such a common observation and attempts to justify the role this property plays in determining the convergence of the optimization in DL. \n\nOur responses to the reviewer’s comments are provided as follows.\n\nQuality: In Fact 1: How can you conclude that the set of common global minimizers is bounded? \n\nA: We thank the reviewer for pointing this out. In fact, our theoretical results do not require that all the global minima be bounded. To be precise, we only need the star-convexity to hold for a bounded subset of the common global minimizers, under which our theory guarantees that SGD converges to one of the elements in that set. We clarify this in the revision.\n\nClarity: On page 3, “fact” may not be the right word here. On page 3, the x^* here is the last iteration produced by SGD. On page 4, the statement in definition 1 is more like a theorem.\n\nA: We thank the reviewer for the valuable suggestions. In the revision, we use ``observation’’ instead of “fact”.
We add the non-negativity assumption. We now refer to x^* as the output of SGD. We restate Definitions 1 and 2.\n\nSignificance 1) The analysis of this paper is based on ... \n\nA: We have added new experiments to demonstrate that the star-convexity of the SGD path holds for the MSE loss function (which can achieve zero loss). We agree that cross-entropy achieves only near-zero loss, but such an approximation is not that unreasonable, as can be observed from the experiments that we added as Fig. 6 in Appendix D, which illustrate that the cross-entropy loss is nearly zero for certain finite norms of the weight parameters. Hence, we do expect that such an approximation can convey useful information. \nWe believe that we should not restrict ourselves only to theory that exactly matches what happens in practice. Near-zero loss is widely observed in training over-parameterized neural networks. Thus, the common minimizer assumption is motivated by this observation; it has led us to discover the star-convexity of SGD paths empirically and to further develop the convergence of SGD based on such a property. Hence, the approximate common global minimizer does yield consistent practical and theoretical results that explain what happens in deep learning.\n\n2) Secondly, …. \n\nA: We first want to point out that the “epoch-wise star-convexity” is a cumulative effect of the residual error of every component loss over one epoch, which is a nontrivial and much weaker condition than requiring the entire loss function to be star-convex over the points that SGD visits. Thus, assuming “epoch-wise star-convexity”, the proofs of Theorems 1 and 2 do not follow from conventional convex analysis. One can of course argue that it is simple, but we think the focus here should be the information that it conveys in such a context.\nSecond, we report a star-convex path property over a wide range of DL training tasks, which has not been reported in the existing literature to our knowledge. We do think this is an informative discovery. Of course, understanding when and why such a property holds for DL training is definitely important and deserves exploration in future work. \n\n3) In fact, it is well-known that SGD with constant step size … \n\nA: We want to point out the difference between our theory and the result mentioned by the reviewer. First, star-convexity is a much more relaxed condition than requiring the loss function F to be strongly convex. Second, the bounded variance assumption Var(g_k) <= M ||∇F(x_k)||^2 is hard to justify in general, and it is not clear to what extent it can be justified in DL tasks. Not to mention that it is clearly not true that the loss function is strongly convex! In contrast, our star-convexity assumption is verified by the various DL experiments that we report in the paper. Moreover, under strong convexity, traditional analysis only guarantees convergence of the sequence in probability, which is much weaker than our deterministic convergence results in Theorem 3. \n\n4) With respect to the empirical evidence, ... \n\nA: We thank the reviewer for pointing this out, and we are aware of it. We use the ReLU activation as it is commonly used in DL tasks. Of course, one can use a smoothed version of ReLU (softplus) and obtain nearly the same result. We clarified this and added more experiments on this in the revision.\nFor the variance, we do observe that the variance vanishes as SGD converges, and in fact we report such a property as Corollary 1 in the originally submitted version.
This is a necessary observation when SGD converges to a common global minimizer, and it therefore also justifies the existence of a common global minimum to some extent. We add experiments on this in the revision.\n", "We thank the reviewer for the valuable feedback. \n\nThis paper aims to report an interesting star-convex property of the SGD optimization path that has been observed in training a variety of DL models, including MLPs, CNNs, residual networks, and RNNs (verified recently). Moreover, our theory is motivated by this common observation and attempts to justify the role this property plays in determining the convergence of the optimization in DL. \n\nOur responses to the reviewer's comments are provided as follows.\n\n1. How can one guarantee that star-convexity will hold in deep learning?\n\nResponse: We thank the reviewer for raising this question. It is definitely interesting to explore the underlying mechanism that leads to such a common observation. We think that over-parameterization can be one of the important factors. We are currently investigating this issue theoretically on some simple networks, and our understanding so far favors this direction.\n\n2. What network or data properties can lead to such an assumption?\n\nResponse: All our experiments are conducted on practical neural network training tasks with real datasets. From the experiments, we find that the property holds for a variety of network architectures (MLP, CNN, Inception, RNN) and different datasets (image, text, etc.). We think that this can be a general property of over-parameterized networks. \nIn fact, several recent works ([1,2]) show that the optimization trajectories of SGD are generally smooth despite the non-convexity and depth of the networks, and our star-convexity property can be viewed as another aspect that further supports the theoretical justification of deep learning optimization. We will explore these two questions more in future work.\n\n3. There is a missing related work that explores global optimization in deep learning from the algorithmic perspective: \nZhang et al. CVPR'18. \"BPGrad: Towards Global Optimality in Deep Learning via Branch and Pruning\".\n\nResponse: We thank the reviewer for pointing out this interesting related work. We will cite this work in the upcoming revision. \n\n[1] Li et al. Visualizing the loss landscape of neural nets. To appear in NIPS 2018.\n[2] Eliana Lorch. Visualizing deep network training trajectories with PCA. In ICML Workshop on Visualization for Deep Learning, 2016.\n", "We thank the reviewer for the valuable feedback. \n\nThis paper aims to report an interesting star-convex property of the SGD optimization path that has been observed in training a variety of DL models, including MLPs, CNNs, residual networks, and RNNs (verified recently). Moreover, our theory is motivated by this common observation and attempts to justify the role this property plays in determining the convergence of the optimization in DL. \n\nOur responses to the reviewer's comments are provided as follows.\n\n1) The minimum of the cross-entropy loss lies at infinity. A plot of the norm of x_k is suggested as a sanity check to see whether the iterates go to infinity. \n\nResponse: We thank the reviewer for pointing this out, and we are aware of it. We chose to present the results on the cross-entropy loss because it is widely used in DL applications.
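For reference, the suggested sanity check amounts to logging the parameter norm over training. A minimal sketch (the model, data, and optimizer below are illustrative placeholders, not our experimental setup):

import torch

def param_norm(model):
    # l_2 norm of all trainable parameters, treated as one flat vector.
    with torch.no_grad():
        return torch.sqrt(sum((p ** 2).sum() for p in model.parameters())).item()

model = torch.nn.Sequential(torch.nn.Linear(10, 64), torch.nn.ReLU(), torch.nn.Linear(64, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(512, 10), torch.randint(0, 2, (512,))
for epoch in range(50):
    opt.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    opt.step()
    # If the norm keeps growing while the loss is near zero, the minimum lies at infinity.
    print(epoch, round(loss.item(), 4), round(param_norm(model), 2))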
In fact, we verified the star-convexity property for other losses that can by nature achieve zero, such as the MSE loss (please see the experimental results that we added as Fig. 7 in Appendix D of the supplementary material).\nUnder the cross-entropy loss, we found that after the training loss is very close to zero, the l_2 norm of the corresponding iterate grows only logarithmically. This can be clearly seen from the experiments that we added as Fig. 6 in Appendix D of the supplementary material. We further note that such a phenomenon has also been observed and justified in Fig. 2 of [1]. Thus, empirically, it is reasonable to treat the loss value as approximately zero (i.e., as reaching the minimum) with a bounded weight norm.\n[1] 'The Implicit Bias of Gradient Descent on Separable Data', Soudry et al., 2018\n\n2) The phenomenon may depend on the reference point, i.e., the last iterate\nSince the minimum is never attained, the empirical check of star-convexity may be biased. More precisely, it might be possible that the behavior of the observed phenomenon depends on the reference point, i.e., the last iterate. Therefore, it would be interesting to see if the observed phenomenon still holds when varying the stopping time, for instance by plotting the star-convexity check using the iterates at 60, 80, 100, and 120 epochs as reference points. \nIn fact, the experiments shown in Figure 4 implicitly support the idea that the behavior may change dramatically with respect to different reference points. The reason is that the loss in these experiments is far away from 0, meaning that we are far from the minimum; thus checking star-convexity does not make sense, because star-convexity is only defined with respect to the minimum. \n\nResponse: As can be seen from the experiments that we added as Fig. 5 in Appendix D of the supplementary material, we have checked the star-convex property by taking the reference point at different intermediate iterates (60, 80, 100, 120), as the reviewer suggested. We found that star-convexity still holds under these choices of reference points, and therefore the property does not depend on the choice of reference point so long as the losses at those points are (nearly) zero. This observation is common for over-parameterized networks, which can achieve near-zero loss and therefore can have a common global minimum.\nWe emphasize that the reference point in our star-convexity must be a minimizer that achieves zero loss. Hence, in the experiments, we must set the reference point at epochs where the corresponding loss is nearly zero. Points at intermediate iterates with a high loss value cannot be chosen as the reference point of star-convexity, because these points cannot be treated as the common global minimizer. \nRegarding the experiments in Figure 4, they are conducted on under-parameterized networks where (approximately) zero loss cannot be achieved, and the algorithm in fact does not find a common global minimum. This experiment is meant to justify the role that over-parameterization (or the common global minimum) plays in determining the star-convex optimization path.\n", "This paper attempts to account for the success of SGD in training deep neural networks.
Starting from two empirical observations, namely (1) that deep neural networks can almost achieve zero training loss and (2) that the path of iterates generated by SGD on these models approximately follows a “star-convex path”, and under the assumption that the individual functions share a global minimum with respect to which the path of iterates generated by SGD satisfies the star-convexity property, the paper shows that the iterates converge to the global minimum. \n\nIn terms of clarity, I think the paper can definitely benefit if the observations/assumptions/definitions/theorems are stated in a more formal and mathematically rigorous manner. For example:\n- On page 3, “fact 1”: I don’t think “fact” is the right word here. “Fact” refers to what has been rigorously proved or verified, which is not the case for what is in the paper here. I believe “observation” is more appropriate. Also, the assumption that l_i is non-negative should be formally added.\n- On page 3, section 3.1: the x^* here is the last iterate produced by SGD. Then how can it be called the “global minimum”? The caption of Figure 1 on page 4 is simply misleading.\n- On page 4, the statement in Definition 1 is more like a theorem than a definition. It gives readers the impression that any path generated by SGD satisfies the star-convex condition, which is not the case here. A definition should look like “we call a path generated by SGD a star-convex path if it satisfies …”. Definition 2 on page 6 has a similar issue.\n\nIn terms of quality, while I believe the paper is technically correct, I have one minor question here:\nPage 3, Fact 1: How can you conclude that the set of common global minimizers is bounded? In fact, I don’t believe this is true at all in general. If you have a ReLU network, you can scale the parameters as described in [1]; then the model is invariant. Therefore, the set of common minimizers is definitely NOT bounded. \n\nIn terms of significance, I think this paper is very interesting as it attempts to draw a connection between the aforementioned observations and the convergence properties of SGD. Unfortunately, I think this paper is less significant than it appears to be, although the analysis appears to be correct. \n\nFirst of all, all the analysis in this paper is based on one very important and very strong assumption, namely that all individual functions $l_i$ share at least one common global minimizer. The authors have attempted to justify this assumption by empirical evidence (Figure 1). However, achieving near-zero loss is completely different from achieving exactly zero, because only when the model achieves exactly zero can you argue that a common global minimizer exists. \n\nSecondly, the claim that the iterates converge to the global minimum is based on the assumption that the path follows an “epoch-wise star-convex” property. From this property, it only takes simple convex analysis to reach the conclusions of Theorems 1 and 2. Meanwhile, the assumption that the path does follow the “epoch-wise star-convex” property is not at all informative. It is not clear why or when SGD would follow such a path. Therefore, Theorems 1 and 2 are not more informative than simply assuming the sequence converges to a global minimizer.
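To see how strong the common-minimizer assumption is, consider a toy example of my own (illustrative dimensions and step size): when all component losses share an exact minimizer, as in a consistent least-squares system, plain SGD with a constant step size already converges to it, with no step-size decay.

import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 10
A = rng.normal(size=(n, d))
x_star = rng.normal(size=d)
b = A @ x_star                          # consistent system: every f_i(x) = (A[i] @ x - b[i])**2 shares x_star

x = np.zeros(d)
for _ in range(20000):
    i = rng.integers(n)
    x -= 0.01 * 2 * (A[i] @ x - b[i]) * A[i]   # stochastic gradient step on f_i
print(np.linalg.norm(x - x_star))              # approaches zero despite the constant step size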
\n\nIn fact, it is well-known that SGD with a constant step size converges to the unique minimizer if one assumes that the loss function F is strongly convex and that the variance of the stochastic gradient g_k is bounded by a multiple of the squared norm of the true gradient:\nVar(g_k) <= M ||∇F(x_k)||^2,\nwhich is naturally satisfied if all individual functions share a common minimizer. Therefore, I don’t think the results shown in the paper are that surprising or novel. \n\nWith respect to the empirical evidence, the loss function l_i is assumed to be continuously differentiable with Lipschitz continuous gradients, which is not true for networks using ReLU-like activations. Then how can the paper use models like AlexNet to justify the theory? Also, if what the authors claim is true, then the stochastic gradient would have vanishing variance as it approaches x^*. Can the authors show this empirically?\n\nIn summary, I think this paper is definitely interesting, but its significance is not as great as it would appear.\n\nRef: \n[1] Dinh, L., Pascanu, R., Bengio, S., & Bengio, Y. (2017). Sharp minima can generalize for deep nets. arXiv preprint arXiv:1703.04933." ]
[ -1, 8, -1, 6, -1, -1, -1, 5 ]
[ -1, 4, -1, 4, -1, -1, -1, 5 ]
[ "iclr_2019_BylIciRcYQ", "iclr_2019_BylIciRcYQ", "SJxGnBGW07", "iclr_2019_BylIciRcYQ", "HyxZqj0O27", "B1xsEN2d37", "Hyg7evSKnQ", "iclr_2019_BylIciRcYQ" ]
iclr_2019_BylQV305YQ
Toward Understanding the Impact of Staleness in Distributed Machine Learning
Most distributed machine learning (ML) systems store a copy of the model parameters locally on each machine to minimize network communication. In practice, in order to reduce synchronization waiting time, these copies of the model are not necessarily updated in lock-step, and can become stale. Despite much development in large-scale ML, the evidence on how staleness affects learning efficiency remains inconclusive, mainly because it is challenging to control or monitor staleness in complex distributed environments. In this work, we study the convergence behaviors of a wide array of ML models and algorithms under delayed updates. Our extensive experiments reveal the rich diversity of the effects of staleness on the convergence of ML algorithms and offer insights into seemingly contradictory reports in the literature. The empirical findings also inspire a new convergence analysis of SGD in non-convex optimization under staleness, matching the best-known convergence rate of O(1/\sqrt{T}).
accepted-poster-papers
The reviewers who provided extensive and technically well-justified reviews agreed that the paper is of high quality. The authors are encouraged to make sure that all concerns raised by these reviewers are properly addressed in the paper.
test
[ "B1e7cN_6RX", "rJlVH62mo7", "rJgnEvDxCX", "SkgQs-Yg07", "ByeqAlqeC7", "ryeUVuDg0Q", "B1e0R75T2m", "SJxI4taJ2X" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "LSTM is indeed an interesting piece to add. We have added new results on LSTMs in Appendix A.8 -- we vary the number of layers of LSTMs (see Figure 13) and types of SGD algorithms (see Figure 14), and have observed that (1) staleness impacts deeper network variants more than shallower counterparts, which is consistent with our observation in CNNs and DNNs; (2) different algorithms respond to staleness differently, with SGD and Adam more robust to staleness than Momentum and RMSProp.", "This paper tries to analyze the impact of the staleness on machine learning models in different settings, including model complexity, optimization methods or the number of workers. In this work, they study the convergence behaviors of a wide array of ML models and algorithms under delayed updates, and propose a new convergence analysis of asynchronous SGD method for non-convex optimization.\n\nThe following are my concerns:\n1. \"For CNNs and DNNs, the staleness slows down deeper models much more than shallower counterparts.\" I think it is straightforward. I want to see the theoretical analysis of the relation between model complexity and staleness. \n2. \"Different algorithms respond to staleness very differently\". This finding is quite interesting. Is there any theoretical analysis of this phenomenon? \n3. The \"gradient coherence\" in the paper is not new. I am certain that \"gradient coherence\" is very similar to the \"sufficient direction\" in [1]. \n4. What is the architecture of the network? in the paper, each worker p can communicate with other workers p'. Does it mean that it is a grid network? or it is just a start network. \n5. in the top of page 3, why the average delay under the model is 1/2s +1, isn't it (s-1)/2? \n6. on page 5, \"This is perhaps not surprising, given the fact that deeper models pose more optimization challenges even under the sequential settings.\" why it is obvious opposite to your experimental results in figure 1(a)? Could you explain why shallower CNN requires more iterations to get the same accuracy? it is a little counter-intuitive.\n7. I don't understand what does \"note that s = 0 execution treats each worker’s update as separate updates instead of one large batch in other synchronous systems\" mean in the footnote of page 5.\n\n\nAbove all, this paper empirically analyzes the effect of the staleness on the model and optimization methods. It would be better if there is some theoretical analysis to support these findings.\n\n[1] Training Neural Networks Using Features Replay https://arxiv.org/pdf/1807.04511.pdf\n\n\n===after rebuttal===\nAll my concerns are addressed. I will upgrade the score.\n", "We thank all the reviewers for giving valuable feedback to this paper. We have revised the manuscript to incorporate the suggestions from the comments.\n\nWe highlight the following revisions:\n- We have provided additional discussion and references to recent works presenting empirical evidence consistent with our assumption for Theorem 1.\n- We have redone experiments in Fig. 2 with hyperparameter tuning and updated the writing accordingly.\n- We have included a brief discussion on how Theorem 1 relates model complexity to the larger slowdown from staleness observed in our experiments. \n- We have included reference to [1] which uses the sufficient direction assumption that shares the resemblance to our Definition 1 but differs in certain key aspects. \n- We have made further clarifications throughout the manuscript based on reviewers’ comments. 
\n- We have added new results on LSTMs in Appendix A.8 -- we vary the number of layers of LSTMs (see Figure 13) and the types of SGD algorithms (see Figure 14) and examine how staleness impacts the convergence.\n\n[1] Huo et al. Training Neural Networks Using Features Replay. To appear in NIPS 2018.", "We appreciate the insightful comments and careful review of our work. Our goals in this work are threefold: (1) Through systematic experiments, we explicitly observe staleness and its impact, for the first time to our knowledge, on 12 key models and algorithms. (2) We introduce gradient coherence (GC), which relates to the impact of staleness on gradient-based optimization. GC can be evaluated in real time during the course of convergence, with minimal overhead, and may be used by practitioners to control delays in the system. (3) Based on GC, we provide a new convergence analysis of SGD in non-convex optimization under staleness. With such a broad scope, there are inevitably areas for improvement. We hope that the reviewer will consider the contributions towards making distributed ML more robust under non-synchronous execution as we address the comments:\n\nRegarding fixed hyperparameters: we have redone all experiments in Fig. 2 with a hyperparameter search over the learning rate. We observe the same overall pattern as before: staleness slows down convergence, sometimes quite significantly at high levels of staleness. Furthermore, different algorithms have different sensitivities to staleness and show trends similar to those observed before. For example, SGD with Momentum remains highly sensitive to staleness. Notably, with the learning rate tuning, RMSProp no longer diverges, but is actually more robust to staleness than Adam and SGD with Momentum. We have updated the manuscript to reflect this new observation. While a detailed study of hyperparameter settings is beyond the scope of our work, we will open source our code upon acceptance to make future reproducibility efforts easier and to facilitate the use of simulation studies alongside distributed experiments.\n\nLSTM is indeed an interesting piece to add. We have added new results on LSTMs in Appendix A.8 -- we vary the number of layers of LSTMs (see Figure 13) and the types of SGD algorithms (see Figure 14), and have observed that (1) staleness impacts deeper network variants more than shallower counterparts, which is consistent with our observation on CNNs and DNNs; and (2) different algorithms respond to staleness differently, with SGD and Adam more robust to staleness than Momentum and RMSProp.\n\nWe thank the reviewer for the careful review of our theoretical contributions. We especially appreciate the helpful comments that draw a connection between the low gradient coherence at the early phase of optimization and the annealing of the number of workers. Indeed, the convergence analysis of [1] requires the number of parallel workers to follow a \\sqrt{K} schedule, where K is the number of iterations. Our work addresses the convergence of non-convex, non-synchronous optimization from a very different starting point than [1] by using gradient coherence, and it seems that similar challenges remain at the initial phase of optimization. We have included a discussion of this connection in the revised manuscript. \n\n[1] Xiangru Lian et al. Asynchronous parallel stochastic gradient for nonconvex optimization. In NIPS, 2015.", "We thank the reviewer for the valuable feedback.
\n\nOur work aims to strike a balance between empirical and theoretical approaches to understanding the effects of stale updates. Our goals in this work are threefold: (1) Through systematic experiments, we explicitly observe staleness and its impact, for the first time to our knowledge, on 12 key models and algorithms. (2) We introduce gradient coherence (GC), which relates to the impact of staleness on gradient-based optimization. GC can be evaluated in real time during the course of convergence, with minimal overhead, and may be used by practitioners to control delays in the system. (3) Based on GC, we provide a new convergence analysis of SGD in non-convex optimization under staleness. With such a broad scope, there are inevitably areas for improvement. We hope that the reviewer will consider the contributions towards making distributed ML more robust under non-synchronous execution as we address the comments:\n\n1. The reviewer indeed raised interesting points. Our theory based on gradient coherence relates model complexity to the larger slowdown from staleness through the gradient coherence. Fig. 5 in the manuscript shows that deeper networks generally exhibit lower gradient coherence. Our theorem shows that lower gradient coherence amplifies the effect of staleness s through the factor s/mu^2 in Eq. (1) in the manuscript. We have included a brief discussion of this point in the manuscript. \n\n2. Staleness is known to add implicit momentum to SGD gradients [2]. The Adam optimizer keeps an exponentially decaying average of past gradients to modify the gradient direction, and can be viewed as a version of momentum methods, whose momentum may be affected by staleness by similar reasoning. It is, however, challenging to analyze the convergence of these advanced gradient descent methods even under sequential settings [3], and the treatment under staleness is beyond the scope of our current work. It would be an interesting future direction to create a delay-tolerant version of Adam, similar to AdaRevision [4]. \n\n3. We thank the reviewer for pointing out a reference that we were not aware of. We agree that the sufficient direction assumption in [1] shares some resemblance with our Definition 1. We note that their 'staleness' in the definition of sufficient direction is based on a layer-wise and fixed delay, whereas our staleness is a random variable that is subject to system-level factors such as communication bandwidth. Also, we note that their convergence results in Theorem 1 and Theorem 2 do not capture the impact of staleness, whereas our Theorem 1 explicitly characterizes its impact on the choice of step size and the convergence rate, and also captures the interplay with gradient coherence. We have included the reference in our updated manuscript to provide further context.\n\n4. Though we use a peer-to-peer topology in our experiments, our delay pattern is agnostic to the underlying communication network topology. In practice, it is more common to implement an intermediate aggregation layer such as a parameter server [5] to reduce network traffic.\n\n5. We thank the reviewer for pointing out the error. The delay should be r ~ Categorical(0, 1, …, s), which gives an expected delay of 0.5s + 1. We have corrected this in the updated manuscript.\n\n6. This is an important point to clarify. With SGD, ResNet8's final test accuracy is about 73% in our setting, while ResNet20's final test accuracy is close to 75%.
Therefore, the deeper ResNet can reach the same model accuracy in the earlier part of the optimization path, resulting in a lower number of batches in Fig. 1(a). However, when the convergence time is normalized by the non-stale (s=0) value, we observe that the impact of staleness is higher on deeper models. We have included this clarification in the updated manuscript. \n\n7. Many synchronous systems use a batch size linear in the number of workers (e.g., [6]). We preserve the same batch size, and more workers simply make more updates in each iteration. We have reworded the footnote for better clarity.\n\n[1] Training Neural Networks Using Features Replay. https://arxiv.org/pdf/1807.04511.pdf\n[2] Ioannis Mitliagkas et al. Asynchrony begets momentum, with an application to deep learning. \n[3] Sashank J. Reddi, Satyen Kale, and Sanjiv Kumar. On the convergence of Adam and beyond. International Conference on Learning Representations, 2018.\n[4] H. Brendan McMahan and Matthew Streeter. Delay-Tolerant Algorithms for Asynchronous Distributed Online Learning. NIPS 2014.\n[5] M. Li, D. G. Andersen, J. Park, A. J. Smola, A. Ahmed, V. Josifovski, J. Long, E. J. Shekita, and B.-Y. Su. Scaling distributed machine learning with the Parameter Server. In Proceedings of OSDI, 2014. \n[6] P. Goyal et al., “Accurate, large minibatch SGD: training ImageNet in 1 hour,” CoRR, vol. abs/1706.02677, 2017.", "We thank the reviewer for the comments. Our goals in this work are threefold: (1) Through systematic experiments, we explicitly observe staleness and its impact, for the first time to our knowledge, on 12 key models and algorithms. (2) We introduce gradient coherence (GC), which relates to the impact of staleness on gradient-based optimization. GC can be evaluated in real time during the course of convergence, with minimal overhead, and may be used by practitioners to control delays in the system. (3) Based on GC, we provide a convergence analysis of SGD in non-convex optimization under staleness. With such a broad scope, there are inevitably areas for improvement. We hope that the reviewer will consider the contributions towards making distributed ML more robust under non-synchronous execution as we address the comments:\n\nRegarding the reviewer's first comment, we would like to clarify that our Definition 1 *does not* require all the gradients to point in close directions along the optimization path. Instead, it only requires the gradients to be positively correlated over a small number of iterations s, which is often very small (e.g., <10 in our experiments). Therefore, Definition 1 is not a global requirement on the optimization path. We have clarified Definition 1 in the revision.\n\nWe want to point out that our own results and a number of recent studies show strong evidence that SGD in practical neural network training encourages positive gradient coherence, e.g., Fig. 4(a)(b) and Fig. 5 in our manuscript, [1], [3], etc. In particular, [1] shows that the optimization trajectories of SGD and Adam are generally smooth, which is also observed in [3] (e.g., Fig. 4 in [3]). These findings suggest that the direction of the optimization trajectory changes slowly during convergence, which justifies our Definition 1 even if the gradient direction may oscillate globally [3].
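To make this concrete, gradient coherence can be monitored online at negligible cost. A minimal sketch (a simplified cosine-similarity proxy with illustrative names, not our exact estimator):

import numpy as np
from collections import deque

class CoherenceMonitor:
    # Minimum cosine similarity between the current stochastic gradient and the
    # gradients of the previous s iterations -- a simple proxy for the positive
    # correlation required by Definition 1.
    def __init__(self, s):
        self.history = deque(maxlen=s)

    def update(self, grad):
        g = grad / (np.linalg.norm(grad) + 1e-12)
        coherence = min((float(g @ h) for h in self.history), default=1.0)
        self.history.append(g)
        return coherence

monitor = CoherenceMonitor(s=8)
rng = np.random.default_rng(1)
for k in range(20):
    fake_grad = rng.normal(size=100) + 3.0          # stand-in gradients with a shared drift
    print(k, round(monitor.update(fake_grad), 3))   # stays positive when gradients align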
Such findings are perhaps not surprising, because the loss surfaces of shallow networks and of deep networks with skip connections are dominated by large, flat, nearly convex attractors around the critical points [1][2]. This indicates that the degree of non-convexity is mild around critical points. With small batch sizes (32) and skip connections for deep networks in our experiments, our observation of gradient coherence is therefore consistent with the experimental evidence in the existing literature. \n\nRegarding the reference (Choromanska et al. 2014) mentioned by the reviewer, even though it shows the (layer-wise) structure of critical points in simple networks with one hidden layer, more recent works, including those highlighted above, have revealed additional curvature information around critical points and the optimization dynamics of many complex networks. We therefore sincerely ask the reviewer to reevaluate our work in light of this empirical evidence, which is consistent with our findings. As pointed out by Reviewer 3, a similar assumption has been made in [4]. We have included these references and this discussion in our latest revision.\n\nRegarding fixed hyperparameters: we have redone all experiments in Fig. 2 with a hyperparameter search over the learning rate. We observe the same overall pattern as before: staleness slows down convergence, sometimes quite significantly at high levels of staleness. Furthermore, different algorithms have different sensitivities to staleness and show trends similar to those observed before. For example, SGD with Momentum remains highly sensitive to staleness. Notably, with the learning rate tuning, RMSProp no longer diverges, but is actually more robust to staleness than Adam and SGD with Momentum. We have updated the manuscript to reflect this new observation. \n\nFinally, we fully understand the reviewer's concern about reproducibility. We believe that our simulation work provides a well-controlled environment for future research on distributed machine learning systems. To make future reproducibility efforts easier and to facilitate the use of simulation studies alongside distributed experiments, we will open source our code upon acceptance. \n\n\n[1] Li et al. Visualizing the loss landscape of neural nets. To appear in NIPS 2018.\n[2] Nitish Shirish Keskar et al. On large-batch training for deep learning: Generalization gap and sharp minima. In ICLR, 2017.\n[3] Eliana Lorch. Visualizing deep network training trajectories with PCA. In ICML Workshop on Visualization for Deep Learning, 2016.\n[4] Huo et al. Training Neural Networks Using Features Replay. To appear in NIPS 2018.", "The paper addresses the important issue with asynchronous SGD: stale gradients.\n\nConvergence is proven under an assumption on the path followed by the optimization walk. Namely, the gradients are assumed to all point in similar directions along the walk. My major concern is that this is a strong (if not completely wrong) hypothesis in the practical case of deep learning, with high-dimensional models and totally non-convex loss functions (see e.g. Choromanska et al. 2014).\n\nThe paper illustrates the convergence claims empirically, but only under fixed hyper-parameters, which perfectly illustrates the recent concerns about the reproducibility crisis in ML.", "This paper presents an empirical and theoretical study of the convergence of asynchronous stochastic gradient descent training when there are delays due to the asynchrony.
The paper can be neatly split into two parts: a simulation study and a theoretical analysis.\n\nThe simulation study compares, under fixed hyperparameters, the behavior of distributed training under different simulated levels of delay on different problems and different model architectures. Overall the results are very interesting, but the simulation could have been more thorough. Specifically, the same hyperparameter values were used across batch sizes and across different values of the distributed delay. Some algorithms failed to converge under some settings and others experienced dramatic slowdowns, but without a careful study of hyperparameters it's hard to tell whether these behaviors are normal or outliers. Also, it would have been interesting to see a recurrent architecture there, as I've heard much anecdotal evidence about the robustness of RNNs and LSTMs to asynchronous training. I strongly advise the authors to redo the experiments with some hyperparameter tuning for different levels of staleness to make these results more believable.\n\nThe theoretical analysis identifies a quantity called gradient coherence and proves that a learning rate based on the coherence can lead to an optimal convergence rate even under asynchronous training. The proof is correct (I checked the major steps but not all details), and it's sufficiently different from the analysis of Hogwild-style algorithms to be of independent interest. The paper also shows the empirical behavior of the gradient coherence statistic during model training; interestingly, this also seems to explain the commonly held heuristic that, to make asynchronous training work, one needs to slowly anneal the number of workers (coherence is much worse in the earlier than in the later phases of training). This quantity is also interesting because it's somewhat independent of the variance of the stochastic gradient across minibatches (it's the time variance, in a way), and further analysis might show interesting results." ]
[ -1, 7, -1, -1, -1, -1, 4, 9 ]
[ -1, 5, -1, -1, -1, -1, 5, 4 ]
[ "SJxI4taJ2X", "iclr_2019_BylQV305YQ", "iclr_2019_BylQV305YQ", "SJxI4taJ2X", "rJlVH62mo7", "B1e0R75T2m", "iclr_2019_BylQV305YQ", "iclr_2019_BylQV305YQ" ]
iclr_2019_ByldlhAqYQ
Transfer Learning for Sequences via Learning to Collocate
Transfer learning aims to address data sparsity in a specific domain by leveraging information from another domain. Given a sequence (e.g., a natural language sentence), transfer learning, usually enabled by recurrent neural networks (RNNs), transfers information sequentially. An RNN uses a chain of repeating cells to model sequence data. However, previous studies of neural-network-based transfer learning simply transfer information across whole layers, which is infeasible for seq2seq and sequence labeling tasks. Meanwhile, such layer-wise transfer learning mechanisms also lose the fine-grained cell-level information from the source domain. In this paper, we propose aligned recurrent transfer (ART) to achieve cell-level information transfer. ART works in a recurrent manner in which different cells share the same parameters. Besides transferring the information at the corresponding position, ART transfers information from all collocated words in the source domain. This strategy enables ART to capture cross-domain word collocations in a more flexible way. We conducted extensive experiments on both sequence labeling tasks (POS tagging, NER) and sentence classification (sentiment analysis). ART outperforms the state-of-the-art approaches in all experiments.
accepted-poster-papers
This paper presents a method for transferring source information via the hidden states of recurrent networks. The transfer happens via an attention mechanism that operates between the target and the source. Results on two tasks are strong. I found this paper similar in spirit to HyperNetworks (David Ha, Andrew Dai, Quoc V. Le, ICLR 2017), since there, too, the weights of one network are generated dynamically given another network, although that method did not use an attention mechanism. However, the reviewers thought that there is merit in this paper (albeit they pointed the authors to other related work) and that the empirical results are solid.
train
[ "rJgs-aD6Am", "S1g0uXesAX", "r1xwngaY3Q", "SklbRJ0_37", "ByeMKYAtCX", "rkxdELsKRm", "H1ltRziK0X", "rJeHRZoYC7", "Hye7v16q3m" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "For your detailed writing advices, we have rewritten the two sentences accordingly.\n\n1.\tWe rewrote the sentence \n“ART discriminates between information of the corresponding position and that of all positions with collocated words.” \nto \n“For each word in the target domain, ART learns to incorporate two types of information from the source domain: (a) the hidden state corresponding to the same word, and (b) the hidden states for all words in the sequence.”\n\n2.\tWe rewrote the sentence \n“By using the attention mechanism (Bahdanau et al., 2015), we compute the correlation for each word pair.” \nto \n“ART learns to incorporate information (b) based on the attention scores (Bahdanau et al., 2015) of all words from the source domain.”\n\nFor more writing improvements, please refer to the previous comment or the paper.", "Thank you for your encouraging comments.\nWe agree that there is room for writing of the original submission. We have been improving the writing quality. We believe that the latest version is much clearer now.\n\nWe made the following revisions to improve the writing:\n1. We gave more descriptions of how ART works.\ni. [Learn to Collocate and Transfer] In section 1, we rewrote paragraph of “learn to collocate and transfer”. We highlighted how ART incorporates two types of information and uses the attention mechanism to capture the long-term cross-domain dependency.\nii. [Architecture] In section 2, we added a paragraph to describe the architecture of ART. We elaborated how it incorporates the information of the source domain from the pre-trained model.\niii. [Model training] In section 2, we rewrote the paragraph of model training. We highlighted the model pre-training procedure and fine-tuning procedure of ART.\n2. We added the interpretations and examples for some confusing notions, such as “level-wise transfer learning”, “cell-level transfer learning”, and “collocate”.\n3. We abandoned or reduced some vague words or phrases, such as “word correlation”, “collocate”. The revised version uses more precise expressions, such as “dependencies between two words”, “incorporate information by their attention score”.\n4. We rewrote the related work section. We compared ART with BERT and ELMo. The latter two approaches also use pre-trained models for downstream tasks.\n5. We fixed some typos.", "The proposed method is suitable for many NLP tasks, since it can handle the sequence data.\n\nI find it difficult to follow through the model descriptions. Perhaps a more descriptive figures would make this easier to follow, I feel that the ART model is a very strait forward and it can be easily described in much simpler and less exhausting (sorry for the strong word) way, while there is nothing wrong with being as elaborating as you are, I feel that all those details belong in an appendix. \nCan you please explain the exact learning process?\nI didn’t fully understand the exact way of collocations, you first train on the source domain and then use the trained source network when training in the target domain with all the collocated words for each training example? I deeply encourage you to improve the model section for future readers. 
\nIn contrast to the model section, the related work and experimental settings sections are very thin.\nThe experimental setup for the sentiment analysis experiments is quite rare in the transfer learning/domain adaptation landscape: having equal amounts of labeled data from both the source and target domains is not very realistic, in my humble opinion.\nA more realistic setup is unsupervised domain adaptation (as in the DANN and MSDA-DAN papers) or minimally supervised domain adaptation (as you did in your POS and NER experiments).\n\nIn addition to the LSTM baseline (which is trained with target data only), I think that an LSTM trained on data from both the source and target domains is required to truly understand ART's gains – this goes for the POS and NER tasks as well.\nThe POS and NER experiments could use some additional baselines for further comparison, for example:\nhttp://www.aclweb.org/anthology/Q14-1002\nhttps://hornhehhf.github.io/hangfenghe/papers/14484-66685-1-PB.pdf\n\nI am not sure I understand the “cell level transfer” claim. Did you mean that you are the first to apply inner LSTM/RNN cell transfer, or that you are the first to apply word-level fine-grained transfer? The latter has already been done:\nhttps://arxiv.org/pdf/1802.05365.pdf\nhttps://ink.library.smu.edu.sg/cgi/viewcontent.cgi?article=4531&context=sis_research\nhttp://www.aclweb.org/anthology/N18-1112\nhttps://openreview.net/pdf?id=rk9eAFcxg\n", "This paper presents the following approach to domain adaptation. Train a source domain RNN. While doing inference on the target domain, first run the source domain RNN on the sequence. Then, while running the target domain RNN, set the hidden state at time step i, h^t_i, to be a function 'f' of h^t_{i-1} and information from the source domain, \psi_i; \psi_i is computed as a convex combination of the state of the source domain RNN, h^s_i, and an attention-weighted average of all the states h^s_{1...n}. So in effect, the paper transfers information from each of the source domain cells -- the cell at time step i and all the \"collocated\" cells (collocation being defined in terms of attention). This idea is then extended in a straightforward way to LSTMs as well. \n \nDoing \"cell-level\" transfer enables more information to be transferred, according to the authors, but it comes at a higher computational cost, since we need to do O(n^2) computations for each cell.\n\nThe authors show that this beats a variety of baselines for classification tasks (sentiment) and for a sequence tagging task (POS tagging over Twitter).\n\nPros:\n1. The idea makes sense and the experimental results are solid. \n\nCons:\n1. Some questions around generalization are not clearly answered. E.g., how are the transfer parameters of the function 'f' (which controls how much source information is transferred to the target) trained? If the function 'f' and the target RNN are trained on target data, why does 'f' not overfit to only selecting information from the target domain? Would something like dropping information from the target domain help?\n\n2. Why not also compare with a simple algorithm that transfers parameters from the source to the target domain? Another simple baseline is to just train the final prediction function (softmax or sigmoid) on the concatenated source and target hidden states. Why are these not compared against? Also, including the performance of simple baselines like word2vec/BoW is always a good idea, especially on the sentiment data, which is very commonly used and widely cited. \n\n3.
Experiments: the authors cite the hierarchical attention transfer work of Li et al. (https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/download/16873/16149) and claim their approach is better, but do not compare with it in the experiments. Why?\n\nWriting:\nThe writing is quite confusing in places, and this is the biggest problem with this paper. E.g.\n\n1. The authors use the word \"collocated\" everywhere, but it is not clear at all what they mean. This makes the introduction quite confusing to understand. I assumed it to mean words in the target sentences that are strongly attended to. Is this correct? However, on page 4, they claim \"The model needs to be evaluated O(n^2) times for each sentence pair.\" -- what is meant by sentence pair here? It almost leads me to think that they consider all source and target sentence pairs. This is quite confusing. \n\n2. The authors keep claiming that \"layer-wise transfer learning mechanisms lose the fine-grained cell-level information from the source domain\", but it is not clear exactly what they mean by layer-wise here. Do they mean transferring the information from source cell i to target cell i as it is? In the experiments section on LWT, the authors claim that \"More specifically, only the last cell of the RNN layer transfers information. This cell works as in ART. LWT only works for sentence classification.\" Why is it not possible to train a softmax over both the source hidden state and the target hidden state for POS tagging? \n\nNits:\npage 4, line 1: \"i'th cell in the source domain\" -> \"i'th cell in the target domain\"; \"j'th cell in target\" -> \"j'th cell in source\".\n\n\nRevised: increased score after author response.\n", "Thanks for providing the latest set of results. Your experimental results are quite solid, and so I am improving my score. However, I am not giving it very high scores, because I still feel a little hesitant about the writing quality in this paper. The technical writing is still subpar. E.g.\n1) \"ART discriminates between information of the corresponding position and that of all positions with collocated words.\" => you probably want to say \"ART incorporates the hidden state representation corresponding to the same position and a function of the hidden states for all other words weighted by their attention scores\"\n2) \"By using the attention mechanism (Bahdanau et al., 2015), we compute the correlation for each word pair\" => correlation has a very specific meaning, and it is confusing if you use it here.\n\nThere are several such examples.", "Thank you for your insightful and supportive comments. We have made the following revisions: (1) We added two baselines according to your comments. The results further justify the effectiveness of ART. (2) We added a new experiment on minimally supervised domain adaptation in Table 3. ART still outperforms all the competitors by a large margin. (3) We clarified the ART model and the model training process in the revised paper. We give more details below:\n\n== Writing ==\n1. High-level description of the ART model. \nWe have added the following description of the ART model in section 2.\n“The source domain and the target domain share an RNN layer, through which the common information is transferred. We pre-train the neural network of the source domain, so the shared RNN layer represents the semantics of the source domain. The target domain has an additional RNN layer. Each cell in it accepts transferred information through the shared RNN layer.
Such information consists of (1) the information of the same word in the source domain (the red edge in Figure 2) and (2) the information of all its collocated words (the blue edges in Figure 2). ART uses attention to decide the weights of all candidate collocations. The RNN cell controls the weights between (1) and (2) with an update gate.”\n\n2. Model training. We added more details on model training in section 2.\nWe first pre-train the parameters of the source domain on its training samples. Then we fine-tune the pre-trained model with the additional layers of the target domain. The fine-tuning uses the training samples of the target domain. All parameters are jointly fine-tuned.\n\n3. Related work.\nWe have rewritten the related work section. We compare with other cell-level transfer learning approaches and pre-trained models.\n\n== Innovation of cell-level transfer ==\nWe agree that some previous transfer learning approaches also consider cell-level transfer, but none of them considers word collocations. As a pre-trained model, ELMo uses bidirectional LSTMs to generate contextual features. Instead, ART uses an attention mechanism in the RNN so that each cell in the target domain directly accesses information from all cells in the source domain. We added more details in the related work section.\n\n== Baselines ==\nWe added two baselines, LSTM-u and FLORS, according to your comments. LSTM-u uses a standard LSTM and is trained on the union of the data from the source and the target domain. FLORS is a domain adaptation model for POS tagging (http://www.aclweb.org/anthology/Q14-1002). Their results are shown in Table 2 and Table 5. ART outperforms LSTM-u in almost all settings by a large margin. Note that FLORS is independent of the target domain. If the training corpus of the target domain is quite scarce (Twitter/0.01), FLORS performs better. But with richer training data in the target domain (Twitter/0.1), ART outperforms FLORS by a large margin.\n\nTable 2: Classification accuracy on the Amazon review dataset.\nSource\t\tTarget\t\tLSTM-u\tART\nBooks\t\tDVD\t\t0.770 \t0.870 \nBooks\t\tElectronics\t0.805 \t0.848 \nBooks\t\tKitchen\t\t0.845 \t0.863 \nDVD\t\tBooks\t\t0.788 \t0.855 \nDVD\t\tElectronics\t0.788 \t0.845 \nDVD\t\tKitchen\t\t0.823 \t0.853 \nElectronics\tBooks\t\t0.740 \t0.868 \nElectronics\tDVD\t\t0.753 \t0.855 \nElectronics\tKitchen\t\t0.863 \t0.890 \nKitchen\t\tBooks\t\t0.760 \t0.845 \nKitchen\t\tDVD\t\t0.758 \t0.858 \nKitchen\t\tElectronics\t0.815 \t0.853 \nAverage\t\t\t\t0.792 \t0.858\n\nTable 5: Performance on POS tagging.\nTask\t\t\tSource\tTarget\t\tFLORS\tART\nPOS Tagging\tPTB\t\tTwitter/0.1\t0.763\t0.859\nPOS Tagging\tPTB\t\tTwitter/0.01\t0.763\t0.658\n\n== Experimental settings ==\nBased on your comment, we added a new experiment on minimally supervised domain adaptation in sentence classification. For each target domain in the Amazon review dataset, we combined the training/development data of the remaining three domains as the source domain. We show the results in Table 3. ART outperforms the competitors by a large margin.
This verifies its effectiveness in the setting of minimally supervised domain adaptation.\n\nTable 3: Classification accuracy with scarce training samples of the target domain.\nTarget\t\tLSTM\tLSTM-u\tCCT\t\tLWT\tHATN\tART\nBooks\t\t0.745 \t0.813 \t0.848 \t0.808 \t0.820 \t0.895 \nDVD\t\t0.695 \t0.748 \t0.870 \t0.770 \t0.828 \t0.875 \nElectronics\t0.733 \t0.823 \t0.848 \t0.818 \t0.863 \t0.865 \nKitchen\t\t0.798 \t0.840 \t0.860 \t0.840 \t0.833 \t0.870 \nAverage\t\t0.743 \t0.806 \t0.856 \t0.809 \t0.836 \t0.876", "Thank you for your insightful and supportive comments. We have made the following revisions: (1) We added two baselines according to your comments. The results further justify the effectiveness of ART. (2) We clarified “collocate”, “layer-wise transfer learning”, “model training”, and their related issues. We give more details below:\n\n1. Regarding computational cost:\nThe network depth only increases by 2 if we ignore the detailed operations (e.g., gates). One increase is caused by collocating and transferring; the other is caused by merging the original input, the previous cell’s hidden state, and the transferred information. So the time cost does not increase much.\n\n2. Regarding Con 1: why does 'f' not overfit to only selecting information from the target domain? \nYour understanding is correct: the function 'f' will overfit to the target domain. All parameters are jointly fine-tuned on the training samples of the target domain. Nevertheless, the pre-training on the source domain still helps, because it provides representations of the source domain. Another recent successful example of using pre-trained models is BERT (Devlin et al., 2018), which also fine-tunes all the parameters for specific tasks. \nWe also rewrote the model training part in section 2 to make it clearer:\n“We first pre-train the parameters of the source domain on its training samples. Then we fine-tune the pre-trained model with the additional layers of the target domain. The fine-tuning uses the training samples of the target domain. All parameters are jointly fine-tuned.”\n\nRegarding Con 2: simpler baselines.\nFirst, we added a baseline model, LSTM-s, which directly applies the parameters trained on the source domain to the target domain. The results are shown in Table 2. ART outperforms this baseline by a large margin.\n\nTable 2: Classification accuracy on the Amazon review dataset.\nSource\t\tTarget\t\tLSTM-s\tHATN\tART\nBooks\t\tDVD\t\t0.718\t0.813\t0.870\nBooks\t\tElectronics\t0.678\t0.790\t0.848\nBooks\t\tKitchen\t\t0.678\t0.738\t0.863\nDVD\t\tBooks\t\t0.730\t0.798\t0.855\nDVD\t\tElectronics\t0.663\t0.805\t0.845\nDVD\t\tKitchen\t\t0.708\t0.765\t0.853\nElectronics\tBooks\t\t0.648\t0.763\t0.868\nElectronics\tDVD\t\t0.648\t0.788\t0.855\nElectronics\tKitchen\t\t0.785\t0.808\t0.890\nKitchen\t\tBooks\t\t0.653\t0.740\t0.845\nKitchen\t\tDVD\t\t0.678\t0.738\t0.858\nKitchen\t\tElectronics\t0.758\t0.850\t0.853\nAverage\t\t\t\t0.695\t0.783\t0.858\n\nSecond, you suggested directly concatenating the hidden states of the source and the target domains. In fact, we already proposed a very similar baseline, CCT. The only difference is that CCT uses a gate to merge the two values instead of concatenation. ART outperforms CCT in all cases.\nThird, we already used 100d GloVe vectors to initialize ART and all the ablations proposed in this paper. Pre-trained word embeddings are also widely used by its competitors (e.g., AMN and HATN). We have added this description in section 4.\n\n\nRegarding Con 3.
Experiments: the hierarchical attention transfer work of Li et al.\nWe added a comparison with HATN (Li et al. 2018). The results are shown in Table 2 and Table 3. We used the source code and hyperparameters of Li et al. (2018) from the authors’ GitHub. We changed its labeled training samples from 5600 to 1400, as with ART.\n\nThe results are shown in Table 2 above. ART still beats the baseline by a large margin. This verifies its effectiveness.\n\n== Writing ==\nRegarding Writing 1.\nFirst, for the meaning of “collocate”, we added more explanations and took Figure 1 as an example in section 1. \n“Here “collocate” indicates that a word's semantics can have a long-term dependency on other words. To understand a word in the target domain, we need to precisely represent its collocated words from the source domain. We learn from the collocated words via the attention mechanism. For example, in Figure 1, “hate” is modified by the adverb “sometimes”, which implies the act of hating is not serious. But the “sometimes” in the target domain is trained insufficiently. We need to transfer the semantics of “sometimes” in the source domain to understand the implication.”\nSecond, to avoid the ambiguity of “sentence pair”, we rewrote the description in the revised version.\n“The model needs to be evaluated O(n^2) times for each sentence, due to the enumeration of n indexes for the source domain and n indexes for the target domain. Here n denotes the sentence length.”\n\nRegarding Writing 2.\n“Layer-wise transfer learning” indicates that the approach represents the whole sentence by a single vector, so the transfer mechanism is only applied to that vector. We cannot apply layer-wise transfer learning algorithms to sequence labeling tasks.\nWe added these descriptions in section 1. ", "Thank you for your insightful and supportive comments. We have made the following revisions: (1) We added two baselines based on your comments. The results further justify the effectiveness of ART. (2) We added clarifications of “layer-wise transfer learning”, “cell-level transfer learning”, and “collocate” in section 1. We give more details below:\n\n== Experiments ==\nWe added two baselines, LSTM-u and HATN, according to your comments. LSTM-u uses a standard LSTM and is trained on the union of the data from the source domain and the target domain. The HATN model is from the paper \"Hierarchical Attention Transfer Network for Cross-domain Sentiment Classification\" (Li et al. 2018). We used the source code and hyperparameters of Li et al. (2018) from the authors’ GitHub. We changed its labeled training samples from 5600 to 1400, as with ART.\n\nThe results are shown in Table 2. ART still beats the baselines by a large margin. This verifies its effectiveness.\n\nTable 2: Classification accuracy on the Amazon review dataset.\nSource\t\tTarget\t\tLSTM-u\tHATN\tART\nBooks\t\tDVD\t\t0.770\t0.813\t0.870\nBooks\t\tElectronics\t0.805\t0.790\t0.848\nBooks\t\tKitchen\t\t0.845\t0.738\t0.863\nDVD\t\tBooks\t\t0.788\t0.798\t0.855\nDVD\t\tElectronics\t0.788\t0.805\t0.845\nDVD\t\tKitchen\t\t0.823\t0.765\t0.853\nElectronics\tBooks\t\t0.740\t0.763\t0.868\nElectronics\tDVD\t\t0.753\t0.788\t0.855\nElectronics\tKitchen\t\t0.863\t0.808\t0.890\nKitchen\t\tBooks\t\t0.760\t0.740\t0.845\nKitchen\t\tDVD\t\t0.758\t0.738\t0.858\nKitchen\t\tElectronics\t0.815\t0.850\t0.853\nAverage\t\t\t\t0.792\t0.783\t0.858\n\n\n== Writing ==\nWe added more detailed explanations and took Figure 1 as an example to clarify the confusing parts in section 1.\n\n1.
Layer-wise transfer learning: \n“Layer-wise transfer learning” indicates that the approach represents the whole sentence by a single vector. So the transfer mechanism is only applied to the vector. \n\n2. Cell-level transfer learning:\nART uses cell-level information transfer, which means each cell is affected by the transferred information. For example, in figure 1, the state of “hate” in the target domain is affected by “sometimes” and ”hate” in the source domain. \n\n3. Collocate: \nWe use the term “collocate” to indicate that a word's semantics can have long-term dependency on another word. To understand a word in the target domain, we need to precisely capture and represent its collocated words from the source domain. We learn from the collocated words via the attention mechanism. For example, in figure 1, “hate” is modified by the adverb “sometimes”, which implies the act of hating is not serious. But “sometimes” in the target domain is trained insufficiently. We need to transfer the semantics of “sometimes”.", "== Quality of results ==\nThis paper's empirical results are its main strength. They evaluate on a well-known benchmark for transfer learning in text classification (the Amazon reviews dataset of Blitzer et al 2007), and improve by a significant margin over recent state-of-the-art methods. They also evaluate on several sequence tagging tasks and achieve good results.\n\nOne weakness of the empirical results is that they do not compare against training a model on the union of the source and target domain. I think this is very important to compare against.\n\nNote: the authors cite a paper in the introduction \"Hierarchical Attention Transfer Network for Cross-domain Sentiment\nClassification\" (Li et al 2018) which also achieves state of the art results on the Amazon reviews dataset, but do not compare against it. At first glance, Li et al 2018 appear to get better results. However, they appear to be training on a larger amount of data for each domain (5600 examples, rather than 1400). It is unclear to me why their evaluation setup is different, but some clarification about this would be helpful.\n\n== Originality ==\nA high level description of their approach:\n1. Train an RNN encoder (\"source domain encoder\") on the source domain\n2. On the target domain, encode text using the following strategy:\n - First, encode the text using the source domain encoder\n - Then, encode the text using a new encoder (a \"target domain encoder\") which has the ability to attend over the hidden states of the source domain encoder at each time step of encoding.\n\nThey also structure the target domain encoder such that at each time step, it has a bias toward attending to the hidden state in the source encoder at the same position.\n\nThis has a similar flavor to greedy layer-wise training and model stacking approaches. In that regard, the idea is not brand new, but feels well-applied in this setting.\n\n== Clarity ==\nI felt that the paper could have been written more clearly. The authors set up a comparison between \"transfer information across the whole layers\" vs \"transfer information from each cell\" in both the abstract and the intro, but it was unclear what this distinction was referring to until I reached Section 4.1 and saw the definition of Layer-Wise Transfer.\n\nThroughout the abstract and intro, it was also unclear what was meant by \"learning to collocate cross domain words\". 
After reading the full approach, I see now that this simply refers to the attention mechanism which attends over the hidden states of the source domain encoder.\n\n== Summary ==\nThis paper has good empirical results, but I would really like to see a comparison against training a model on the union of the source and target domain. I think superior results against that baseline would increase my rating for this paper.\n\nI think the paper's main weakness is that the abstract and intro are written in a way that is somewhat confusing, due to the use of unconventional terminology that could be replaced with simpler terms." ]
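As an editorial aside on the attention mechanism described in the review above: the following is a simplified sketch of attending over source-domain hidden states from a target-domain cell. This is our own illustration, not the authors' implementation; the shapes, names, and dot-product scoring are assumptions.

```python
import numpy as np

def attend(target_state, source_states):
    """Soft-attend over source-domain hidden states for one target-domain cell."""
    scores = source_states @ target_state              # (n,) dot-product scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                           # softmax over source positions
    return weights @ source_states                     # transferred summary vector

rng = np.random.default_rng(0)
source_states = rng.normal(size=(12, 8))               # n=12 source cells, hidden dim 8
target_state = rng.normal(size=8)
transferred = attend(target_state, source_states)      # would be fed into the target cell
```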
[ -1, -1, 5, 6, -1, -1, -1, -1, 6 ]
[ -1, -1, 4, 4, -1, -1, -1, -1, 3 ]
[ "ByeMKYAtCX", "ByeMKYAtCX", "iclr_2019_ByldlhAqYQ", "iclr_2019_ByldlhAqYQ", "H1ltRziK0X", "r1xwngaY3Q", "SklbRJ0_37", "Hye7v16q3m", "iclr_2019_ByldlhAqYQ" ]
iclr_2019_ByleB2CcKm
Learning Procedural Abstractions and Evaluating Discrete Latent Temporal Structure
Clustering methods and latent variable models are often used as tools for pattern mining and discovery of latent structure in time-series data. In this work, we consider the problem of learning procedural abstractions from possibly high-dimensional observational sequences, such as video demonstrations. Given a dataset of time-series, the goal is to identify the latent sequence of steps common to them and label each time-series with the temporal extent of these procedural steps. We introduce a hierarchical Bayesian model called Prism that models the realization of a common procedure across multiple time-series, and can recover procedural abstractions with supervision. We also bring to light two characteristics ignored by traditional evaluation criteria when evaluating latent temporal labelings (temporal clusterings) -- segment structure, and repeated structure -- and develop new metrics tailored to their evaluation. We demonstrate that our metrics improve interpretability and ease of analysis for evaluation on benchmark time-series datasets. Results on benchmark and video datasets indicate that Prism outperforms standard sequence models as well as state-of-the-art techniques in identifying procedural abstractions.
accepted-poster-papers
While the reviews of this paper were somewhat mixed (7, 6, 4), I ended up favoring acceptance because of the thorough author responses and the novelty of what is being examined. The reviewer with a score of 4 argues that this work is not a good fit for ICLR; however, although tailoring new evaluation metrics is not a commonly explored area, I do not believe it falls outside the range of ICLR's interests, and its rarity also makes the contribution more unique.
test
[ "ryxNNJIs2m", "rkgO_O9zxE", "HyxQ798507", "Ske7htL5AX", "B1x2lF85Rm", "SyxHYuUcAQ", "HygJ8OLcCQ", "ryxCFI8907", "Hyx08Tcp2Q", "BkgfMLD42X" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This is a hybrid paper, making contributions on two related fronts:\n1. the paper proposes a performance metric for sequence labeling, capturing salient qualities missed by other metrics, and\n2. the paper also proposes a new sequence labeling method based on inference in a hierarchical Bayesian model, focused on simultaneously labeling multiple sequences that have the same underlying procedure but with varying segment lengths.\n\n\nThis paper is not a great topic fit for ICLR: it's primarily about a hand-designed performance metric for sequence labeling and a hierarchical Bayesian model with Gaussian observations and fit with Gibbs sampling in a full-batch setting. The ICLR 2019 reviewer guidelines suggest \"Ask yourself: will a substantial fraction of ICLR attendees be interested in reading this paper?\" and based on my understanding of the ICLR audience I suspect not. Based on looking at past ICLR proceedings, this paper's topic and collection of techniques is not in the ICLR mainstream (though it's not totally unrelated). The authors could convince me that I'm mistaken by pointing out closely related ICLR papers (e.g. with a similar mix of techniques in their methods, or similarly proposing a hand-designed performance metric); as far as I can tell, none of the papers cited in the references are from ICLR, but rather from e.g. NIPS, AISTATS, and IEEE TPAMI, which I believe would be better fits for this kind of work.\n\nOne way to make this work more relevant to the ICLR audience would be to add feature learning (especially based on neural network architectures). That might also entail additional technical contributions, like how to fit models like these in the minibatch setting (where the current Gibbs sampling method might not apply).\n\n\nOn the proposed performance metric, the discussion of existing metrics as they apply to the example in Fig 3 was really helpful. (I assume, but didn't check, that the authors' characterization of the published performance metrics is accurate, e.g. \"no traditional clustering criteria can distinguish C_2 from C_3\".) The proposed metric seems to help.\n\nBut it's a bit complicated, with several free design decisions involved (e.g. choosing the scoring function \\mathcal{H} in Sec 3.1, the choice of conditional entropy H in Sec 3.2, the choice of \\beta in Sec 3.3, the choice of the specific algebraic forms of RSS, LASS, SSS, and TSS). Certainly the proposed metrics incorporate the kind of information that the authors argue can be important, but the design details of how that information is summarized into a single number aren't really explored or weighed against alternative designs choices. \n\nIf a primary aim of this paper is to propose a new performance metric, and presumably to have it catch on with the rest of the field, then the contribution would be much greater if the design space was clearly articulated, alternatives were considered, and multiple proposals were validated. Validation could be done with human labelers ranking the intuitive 'goodness' of labeling results (and then compared to rankings derived from the proposed performance metrics), and with comparing how the metrics correlate with performance on various downstream tasks.\n\nAnother idea is to take advantage of a better segmentation performance metric and use it to automatically tune the hyperparameters of the sequence labeling methods considered in the experiments section. (IIUC hyperparameters were set by hand in the experiments.). 
That would make for more interesting experiments that give a more comprehensive summary of how these techniques can compare.\n\nHowever, as it stands, while the performance metric itself may have merit, in this paper it is not sufficiently well validated or compared to alternatives.\n\n\nOn the hierarchical Bayesian model, the current model design and inference algorithm are okay but don't constitute major technical contributions. I was surprised by some model details: for example, in \"Modeling the procedure\" of Sec 4.1, it would be much more satisfying to generate the (p_1, ..., p_s) sequence from an HMM instead of sampling the elements of the sequence independently, dropping any chance to learn transition structure as part of the Bayesian inference procedure. More importantly, it wasn't made clear if 'self-transitions' where p_s = p_{s+1} were ruled out, though such transitions might confuse the model's semantics. As another example, in \"Modeling the realizations in each time-series\" of Sec 4.1, the procedure based on iid sampling and sorting seems unnatural, and might make inference more complex. Why not just sample the durations directly (rather than indirectly defining them via sorting independently-generated indices)? If there's a good reason, it should probably be discussed (e.g. maybe parameterizing the durations directly would make it easier to express prior distributions over *absolute* segment lengths, but harder to express distributions over *relative* segment lengths?). Finally, the restriction to conditionally iid Gaussian observations was disappointing.\n\nThe experimental results were solid on the task for which the model's extra assumptions paid off, but that's a niche comparison.\n\nOne suggestion on the baseline front: you can tie multiple HMMs to have the same procedure (i.e. the same state sequences not counting repeats) by fixing the number of states to be s (the length of the procedure sequence) and fixing the transition matrices to have an upper-bidiagonal support structure. A similar construction can be used for HSMMs. I think a natural Gibbs sampling procedure would emerge. This approach is probably written down in the HMM literature (it seems every conceivable HMM variant has been studied!) but I don't have a reference for it.\n\n\nOverall, this paper needs more work.\n\n\nMinor suggestions:\n- maybe refer to \"segment structure\" (e.g. in Sec 3) as \"changepoint structure\" (and consider looking into changepoint performance metrics if you haven't already)\n- if you used code from other authors in your baselines, it would be good to cite that code (e.g. GitHub links)", "The author responses to my review were thorough and compelling. The revisions made the paper stronger.\n\nOne of my main complaints about the paper was that it might not be a good subject fit for ICLR. That the other reviewers did not raise the same objection (indeed thought the opposite: \"This work is appropriate for ICLR.\"), and gave positive reviews, leads me to believe I could be wrong about the subject fit. That is, my confidence in my evaluation is now lower.\n\nI still believe the contribution in this manuscript would be much stronger if (1) it contained user studies that showed the proposed metric corresponds to some human perception of the goodness of segmentation or (2) it showed that improvements on the metric correlated with some kind of downstream task performance. 
Without a compelling demonstration of strengths like these, it seems much less likely that the metric or the proposed method will impact others' future work.\n\nI'll revise my review score up to be on the negative side of neutral, and revise down my confidence. That way I expect my review wouldn't be enough to sink the submission if another reviewer wants to champion it.", "We also discuss additional clarifications to questions raised by you,\n\nChoosing \\mathcal{H} (S3.1): \nWhile the space of alternative choices is enormous, the main desiderata is that the chosen scoring function should look for overlapping sequences of tokens in the two segments, so common substring/subsequence style functions are most suitable. Using the heaviest common substring allows us to take into account the relative length of the matched token sequence. \n\n\nClarification for H (S3.2): \nThis follows from the definition of conditional entropy, as laid out in prior work such as Rosenberg & Hirschberg (2007), Meila (2007) and Dom (2001), who adopt these definitions for the standard clustering setting.\n\n\nAlgebraic forms of criteria: \nThe choice of algebraic form is to represent them as normalized mutual information criteria. The metric that is canonically called NMI is one instance of a family of such criteria. The criteria we derived are part of this family and can be rewritten as a mutual information term divided by some normalization. For instance,\n\nLASS \t= 1 - \\frac{H(A|B) + H(B|A)} {H(A) + H(B)}\n = \\frac{H(A) + H(B) - H(A|B) - H(B|A)} {H(A) + H(B)}\n = \\frac{2 * I(A;B)} {H(A) + H(B)}\n\nThis relates them to the large body of previous work in clustering evaluation using information-theoretic criteria (see Table 2 in Vinh, Epps and Bailey (2010) for a review).\n\n-------------------------------------------------------\nWe would like to conclude by thanking you for the helpful suggestions. We hope that our response addresses the concerns raised by you and that you will reconsider your assessment of our work.\n", "We would like to highlight that we see our new evaluation criteria as our primary contribution, and here we are of course happy to clarify questions about the algorithm we introduced (Prism), which provides a small concrete improvement in trying to model procedural structure. \n\n\nSegment length generation: \nWe have updated our discussion in Section 4 to clarify that the generative process we propose (sample m Categoricals from the prior and sort them) is exactly equivalent to generating segment lengths from a Multinomial distribution (over m draws) with a Dirichlet prior. However, representing the process in the way we have written it improved inference efficiency with Gibbs sampling -- resampling the segment lengths requires only computing the likelihood of data points at segment boundaries, which is independent of the length of the time-series and far more efficient (we also discuss this in the appendix).\n\n\nBaseline suggestion: \nThe baseline suggested is interesting -- however, our concern is that it cannot represent repeated structure since the bidiagonal structure only allows forward transitions. As a key part of our proposed metrics is to be able to capture repeated structure, it would be somewhat impoverished in comparison to our method and the HMM models we compare to. \n\n\nGenerate procedure as HMM: \nThe decision to generate each step in the procedure (p_1, …, p_s) independently was a conscious design choice to improve the model’s ability to recover non-Markov segmentations. 
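As an aside on the "Segment length generation" point above, the equivalence is easy to check numerically; the following is a minimal NumPy sketch (our own illustration, not code from the paper; `m`, `s`, and `pi` are placeholder names for the quantities in Section 4):

```python
import numpy as np

rng = np.random.default_rng(0)
m, s = 200, 5                      # m frames in the time-series, s steps in the procedure
pi = rng.dirichlet(np.ones(s))     # segment proportions drawn from a Dirichlet prior

# (a) formulation in the paper: draw m iid Categoricals and sort them;
#     the run length of value k in the sorted draws is the length of segment k
draws = np.sort(rng.choice(s, size=m, p=pi))
lengths_sorted = np.bincount(draws, minlength=s)

# (b) equivalent formulation: draw the s segment lengths directly
#     from a Multinomial over the m frames with the same proportions
lengths_direct = rng.multinomial(m, pi)

# both length vectors are Multinomial(m, pi) distributed and sum to m
assert lengths_sorted.sum() == lengths_direct.sum() == m
```

Returning to the choice of generating the procedure steps independently rather than from an HMM: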
Non-Markov processes can be made Markov by expanding the state (to include the history) but this can both increase the amount of data needed to fit a good model (since there are now more states) and make it harder to identify repeated structure. Alternatively, fitting a non-Markov procedure using a Markov process can result in learning a highly stochastic transition model that does not provide a good fit to the data. We explored both of these possibilities and found as expected that they did not do well in the simulations we considered, though we completely agree that we could also use a Markov process to generate the procedure, if it is present. We have included a small simulation study in the appendix that highlights this point.\n\n\nPresence of self-transitions: \nOur model allows self-transitions which can lead to adjacent segments being assigned the same label (effectively causing them to be condensed into a single segment). A version of our model that rules out self-transitions performed similarly, so we have not included that in the paper to avoid confusion, since inference for that model is far more complex and requires the introduction of auxiliary variables in the model.\n\n\nMini-batch learning/neural net observations: \nWhile we expect that these extensions will allow us to scale directly to high-dimensional data and improve performance, our focus was to establish the need for learning procedural abstractions. Procedural tasks are extremely common, and our experiments show the benefit of baking in structural assumptions into the data-generating process. We used the same observational model for all compared methods to disentangle this benefit. Recent related work such as Johnson et al. (2016) (“Composing graphical models with neural networks for structured representations and fast inference”) could provide a way of combining the kind of structured model we have described with neural net observations. Another alternative would be to design a variational autoencoder using the Gumbel-Softmax trick to represent the discrete variables. However, these are non-trivial extensions that require careful thought so we defer them to future work.\n\n\nHyperparameters in experimental evaluation: \nWe have added further experimentation to show Prism’s sensitivity to the number of segments (s) and clusters (K) in Section 6 and Figure 7. We found that Prism’s performance is relatively insensitive to the number of segments as long as it is greater than the number in ground-truth, suggesting one can set it to a large value. Prism’s performance is also stable across a wide range of K. ", "Thank you for your detailed comments! Here’s a response to the concerns that you raised,\n\nResponse on metrics:\n\nIndeed, we do view our primary contribution as providing a new performance metric for better assessing the quality of extracting latent structure in temporal sequences. We appreciate the suggestion to (i) more clearly articulate the design space; (ii) discuss alternative formulations and (iii) validate other proposals. We have updated our paper in Sections 3 & 5 as well as Figure 6 to address this and we briefly summarize this here.\n\nWe began from the stance that something seemed to be missing in existing evaluation criteria that is important to capture in temporal data -- segment and repeated structure. We wanted to draw on the widely used criteria that have been designed for evaluating clusterings (Rosenberg and Hirschberg 2007) and consider them in the temporal setting. 
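For readers who want to verify the mutual-information form of LASS derived in an earlier reply, here is a small self-contained numerical check (our own sketch; the joint distribution over the two labelings A and B is a made-up example, not data from the paper):

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -(p * np.log(p)).sum()

# hypothetical joint distribution of two labelings A (rows) and B (columns)
joint = np.array([[0.30, 0.10],
                  [0.05, 0.55]])
pa, pb = joint.sum(1), joint.sum(0)

H_a, H_b = entropy(pa), entropy(pb)
H_ab = entropy(joint.ravel())
H_a_given_b = H_ab - H_b            # H(A|B) = H(A,B) - H(B)
H_b_given_a = H_ab - H_a            # H(B|A) = H(A,B) - H(A)
I = H_a + H_b - H_ab                # I(A;B)

lass_v1 = 1 - (H_a_given_b + H_b_given_a) / (H_a + H_b)
lass_v2 = 2 * I / (H_a + H_b)
assert np.isclose(lass_v1, lass_v2)  # the two forms of LASS agree
```

Returning to how we carried these clustering criteria over to the temporal setting: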
In doing so we encountered a multi-objective problem, in how to weigh the new criteria that we designed (RSS and SSS). Our solution to introduce a tradeoff parameter (\\beta) follows the approach laid out in the key paper on (non-temporal) clustering evaluation by Rosenberg and Hirschberg (2007) that introduces the V-measure for clustering evaluation. That paper includes a tradeoff parameter between completeness and homogeneity (the constituent criteria for V-measure) with the harmonic mean (\\beta=1) kept as the default, allowing them to “prioritize one criterion over another, depending on the clustering task and goals.” \n\nBased on your and other reviewers’ excellent feedback to further explore how these settings impact the resulting metric, we have also introduced a new sensitivity analysis for the tradeoff parameter. While determining the right value of \\beta for evaluation is a function of the problem and end-goal (e.g. repeated structure may have no importance in changepoint segmentation, and should be disregarded), we show that \\beta=1 is a problem-agnostic compromise that works well in practice. To this end, our sensitivity analysis answers the following question: suppose \\beta’ =/= 1.0 is the right value of the tradeoff parameter for a particular problem -- how similar does the metric perform (in terms of the ranking of methods) by using \\beta=1.0? We find, quite naturally, that at extreme values of \\beta (0 or \\infty) the methods may be ranked quite differently than by \\beta=1.0. However, a large range of \\beta values can be approximated well by using \\beta=1.0 (Figure 6i in the revised paper). This shows the general robustness of our metric when used to compare methods for different applications.\n\nWe completely agree that it would be interesting to conduct a study in which human judgments are compared to the evaluation criteria to validate them qualitatively. Interestingly, prior work in the area such as Rosenberg and Hirschberg (2007), Meila (2007), Dom (2001) also did not conduct user studies but instead justified their metric through examples and direct reasoning about how it fulfils the desiderata laid out; we followed this in our work, with the additional inclusion of a large-scale comparison of methods on real-world datasets, as well as validation of the tradeoff parameter that we introduced.\n\nWe believe those working on time-series data will benefit from having access to the tailored evaluation criteria we have introduced. These evaluation criteria identify and target specific characteristics of the temporal clustering setting, something that has not been done systematically in the past.", "Hyperparameters in experimental evaluation: \nWe have added further experimentation to show Prism’s sensitivity to the number of segments (s) and clusters (K). We found that Prism’s performance is relatively insensitive to the number of segments as long as it is greater than the number in ground-truth, suggesting one can set it to a large value. Prism’s performance is also stable across a wide range of K.", "Thank you for your feedback! Here’s a response to the concerns that you raised,\n\n\nExisting segmentation criteria: \nWe have included additional descriptive details in the revision in Section 2 & 3 on the relationship between temporal clustering and segmentation. Thank you for pointing us to the paper by Killick, Fearnhead, & Eckley (2011). We have included a discussion of this in the paper in Section 3. 
Their work uses criteria that are not very similar to those that we have proposed -- e.g. a criterion from their work (which is common in changepoint detection) is to evaluate whether a changepoint occurred close to (within some tolerance interval) one in ground-truth and measure the precision/recall. However, this is (i) sensitive to the tolerance interval, which is problem specific; (ii) an all-or-nothing metric which cannot distinguish small degradations or changes in the temporal clustering, unlike our approach.\n\n\nReliance on a tradeoff parameter: \nCurrently, the paper studies 3 settings of this \\beta parameter -- 0, 1 and \\infty. Our hope with these settings was to expose the behavior of the constituent metrics in a problem-agnostic way. \n\nOur original goal was to draw on the widely used criteria that have been designed for evaluating clusterings (Rosenberg and Hirschberg 2007) and consider them in the temporal setting. In doing so we encountered a multi-objective problem in how to weigh the new criteria that we designed (RSS and SSS). Our solution to introduce a tradeoff parameter (\\beta) follows the approach laid out in the key paper on (non-temporal) clustering evaluation by Rosenberg and Hirschberg (2007) that introduces the V-measure for clustering evaluation. That paper includes a tradeoff parameter between completeness and homogeneity (the constituent criteria for V-measure) with the harmonic mean (\\beta=1) kept as the default, allowing them to “prioritize one criterion over another, depending on the clustering task and goals.” \n\nBased on your and other reviewers' excellent feedback to further explore how these settings impact the resulting metric, we have also introduced a new sensitivity analysis for the tradeoff parameter. While determining the right value of \\beta for evaluation is a function of the problem and end-goal (e.g. repeated structure may have no importance in changepoint segmentation, and should be disregarded), we show that \\beta=1 is a problem-agnostic compromise that works well in practice. To this end, our sensitivity analysis answers the following question: suppose \\beta' =/= 1.0 is the right value of the tradeoff parameter for a particular problem -- how similarly does the metric perform (in terms of the ranking of methods) by using \\beta=1.0? We explored this issue and show the results in Figure 6i in our revised paper. As expected, at extreme values of \\beta (0 or \\infty) the methods may be ranked differently than by \\beta=1.0. However, encouragingly, a large range of \\beta values can be approximated well by using \\beta=1.0. This shows the general robustness of our metric when used to compare methods for different applications.\n\n\nInclusion of the Adjusted Rand Index (ARI):\nBased on your suggestion, we have added evaluation with respect to the ARI in the revision, including discussion in the results section (Section 5 text, Fig 6). We found that the ARI tends to mediate the effect of changing the number of clusters compared to NMI, as you had suggested. However, it suffers from the same problems as NMI in evaluating temporal clusterings, without the benefit of having constituent criteria that can be analyzed and interpreted.\n\n\nDifficulty of analyzing Munkres: \nWe have clarified our argument regarding the difficulty of using the Munkres metric. 
The Munkres method has two main issues: (i) Since it relies on computing a matching between ground-truth labels and clusters, the score is agnostic to changes in clusters that are not matched with any ground-truth label. This “problem of matching” was pointed out in Rosenberg and Hirschberg (2007) and Meila (2007) for standard clustering settings. (ii) A contingency matrix is fed to the Munkres method for computing the optimal correspondences, ignoring temporal structure.\n\n\nSegment length generation: \nWe have updated our discussion to clarify that the generative process we propose (sample m Categoricals from the prior and sort them) is exactly equivalent to generating segment lengths from a Multinomial distribution (over m draws) with a Dirichlet prior. However, representing the process in the way we have written it improved inference efficiency with Gibbs sampling -- resampling the segment lengths requires only computing the likelihood of data points at segment boundaries, which is independent of the length of the time-series and far more efficient (we also discuss this in the appendix).", "Thank you for your encouraging comments! Here’s a response to the concerns that you raised,\n\nIncorporating nonparametric priors: \n\nWe completely agree incorporating a nonparametric prior would be an interesting extension to our approach. We chose not to incorporate this in our current work for several reasons:\n\n(i) Our contribution was to establish the benefit of incorporating additional assumptions about the underlying procedure with the ability to flexibly learn non-Markov procedures. Thus, keeping the model as simple as possible allows us to isolate the difference that our model makes compared to existing methods that typically make more restrictive, Markovian assumptions. Using a nonparametric prior adds an additional confounder, complicating our ability to understand whether the benefit is caused by the prior, or by the modeling assumptions used. We anticipated incorporating a nonparametric prior would offer no additional benefit beyond the ability to flexibly set some quantities based on data. Fox et al.’s central contribution was to describe new inference methods for such priors, while the focus of our work is different -- understanding where existing modeling fall short in modeling procedural data, and addressing them.\n\n(ii) Even without a nonparametric prior, Prism has the ability to ‘skip’ steps in the procedure. This can be realized since the model is able to set segment lengths to be 0 for some steps in the procedure. Thus, we can achieve at least some of the flexibility afforded by the nonparametric prior by setting the number of segments to be large. We have added further experimentation in the revision to show Prism’s sensitivity to the number of segments (s) -- see Figure 7. We found that Prism’s performance is relatively insensitive to the number of segments as long as it is larger than the number in ground-truth.\n\nDistinction from Fox et al: \n\nA related concern that was pointed out is how Fig. 5 is distinct from the work of Fox et al. Fox et al. primarily target recovering a faithful generative model for the sequences. In contrast we focus on identifying the latent structure in the given sequences. In particular, for a procedure identification setting, we describe how sharing a common procedure (not done in Fox et al.) 
and separating only the realizations (also different from Fox et al., which assumes a Markov stochastic transition matrix) can improve our ability to recover the latent segmentation. Technically these distinctions lie in how we model the data-generating process, specifically the local assignments of each data-point to a latent discrete cluster label. Fox et al. are concerned with the specification of, and inference for, nonparametric priors that can be used with autoregressive generative HMM/SLDS models, in contrast to our work. We believe that identifying latent segmentation structure alone (even sans a generative model) is often of value, such as for the important application of activity understanding, or potentially for identifying building blocks in imitation learning.", "In \"Learning procedural abstractions and evaluating discrete latent temporal structure\" the authors develop a hierarchical Bayesian model for patterns across time in video data. They also introduce new metrics for understanding structure in time series (completeness and homogeneity). This work is appropriate for ICLR. They provide some applications to robotics, suggesting that this could be used to teach robots to act in environments by learning from videos.\n\nThis manuscript paid quite close attention to the quality of segmentation, in which actions in videos are decomposed into component parts. It is quite hard to determine ground truth in such situations and many metrics abound, and so a thorough discussion and comparison of metrics is useful.\n\nThe state of the art for Bayesian hierarchical models for segmentation is Fox et al., which is referenced heavily by this work (including the use of test data prepared in Fox et al.) I wonder why the authors drop the Bayesian nonparametric nature of the hierarchy in the section \"Modeling realizations in each time-series\" (i.e., for Fox et al., the first unnumbered equation in this section would have had arbitrary s).\n\nI found that the experiments were quite thorough, with many methods and metrics compared. However, I found the details of the model to be quite sparse, for example it's unclear how Figure 5 is that much different from Fox et al. But, overall I found this to be a strong paper.\n", "This paper describes two distinct contributions: a new compound criterion for comparing a temporal clustering to a ground truth clustering and a new Bayesian temporal clustering method. Globally the paper is clear and well illustrated. \n1) About the new criterion:\n*pros: *\n a) as clearly pointed out by the authors, using standard non-temporal clustering comparison metrics for temporal clustering evaluation is in a way \"broken by design\" as standard metrics disregard the very specificity of the problem. Thus the introduction of metrics that explicitly take time into account is extremely important.\n b) the proposed criterion combines two parts that are very important: finding the length of the stable intervals (i.e. intervals whose instants are all classified into a single cluster) and finding the sequence of labels. \n*cons:*\n a) while the criterion seems new it is also related to criteria used in the segmentation literature (see among many others https://doi.org/10.1080/01621459.2012.737745) and it would have been a good idea to discuss the relation between temporal clustering and segmentation, even briefly.\nb) the reliance on a tradeoff parameter in the final criterion is a major problem: how shall one choose the parameter (more on this below)? 
The paper does not explore the effect of modifying the parameter.\nc) in the experimental section, TSS is mostly compared to NMI and to optimal matching (called Munkres here). Even considering the full list of criteria in the appendix, the normalized Rand index (NRI) seems to be missing. This is a major oversight as the NRI is well suited to comparing clusterings with different numbers of clusters, contrarily to NMI. In addition, the authors claim that optimal matching is completely opaque and difficult to analyse, while on the contrary it gives a proper way of comparing clusters from different clusterings, enabling fine-grained analysis. \n\n2) about the new model\n*pros*: \n a) as far as I know, this is indeed a new model\n b) the way the model is structured emphasizes segmentation rather than temporal dependency: the so-called procedure is arbitrary and no dependency is assumed from one segment to another. In descriptive analysis this is highly desirable (as opposed to, say, an HMM, which focuses on temporal dependencies). \n*cons*\na) the way the lengths of the segments in the sequence are generated (with sorting) is a bit convoluted. Why not generate those lengths directly? What is the distribution of those lengths under the sampling model? Is this adapted? \nb) I find the experimental evaluation acceptable but a bit poor. In particular, nothing is said on how a practitioner would tune the parameters. I can accept that the model will be rather insensitive to hyper-parameters alpha and beta, but I have serious doubts about the number of clusters, especially as the evaluation is done here in the best possible setting. In addition, the other beta parameter (of TSS) is not studied. \n\nMinor point:\n- do not use beta for two different things (the balance in TSS and the prior parameter in the model)" ]
[ 5, -1, -1, -1, -1, -1, -1, -1, 6, 7 ]
[ 2, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2019_ByleB2CcKm", "ryxNNJIs2m", "Ske7htL5AX", "B1x2lF85Rm", "ryxNNJIs2m", "HygJ8OLcCQ", "BkgfMLD42X", "Hyx08Tcp2Q", "iclr_2019_ByleB2CcKm", "iclr_2019_ByleB2CcKm" ]
iclr_2019_Bylmkh05KX
Unsupervised Speech Recognition via Segmental Empirical Output Distribution Matching
We consider the problem of training speech recognition systems without using any labeled data, under the assumption that the learner can only access the input utterances and a phoneme language model estimated from a non-overlapping corpus. We propose a fully unsupervised learning algorithm that alternates between solving two sub-problems: (i) learning a phoneme classifier for a given set of phoneme segmentation boundaries, and (ii) refining the phoneme boundaries based on a given classifier. To solve the first sub-problem, we introduce a novel unsupervised cost function named Segmental Empirical Output Distribution Matching, which generalizes the work in (Liu et al., 2017) to segmental structures. For the second sub-problem, we develop an approximate MAP approach to refining the boundaries obtained from Wang et al. (2017). Experimental results on the TIMIT dataset demonstrate the success of this fully unsupervised phoneme recognition system, which achieves a phone error rate (PER) of 41.6%. Although it is still far away from the state-of-the-art supervised systems, we show that with oracle boundaries and a matching language model, the PER could be improved to 32.5%. This performance approaches the supervised system of the same model architecture, demonstrating the great potential of the proposed method.
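As an editorial illustration of the output-distribution-matching idea in this abstract, here is a sketch under simplified assumptions; it is not the paper's actual J_ODM. It matches unigram output frequencies to a phoneme prior, whereas the paper matches segmental N-gram statistics, and all names and shapes below are placeholders:

```python
import numpy as np

def odm_loss(probs, p_lm):
    """Cross-entropy between the LM prior and the batch-averaged output distribution.

    probs: (batch, num_phonemes) classifier outputs; p_lm: prior over phonemes.
    """
    empirical = probs.mean(axis=0)                    # empirical output distribution
    return -(p_lm * np.log(empirical + 1e-12)).sum()  # small eps for numerical stability

rng = np.random.default_rng(0)
logits = rng.normal(size=(32, 48))                    # toy batch over a 48-phoneme set
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
p_lm = rng.dirichlet(np.ones(48))                     # stand-in phoneme prior
print(odm_loss(probs, p_lm))
```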
accepted-poster-papers
This paper is about unsupervised learning for ASR, matching an acoustic distribution, learned without supervision, to a prior phone-LM distribution. Overall, the results look good on TIMIT. Reviewers agree that this is a well-written paper with interesting results. Strengths - Novel formulation for unsupervised ASR, and a non-trivial extension of previously proposed unsupervised classification to the segmental level. - Well written, with strong results; results and analysis were improved based on review feedback. Weaknesses - Results are on TIMIT -- a small phone recognition task. - Unclear how it extends to large-vocabulary ASR tasks, to tasks with large-scale training data, and to RNNs that may learn implicit LMs. The authors propose to deal with this in future work. Overall, the reviewers agree that this is an excellent contribution with strong results. Therefore, it is recommended that the paper be accepted.
test
[ "B1leQl1lRQ", "Hyl_IuId37", "rylDvvMp67", "H1ekM-ah67", "HJeZfyTnp7", "HyxpXfphpQ", "Skgk33o6hm", "BkxJWYi6n7" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "I want to thank the authors for addressing my concerns. I understand that their focus was not exactly the same as in previous work, but want to thank the authors for nevertheless adding the additional motivations and extra analysis. I believe that this will help situate this work better within this area, and also allow better comparison with other studies.\n\nI have changed my overall rating from a 6 to a 7.", "Overview:\n\nThis paper proposes a new approach to do unsupervised phoneme recognition by learning from unlabelled speech in combination with a trained phoneme language model. The proposed loss function is a combination of a term encouraging the language model of predicted phonemes to match the given language model distribution, and a term to encourage adjacent speech frames to be assigned to the same phoneme class. Phoneme boundaries are iteratively refined using a separate model. Experiments where a hidden Markov model is applied on top of the predicted phonemes are also performed.\n\n\nMain strengths:\n\nThe paper is clear and addresses a very important research problem. The approach and losses proposed in Section 2 have also not been proposed before, and given that an external language model is available, are very natural choices.\n\n\nMain weaknesses:\n\nThe main weakness of this paper is that it does not situate itself within the rich body of literature on this problem. I give several references below, but I think the authors can include even more studies -- there are several studies around \"zero-resource\" speech processing, and I would encourage the authors to work through the review papers [1, 6].\n\nConcretely, I do not think the authors can claim that \"this is the first fully unsupervised speech recognition method that does not use any oracle segmentation or labels.\" I think it could be argued that the system of [3] is doing this, and there are even earlier studies. I also don't think this claim is actually necessary since the paper has enough merit to stand on its own, as long as the related work is discussed properly.\n\nFor instance, the proposed approach shares commonalities with several other approaches: [2] also used two separate steps for acoustic modelling and boundary segmentation; [4, 7, 8] builds towards the setting where non-matching text data is available (for language model training) together with untranscribed speech for model development; the approach of [5] uses a very similar refinement step to the one described in Section 3, where an HMM model is initialised and retrained on noisy predicted labels.\n\nIn the experiments (Section 4), it would also be useful to report more fine-grained metrics. [6] gives an overview of several of the standard metrics used in this area, but at a minimum phoneme boundary recall, precision and F-scores should be reported in order to allow comparisons to other studies.\n\n\nOverall feedback:\n\nGiven that this paper is situated within the broader context of this research area, which already has a small community around it, I think the novelty in the approach is strong enough to warrant publication given that the additional metrics are reported in the experiments.\n\n\nPapers/links that should be reviewed and cited:\n\n1. E. Dunbar et al., \"The Zero Resource Speech Challenge 2017,\" in Proc. ASRU, 2017.\n2. H. Kamper, K. Livescu, and S. Goldwater. An embedded segmental k-means model for unsupervised segmentation and clustering of speech. in Proc. ASRU, 2017.\n3. Lee, C.-y. and Glass, J. R. 
A nonparametric Bayesian approach to acoustic model discovery. ACL, 2012.\n4. Ondel, Lucas, Lukaš Burget, Jan Černocký, and Santosh Kesiraju. \"Bayesian phonotactic language model for acoustic unit discovery.\" In Acoustics, Speech and Signal Processing (ICASSP), 2017 IEEE International Conference on, pp. 5750-5754. IEEE, 2017.\n5. Walter, O., Korthals, T., Haeb-Umbach, R., and Raj, B. (2013). A hierarchical system for word discovery exploiting DTW-based initialization. ASRU, 2013.\n6. M. Versteegh, X. Anguera, A. Jansen, and E. Dupoux, \"The Zero Resource Speech Challenge 2015: Proposed approaches and results,\" in Proc. SLTU, 2016.\n7. https://www.clsp.jhu.edu/wp-content/uploads/sites/75/2018/05/jsalt2016-burget-building-speech-recognition.pdf\n8. https://www.clsp.jhu.edu/workshops/16-workshop/building-speech-recognition-system-from-untranscribed-data/\n\n", "We would like to thank all the anonymous reviewers for their helpful and constructive comments. We have provided detailed responses to each reviewer's comments and revised the paper based on their feedback. Here we summarize the changes in the newly uploaded paper.\n\n1. We added an experiment on Unsupervised Phoneme Segmentation comparing our refined segmentation results against existing unsupervised segmentation methods.\n2. We expanded our related work section greatly and discussed more related work regarding \"zero-resource\" speech processing.\n3. We have adjusted our claim regarding the first unsupervised speech recognition.\n4. We added more details on the implementation and data statistics (as requested by the reviewers) in the supplementary material to enhance the clarity and reproducibility of our work.\n5. We fixed all the typographical errors mentioned by the reviewers.\n", "We thank the reviewer for the constructive feedback! \n\n[Top-10000 of P_LM] We found that among the 48^5 elements in the P_LM only 69553 of them are non-zero. We also found that the top 10000 elements of P_LM account for about 48.7% of the total probability in P_LM. That is, this extremely small portion of the elements in P_LM occupies almost half of the total probability.\n\n[Balance weight of J_FS] As suggested by the reviewer, we conducted experiments by taking the square root of J_FS and obtained results for different values of \\lambda. We found that the best value of \\lambda becomes 1e-6, which is even smaller than the original value of 1e-5. The possible reasons are explained below. First, we observe that the value of J_FS is in fact much smaller than J_ODM (e.g., 0.15 vs 6.21). When taking the square root of J_FS, it becomes larger and we will need a smaller lambda to balance it. Second, the reason that we do not need a large lambda for J_FS is that it mainly plays the role of regularization. Ideally, if we sample all possible trajectories of \\tau in (1) and match their predicted output distribution to P_LM, then the predictions within each segment would also be close to each other. However, the number of all the possible trajectories \\tau in S_1 x S_2 x … S_K is exponentially large, and we cannot sample all of them in our training. Therefore, J_FS would play the role of a regularization that helps promote the consistency of the intra-segment predictions. For this reason, the regularization term J_FS does not need to be too large in practice. \n\n[Training time of unsupervised learning] In Figure 2(a), we show a learning curve of the validation error over training time when the segmentation is given by a supervised oracle. 
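Before discussing the Figure 2(a) learning curve further, here is a short sketch of the top-k truncation of P_LM described above. This is our illustration only: the random placeholder values stand in for the 69553 non-zero 5-gram entries of the real phoneme LM, so the printed mass will not match the 48.7% figure.

```python
import numpy as np

rng = np.random.default_rng(0)
# placeholder for the 69553 non-zero entries of the 48^5-element 5-gram table;
# the real values come from the estimated phoneme language model
p_nonzero = rng.random(69553)
p_nonzero /= p_nonzero.sum()

top_k = 10000
mass = np.sort(p_nonzero)[::-1][:top_k].sum()  # probability mass kept by truncation
print(f"top-{top_k} mass: {mass:.3f}")         # about 0.487 for the real P_LM
```

From the Figure 2(a) curve: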
We can see that the total training could be completed in about an hour, which is similar to supervised learning. With unsupervised segmentation boundaries, it takes a longer time to converge, which is usually 4-5 hours. \n\n[Typos and minor issues] We have fixed the typos in Eq (2).\n", "We thank the reviewer for the constructive feedback! \n\n[Future work] As suggested by the reviewer, in future work, we intend to extend our method to large-scale ASR tasks (e.g., Switchboard), and evaluate the performance in word error rate. We will also evaluate how much a larger dataset could close the performance gap between the supervised and unsupervised methods.\n\n[Extension to RNN LM & AM] We use an N-gram phoneme LM because it is simple to implement and it achieves satisfactory performance. To extend to RNN-LM, we need to develop an effective way of computing the sum over z (i.e., different N-grams) in (1). For RNN-LM, the probability for each N-gram is not explicitly given. Instead, we need to score it recursively: we can treat each N-gram as a length-N sequence and use the RNN to score its log-likelihood, which, after taking the exponential, becomes the probability of the N-gram. In addition, we may use beam search to pick a subset of N-grams with the corresponding probabilities scored by the RNN to compute the sum in (1) approximately. For RNN-AM, we could replace the current DNN acoustic model by an RNN, which will generate an output distribution at each frame, and then we can apply the same objective function (1).\n\n[SGD with large mini-batch] We use mini-batch SGD by dynamically increasing the batch-size. We observe that this optimization strategy empirically converges much faster than the Stochastic Primal-Dual Gradient (SPDG) and reaches a better converging point in our experiments. The reason that Liu et al. cannot reach a satisfactory result by SGD may be that they did not dynamically increase the batch size during training, together with differences in dataset statistics. \n\n[Training time] In Figure 2(a), we show a learning curve of the validation error over training time when the segmentation is given by a supervised oracle. We can see that the total training could be completed in about an hour, which is similar to supervised learning. With unsupervised segmentation boundaries, it takes a longer time to converge, which is usually 4-5 hours. As pointed out by the reviewer, the computation complexity for estimating a stochastic gradient is O(10000) for the sum over z in (1). However, since we use a very large mini-batch (about 20K), this computation complexity is amortized by the large mini-batch, i.e., the per-sample complexity is low. In addition, this computation could also be highly parallelized on GPU.\n\n[Segmentation quality] As suggested by the reviewer, we evaluate the precision, recall, F-score and R-value of our refined boundaries and compare them to the baselines (including Wang et al. 2017). Please refer to Table 3 of our revised paper for the full results. Below, we list the results of our method and the results from Wang et al. 2017. We can see that by refining the initial boundaries provided by (Wang et al. 
2017), we improve the quality of the segmentation boundaries, which leads to better PER.\n\n\t\t Initial boundary (Wang et al.)\t\tOur refined boundaries\nR-value\t 82.6\t\t\t\t\t 84.8\nF-score\t 80.1\t\t\t\t\t 82.6\nRecall\t\t78.2\t\t\t\t\t 80.9\nPrecision\t82.3\t\t\t\t\t 84.3\n\n[sum over \\tau] Since we are learning our model by SGD, we estimate our stochastic gradient by sampling this sum over \\tau. Specifically, we will randomly sample one \\tau at the beginning of each epoch during the training process.\n\n[Periodic spikes in Figure 2(a)] We train the model according to a fixed schedule of hyperparameters. In Figure 2(a), the mini-batch sizes gradually increase from 5000 to 20000 and the softmax temperature is decreased from 0.8 to 0.5 (i.e., gradually becomes sharper). Furthermore, in each stage, the learning rate also decays from an initial value. When the next stage begins (i.e., the position at the spikes in the figure), the learning rate will revert back to its initial value. Therefore, we believe the spikes come from the sudden increase in the learning rate.\n\n[Choice of self-validation loss] We use Eq (1) instead of Eq (3) as the self-validation loss because we will also need to tune the hyperparameter \\lambda and the cost in (3) depends on \\lambda.\n\n[Typos] We have fixed the typos in Eq (5), Figure 2 and Conclusion.\n", "We thank the reviewer for the constructive feedback! \n\n[Related works] Thanks for the suggestion, we will adjust the claim in our revised paper. We have also incorporated these related works and discuss them thoroughly in our related work section. However, the mentioned works (including [3]) have a different focus compared to our work on unsupervised speech recognition. Specifically, they are focused on unsupervised acoustic unit discovery (AUD), i.e., finding the segmentation boundaries for the acoustic units (e.g., word and subword) and clustering the discovered units. They did not classify acoustic inputs into phoneme or word labels in an unsupervised manner during the inference stage. In contrast, we are interested in directly learning a speech recognition model in an unsupervised manner without an intermediate clustering step; that is, our learned model will directly recognize acoustic features into phoneme labels. The estimation of the segmentation boundaries in our work is to help the training of the recognition model. In fact, although we use the segmentation boundaries generated by (Wang et al. 2017) as our initial boundaries, we could also potentially use other acoustic unit discovery methods suggested by the reviewer to initialize our algorithm, just as (Wang et al. 2017). Regarding the work in [2], although it also iterates between acoustic modeling and boundary segmentation, their acoustic modeling is mainly for cluster assignment while our work directly learns the phoneme recognition model.\n\n[Additional evaluation metrics] As suggested by the reviewer, we evaluate the precision, recall, F-score and R-value of our refined boundaries and compare them to the baselines (including Wang et al. 2017). Please refer to Table 3 of our revised paper for the full results, where our method outperforms all the other baselines by a significant margin. Below, we list the results of our method and the results from Wang et al. 2017. We can see that by refining the initial boundaries provided by (Wang et al. 
2017), we improve the quality of the segmentation boundaries, which is consistent with the improved phoneme error rate (PER) in Table 2.\n\n\t\t Initial boundary (Wang et al.)\t\tOur refined boundaries\nR-value\t 82.6\t\t\t\t\t 84.8\nF-score\t 80.1\t\t\t\t\t 82.6\nRecall\t\t78.2\t\t\t\t\t 80.9\nPrecision\t82.3\t\t\t\t\t 84.3\n\nHowever, we emphasize that our method is designed for unsupervised speech recognition rather than unsupervised phoneme segmentation. Estimating the segmentation boundaries only serves as an auxiliary subtask to help the training of the recognition model in an unsupervised manner. In the testing stage, we do not estimate the segmentation boundaries for the test data. Instead, our trained model can be used directly with a decoder, just as any supervised recognition model would be. For this reason, the most important evaluation metric for our method is the phoneme error rate (PER). Nevertheless, the above boundary evaluation results do confirm that the improved segmentation quality indeed leads to better phoneme recognition performance (see Table 2), which also demonstrates the effectiveness of our iterative algorithm.\n", "This paper presents a method to learn an acoustic model for phoneme recognition with only the training input acoustic features and a pretrained phoneme LM. This is done by matching the output phoneme sequence distribution of the training set with the phoneme LM distribution. The cost function is proposed by extending a previously proposed unsupervised cost function (Empirical-ODM) to the segmental level, and integrating an intra-segment cost function that encourages the frame-wise output distributions within a segment to be similar to each other. The authors conducted thorough experiments on TIMIT phoneme recognition and demonstrated impressive results.\n\nThe paper is technically sound and the presentation is generally clear. The idea is interesting and novel, extending a previous unsupervised sequence modeling approach to speech recognition and exploiting the segmental structure of the problem. Unsupervised learning is an important research topic, and its potential to save the high cost of human labeling for developing ASR systems is important to the community.\n\nHere are a few general comments/questions:\n\n1. It would be interesting to see whether and how much using a larger acoustic training set and a phoneme LM trained on more data can close the gap between unsupervised and supervised performance. Also, it would be great to see how well the learned acoustic model performs in a full ASR system together with the lexicon and LM to predict words, which could generate more accurate unsupervised transcripts than the acoustic model itself for refining the model further. These could be done in future work.\n\n2. The current cost function is based on matching the N-gram distribution in the phoneme LM and that in the DNN acoustic model output of the training set, where N is relatively small. How could the framework be extended to state-of-the-art LMs and AMs with RNNs, where the history is arbitrarily long?\n\n3. Why is it sufficient for this paper to just use a larger mini-batch size to alleviate the fact that SGD is intrinsically biased for the Empirical-ODM functional form, while Liu et al. 2017 needed to propose the Stochastic Primal-Dual Gradient approach?\n\n4. The paper compares the unsupervised cost function with the supervised cross-entropy function in terms of quality. How about training time?
The computation looks expensive for the unsupervised case since it needs to go through all possible N-grams (which is approximated by the most frequent 10000 5-grams according to Appendix B, but still a large space).\n\n5. If the segmentation quality affects the learned acoustic model quality, why not also report the segmentation accuracy for all unsupervised systems and iterations, including the Wang et al. 2017 system?\n\nMore specific comments:\n\n7. The outer summation in Eq(1) seems to indicate summing over all possible \tau, which is infeasible. Please clarify how it is computed.\n8. Eq(5): "p(y_t | y_1 ... y_t)" should be "p(y_t | y_1 ... y_{t-1})".\n9. Why are there periodic spikes in both the self-validation loss and the validation FER in Figure 2(a)? What training stage do they correspond to?\n10. In Figure 2, "validation error" on the y-axis should probably be "validation FER". In Figure 2(b), the number ranges on the left and right of the y-axis were probably swapped.\n11. Section 2.5: why is Eq(1) instead of Eq(3) used for the self-validation loss?\n12. Conclusion: "the a potential" -> "the potential".", "This paper proposes a fully unsupervised learning algorithm for speech recognition. It involves two alternately trained components: a phoneme classifier and a boundary refining model. The experimental results demonstrate that it achieves a first success on speech recognition, approaching supervised learning performance. \n\nPros:\n+ The paper proposes to add a frame-wise smoothing term J_FS to the J_ODM cost. In the new cost function, J_ODM controls the coarse-grained inter-segment distribution using a prepared language model P_LM, while J_FS controls the fine-grained intra-segment distribution. It is beneficial to use this hierarchical two-level view rather than a single-level one to evaluate the distribution mismatch in the cost function: otherwise, focusing only on the fine-grained frame level, a much larger number of frame labels and longer N-grams would have to be considered to evaluate the phoneme distribution, and the computation could explode. \n+ The proposed unsupervised phoneme classification method is superior to the baseline (Liu et al., 2018) because the baseline relies on a clustering whose performance is upper-bounded by cluster purity. Directly optimizing \theta in an end-to-end scheme is preferable. \n+ I like the idea of using an iterative training algorithm to jointly improve the classifier parameters \theta and the segment boundaries b. \n+ It is quite impressive that an unsupervised learning system gets close to the performance of a supervised system on speech recognition. The proposed system also outperforms the state-of-the-art baseline by a large margin. \n+ The settings of the experiments are rather comprehensive. In particular, the "non-matching language model" setting tests the case where the language model cannot be directly estimated from the training set. \n\nQuestions:\n1.\tIn Appendix B you mention that for the N-gram you choose N=5, so the original language model P_LM can be a high-dimensional array with exactly 39^5 elements. How sparse is the original P_LM? It is stated that 10000 elements are chosen, which are only about 0.011% (= 10000/39^5) of the elements in the original one. How representative are they?\n\n2.\tI notice that for the balance weight of J_FS in Eq. (3), you empirically take the best \lambda=1e-5 in the experiments. To me, the optimal \lambda being such a small value may be because the scale of J_FS is improperly determined.
My suggestion is: could you try taking the square root of the current J_FS, or using the standard deviation of the intra-segment outputs? The reasons are, first, that minimizing the std is a more interpretable penalty on the deviation within a segment; and second, since you have used the mean of the outputs in J_ODM, it is better to pair it with a statistic of the same dimension, such as the std of the outputs, in J_FS rather than a sum of squared differences when combining J_ODM and J_FS in a uniform cost.\n\n3.\tWhat is the time complexity of the unsupervised learning method compared with running a comparable supervised speech recognition task? \n\nMinor issues:\nIt may be a typo that the second term of Eqn (2) should be "-p_\theta(y_(t+1)=y|x_(t+1))" instead, since p_\theta is defined as the posterior probability of the frame label given the corresponding input. \n" ]
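The trade-off raised at the end of the review above (a sum-of-squared-differences smoothing term versus a standard-deviation penalty for intra-segment smoothing) can be made concrete with a small NumPy sketch. The posterior matrix and segment boundaries below are made up for illustration; this is a sketch of the two penalties being compared, not the paper's actual implementation:

```python
import numpy as np

def ssd_penalty(post, segments):
    """J_FS-style term: sum of squared differences between consecutive
    frame posteriors inside each segment."""
    total = 0.0
    for s, e in segments:                   # [s, e) frame ranges
        diffs = np.diff(post[s:e], axis=0)  # consecutive-frame differences
        total += np.sum(diffs ** 2)
    return total

def std_penalty(post, segments):
    """Reviewer's alternative: mean per-class standard deviation of the
    posteriors within each segment."""
    return sum(np.std(post[s:e], axis=0).mean() for s, e in segments)

rng = np.random.default_rng(0)
post = rng.dirichlet(np.ones(39), size=20)  # 20 frames, 39 phoneme classes
segments = [(0, 7), (7, 13), (13, 20)]      # made-up segmentation
print(ssd_penalty(post, segments), std_penalty(post, segments))
```

Because the std is on the same scale as the mean statistics used in J_ODM, its balance weight would plausibly sit closer to 1, which is the substance of the reviewer's point about the tiny optimal lambda.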
[ -1, 7, -1, -1, -1, -1, 7, 7 ]
[ -1, 4, -1, -1, -1, -1, 4, 4 ]
[ "HyxpXfphpQ", "iclr_2019_Bylmkh05KX", "iclr_2019_Bylmkh05KX", "BkxJWYi6n7", "Skgk33o6hm", "Hyl_IuId37", "iclr_2019_Bylmkh05KX", "iclr_2019_Bylmkh05KX" ]
iclr_2019_Bylnx209YX
Adversarial Attacks on Graph Neural Networks via Meta Learning
Deep learning models for graphs have advanced the state of the art on many tasks. Despite their recent success, little is known about their robustness. We investigate training time attacks on graph neural networks for node classification that perturb the discrete graph structure. Our core principle is to use meta-gradients to solve the bilevel problem underlying training-time attacks, essentially treating the graph as a hyperparameter to optimize. Our experiments show that small graph perturbations consistently lead to a strong decrease in performance for graph convolutional networks, and even transfer to unsupervised embeddings. Remarkably, the perturbations created by our algorithm can misguide the graph neural networks such that they perform worse than a simple baseline that ignores all relational information. Our attacks do not assume any knowledge about or access to the target classifiers.
accepted-poster-papers
The paper proposes a method for investigating the robustness of graph neural nets for the node classification problem; training-time attacks that perturb the graph structure are generated using a meta-learning approach. Reviewers agree that the contribution is novel and that the empirical results support the validity of the approach.
train
[ "SkeoiDVcAX", "HJe1c7y_27", "rJlQtgaW0m", "B1xHHl6Z0m", "BJxOYyT-0m", "Skli62kQ6Q", "BkltS1uMpQ", "H1lFhSrF3X", "HkeXZtLM2m" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer" ]
[ "The authors have made efforts in addressing my concerns and have improved their paper. ", "This paper studies the problem of learning a better poisoned graph parameters that can maximize the loss of a graph neural network. The proposed using meta-learning to compute the second-order derivatives to get the meta-gradients seems reasonable. The authors also proposed approximate methods to compute the graph as learning parameters, which could be more efficient since the second-order derivatives are no longer computed. The experimental results on three graph datasets show that the proposed model could improve the misclassification rate of the unlabeled nodes.\n\nThe paper is well-written. It would be good if the authors could address the following suggestions or concerns:\n\n1) The proposed attack model assumes the only the graph structure are accessiable to the attackers, which might limit the proposed model in real applications. Joint study with the graph features would be useful to convince more audience and potentially have larger impacts.\n\n2) In the self-learning setting, in order to define l_atk, l_self is used, however, l_self is using v_u, which is the ground truth label of the test nodes based on my understanding, so this approach is using labels of the unlabeled data, which might be not applicable in real world.\n\n3) About the action space, based on the constraints of the attacker's capability, the possible attacks will be significantly smaller than O(N^2 delta), might be O(N^delta).\n\n4) Change 'treat the graph structure as a hyperparameter' to 'treat the graph structure tensor/matrix as a hyperparameter' would be earier to understand. And is the graph structure tensor with shape (NXN)? \n\n5) What's the relationship between T and S? Are T in theta_T is the same as the S in G_S?\n\n6) The title of section 4.2 is misleading. It would be better to name it as 'Greedy Computing Meta-Gradients'. \n\n7) It lacks intuition of why define S(u,v)=delta . (-2.a_uv+1). '(-2.a_uv+1)' looks lack of intuition. Please also change 'pair (i,j), we define S(u,v)' -> 'pair (u,v)'.\n\n8) In the experiments, what's the definition of meta-train? l_atk=-l_train?\n\n9) In the experiments, it would be interesting to study the impact of unnoticaability constraints on the model results.\n\n10) In figure 1, it is not surprising that when increasing the number of edges changed, the misclassification rates will increase. A graph NN considers more graph features rather than the structure is expected to show the impact of the graph structure change.\n\nI have read the authors' detailed rebuttal. Thanks.", "Dear Reviewer 2,\n\nThank you for your constructive feedback and suggestions. We have run experiments on a larger dataset with roughly 20K nodes and found that our attacks are also successful in this scenario. You can find the results in Table 8 in Appendix F of the updated manuscript. Furthermore, we have included a discussion on the complexity of our approach in Appendix C in the updated manuscript. \n\nRegarding your question about the transferability to other graph embedding algorithms: We would like to point out that we already evaluate the impact of our attacks on DeepWalk. Our experiments show that our method’s adversarial attacks also transfer to DeepWalk.\n", "Dear Reviewer 1,\n\nThank you for your detailed and constructive feedback. 
We have used your suggestions to improve the paper and have uploaded the updated manuscript.\n\nWe would like to address each point individually here:\n\n1) Based on your suggestion, we ran experiments on Citeseer where we use meta gradients to modify the graph structure and features simultaneously. We evaluated on GCN and CLN (DeepWalk does not use features) and we observed that the impact of the combined attacks is comparable but slightly lower (GCN: 38.6 vs 37.2, CLN: 35.3 vs 34.2; structure-only vs combined). We attribute this to the fact that we assign the same ‘cost’ to structure and feature changes, but arguably we expect a structure perturbation to have a stronger effect on performance than a feature perturbation. We have summarized these findings in Appendix E of the updated manuscript.\n\n2) We would like to emphasize that the attack model does *not* have access to the ground-truth labels of the unlabeled nodes V_u. We use the labels of the labeled nodes to train the surrogate classification model and predict the labels \\hat{C}_u of the unlabeled nodes. These labels are then treated as the ‘ground truth’ for the self-training loss L_self. Thus, the attack never uses or has access to the labels C_u of the unlabeled nodes.\n\n3) We agree that the set of admissible attacks is significantly smaller than O(N^{2 delta}). However, since it is challenging to derive a tighter upper bound on the size of the set of admissible perturbations, we decided to use this conservative upper bound. The main point we wanted to make (which also holds for a tighter bound) is that there is an exponential growth in the number of perturbations, i.e. exhaustive search is infeasible.\n\n4) Thank you for this suggestion. We have updated the manuscript to make this point more clear. Yes, the dimensionality of the adjacency matrix is NxN.\n\n5) T is the number of inner optimization steps (i.e., gradient descent steps of learning the surrogate model). S is the number of meta-steps on the graph structure. We have replaced G^(S) by G^(delta) in the manuscript to avoid confusion.\n\n6) Thank you for raising this point. We have changed the section title to ‘Greedy Poisoning Attacks via Meta Gradients’ in the updated manuscript.\n\n7) We have changed (i,j) to (u,v). A negative gradient in an entry (u,v) means that the target quantity (e.g. error) increases when the value is decreased. Decreasing the value is only admissible for node pairs connected by an edge, i.e. we change the adjacency matrix entry from a 1 (edge) to a 0 (no edge). To account for this, we flip the sign of gradients of node pairs connected by an edge, as achieved by multiplying by (-2a_uv+1). This enables us to use the arg max operation later. Equivalently, we could compute the maximum of the gradients where there is no edge and the minimum where the nodes are connected, and then choosing the entry with the higher absolute value as the perturbation.\n\n8) You are correct, Meta-Train uses l_atk=-l_train. \n\n9) We have added an experiment to Appendix D showing the effect of the unnoticeability constraint (see Figure 4). As shown, even when enforcing the constraints the attacks have similar impact. Thus we conclude that the constraint should always be enforced since they improve unnoticeability while at the same time our attacks remain effective.\n\n10) We agree that an increasing misclassification rate is expected when increasing the number of edges changed. 
Our intention in Figure 1 was to visualize this relationship and, more importantly, to show that our attacks consistently outperform the DICE baseline that has access to all class labels, i.e. more information than our method.\n", "Dear Reviewer 3,\n\nThank you for your constructive feedback and suggestions. We used your suggestions to improve the manuscript.\n(1+3) We have added an algorithm summary and complexity discussion to the appendix. \n(2) As Reviewer 1 also requested information about graph attribute attacks, we ran experiments on Citeseer where we use meta gradients to modify the graph structure and features simultaneously. We evaluated on GCN and CLN (DeepWalk does not use features) and we observed that the impact of the combined attacks is comparable but slightly lower (GCN: 38.6 vs 37.2, CLN: 35.3 vs 34.2; structure-only vs combined). We attribute this to the fact that we assign the same ‘cost’ to structure and feature changes, but arguably we expect a structure perturbation to have a stronger effect on performance than a feature perturbation. We have summarized these findings in Appendix E of the updated manuscript.\n\nRegarding your question about the benefit of meta-learning: Meta learning is a principle that enables us to directly tackle the bilevel optimization problem. That is, the meta gradient gives us an indication of how the value of the outer optimization problem will change when modifying the input to the inner optimization problem (i.e. the classifier training). This proves to be a very powerful principle for poisoning attacks (essentially a bilevel optimization problem) on node classification as we show in our work.\n", "Dear commenter,\n\nWhile we appreciate any constructive feedback and questions on OpenReview, we have the impression that you have not read our paper. Still, since your comment contains various incorrect claims, we address your points here:\n\n1) Graph neural networks are NOT a special case of networks for text classification. If at all, they are generalizations. We recommend to read the broad literature on graph neural networks to clarify your confusion (references are mentioned in our paper). Here we just want to point out two important differences: (i) The neighborhood in graphs is not ordered; unlike text/images where you have before-after/left-right-up-down information. (ii) The interaction structure in graphs, i.e. the edges, is an explicit part of the data (i.e. observed) -- while in text it is NOT. Put simply: The graph structure is part of the data and, thus, can be manipulated. This is what we consider in our work.\n\n2) You are linking to a discussion which does NOT apply to our setting. (i) It talks about text classification. (ii) The discussion you are linking to claims that text classification can easily be fooled (e.g. just simple random perturbations). Simple perturbations, however, do NOT have a strong effect on graph neural networks. This result was already clearly shown by other graph attack papers (see again the references in our paper). We also compare to strong baselines (including a random one) in our work which are consistently outperformed by our method.\n\n3) Your statement “it is even easier to fool graph neural networks” is simply incorrect. Due to (1) you cannot make any direct conclusion from text to graphs and due to (2) it has been shown that it is NOT easy to fool graph neural networks (e.g. with random perturbations). 
Due to the challenging nature of achieving graph attacks, we need more advanced principles -- like the one proposed in our paper.", "Graph neural networks are just special cases of neural networks for classifying text (which is just a chain graph). To generate text that fools state-of-the-art classifiers one doesn't need to do much, and certainly not the method used in the paper (see e.g. the discussion in https://openreview.net/forum?id=ByghKiC5YX&noteId=B1xno5Dz6X). It is therefore quite obvious that it is even easier to fool graph neural networks, so why all the fancy methods?", "This paper proposes an algorithm to alter the structure of a graph by adding/deleting edges so as to degrade the global performance of node classification. The main idea is to use the idea of meta-gradients from meta-learning to solve the bilevel optimization problem. \n\nThe paper is clearly presented. The main contribution is to use meta-learning to solve the bilevel optimization in the discrete graph data using greedy selection approach. From the experimental results, this treatment is really effective in attacking the graph learning models (GCN, CLN, DeepWalk). However, the motivation in using meta-learning to solve the bilevel optimization is not very clear to me, e.g., what are the advantages it can offer?\n\nTheoretically, the paper could have given some discussion on the optimality of the meta-gradient approach to bilevel optimization to strengthen the theoretical aspect. For the greedy selection approach in Eq (8), is there any sub-modularity for the score function used?\n\nSome minor suggestions and comments:\n1) please summarize the attacking procedures in the form of an algorithm\n2) please have some discussion on attacking the graph attributes besides the structure\n3) please have an complexity analysis and empirical evaluations of the meta-gradient computations and approximations", "This paper studied data poisoning attacking for graph neural networks. The authors proposed treating graph structures as hyperparameters and leveraged recent progress on meta-learning for optimizing the adversarial attacks. Different from some recent work on adversarial attacks for graph neural networks (Zuigner et al. 2018; Dai et al. 2018), which focus on attacking specific nodes, this paper focuses on attacking the overall performance of graph neural networks. Experiments on a few data sets prove the effectiveness of the proposed approach. \n\nStrength:\n- the studied problem is very important and recently attracting increasing attention\n- Experiments show that the proposed method is effective.\n\nWeakness:\n- the complexity of the proposed method seems to be very high\n- the data sets used in the experiments are too small\nDetails:\n-- the complexity of the proposed method seems to be very high. The authors should explicitly discuss the complexity of the proposed method. \n-- the data sets in the experiments are too small. Some large data sets would be much more compelling.\n-- Are the adversarial examples identified by the proposed method transferrable to other graph embedding algorithms (e.g., the unsupervised node embedding methods, DeepWalk, LINE, and node2vec)?\n-- I like Figure 3, though some concrete examples would be more intuitive. " ]
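The bilevel principle defended throughout the responses above (differentiating the attacker's loss through the surrogate classifier's own training) can be sketched with PyTorch autograd. The surrogate and attack losses below are simplified placeholders and the adjacency matrix is treated as continuous; the paper's actual model and approximations are more involved:

```python
import torch
import torch.nn.functional as F

def surrogate_loss(adj, x, y, w):
    """Placeholder surrogate: a linear 'GCN-like' classifier on the graph."""
    return F.cross_entropy(adj @ x @ w, y)

def meta_gradient(adj, x, y, w0, T=5, lr=0.1):
    """d l_atk / d adj, differentiated through T unrolled training steps,
    treating the (relaxed, continuous) adjacency matrix as a hyperparameter."""
    adj = adj.clone().requires_grad_(True)
    w = w0.clone().requires_grad_(True)
    for _ in range(T):                           # inner loop: train the surrogate
        g, = torch.autograd.grad(surrogate_loss(adj, x, y, w), w,
                                 create_graph=True)  # keep graph for meta-grad
        w = w - lr * g
    l_atk = -surrogate_loss(adj, x, y, w)        # Meta-Train variant: l_atk = -l_train
    return torch.autograd.grad(l_atk, adj)[0]    # the attacker ascends this gradient

# toy usage with random data: 8 nodes, 4 features, 2 classes
adj = torch.rand(8, 8); x = torch.rand(8, 4)
y = torch.randint(0, 2, (8,)); w0 = torch.zeros(4, 2)
print(meta_gradient(adj, x, y, w0).shape)
```

The approximate variant discussed in the first review would replace this unrolled second-order computation with a cheaper first-order surrogate for the meta-gradient.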
[ -1, 7, -1, -1, -1, -1, -1, 7, 6 ]
[ -1, 4, -1, -1, -1, -1, -1, 4, 4 ]
[ "BJxOYyT-0m", "iclr_2019_Bylnx209YX", "HkeXZtLM2m", "HJe1c7y_27", "H1lFhSrF3X", "BkltS1uMpQ", "iclr_2019_Bylnx209YX", "iclr_2019_Bylnx209YX", "iclr_2019_Bylnx209YX" ]
iclr_2019_ByloIiCqYQ
Maximal Divergence Sequential Autoencoder for Binary Software Vulnerability Detection
Due to the sharp increase in the severity of the threat imposed by software vulnerabilities, the detection of vulnerabilities in binary code has become an important concern in the software industry, such as the embedded systems industry, and in the field of computer security. However, most of the work in binary code vulnerability detection has relied on handcrafted features which are manually chosen by a select few, knowledgeable domain experts. In this paper, we attempt to alleviate this severe binary vulnerability detection bottleneck by leveraging recent advances in deep learning representations and propose the Maximal Divergence Sequential Auto-Encoder. In particular, latent codes representing vulnerable and non-vulnerable binaries are encouraged to be maximally divergent, while still being able to maintain crucial information from the original binaries. We conducted extensive experiments to compare and contrast our proposed methods with the baselines, and the results show that our proposed methods outperform the baselines in all performance measures of interest.
accepted-poster-papers
* Strengths This paper applies deep learning to the domain of cybersecurity, which is non-traditional relative to more common domains such as vision and speech. I see this as a strength. Additionally, the paper curates a dataset that may be of broader interest. * Weaknesses While the empirical results are good, there appears to be limited conceptual novelty. However, this is fine for a paper that is providing a new task in an interesting application domain. * Discussion Some reviewers were concerned about whether the dataset is a substantial contribution, as it is created based on existing publicly available data. However, these concerns were addressed by the author responses and all reviewers now agree with accepting the paper.
train
[ "H1eNrWoxyV", "BJga1_cxyN", "S1l--zJuTm", "SklIUa9mAm", "SkgU5i5m0Q", "r1e4s7lT6Q", "SkeGRelT6X", "Hke2AkxTpX", "ByxCw4yT67", "ryeuP2Ikam", "Skl_mb8yTX", "SkevBv5dhm" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Many thanks for your decision. We appreciate this.", "Based on the response and the revision, I would like to increase my score to 6.", "This paper proposes a variational autoencoder-based architecture for binary code embedding. For evaluation, they construct a dataset by compiling source code in the NDSS18 dataset. They evaluate their approaches against several neural network baselines, and demonstrate that their learned embeddings are more effective at distinguishing between vulnerable and non-vulnerable binary code.\n\nThe application of deep representation learning for (binary) vulnerability detection is a promising direction in general. Meanwhile, the authors did a quite comprehensive comparison with neural network baselines for embedding representation. However, I have several questions and concerns about the paper:\n\n- The contributions of this paper are unclear to me. The authors claim that a main contribution is their dataset. I agree that this is a contribution, but since this dataset is built upon an existing dataset with source code, and the dataset construction techniques themselves are not novel, especially for machine learning community, I do not see a significant contribution in this part.\n\n- The proposed approach is new, but the technical novelty is marginal. I think this model design is not specific to the binary vulnerability detection, but should also be applicable to other vulnerability detection settings, e.g., the original NDSS18 dataset. It would be great if the proposed approach also performs better on other vulnerability detection tasks than the baselines.\n\n- What would be the performance of using hand-designed features on the same benchmark? If the proposed approach learns better embeddings, any intuition on what additional information is captured by the learned embeddings?\n\nMinor suggestions: The paper needs an editing pass to fix some typos. Also, the authors seem to setup the paper template in a wrong way, and may need to consider fixing it.", "Many thanks for your response and comment. We are doing the proofreading for the appendix and going to submit the revised version as soon as possible.", "An appendix with with (1) a detailed explanation of the compilation process and (2) multiple code examples (vulnerable, non-vulnerable, compiles successfully, cannot be compiled) will significantly improve this paper. I think, where possible, try to integrate this information (even at a summarized high level) into the text of the paper itself, since it will remove some of the mystery I experienced reading it. ", "We are grateful for the reviewer’s constructive comments. \n\n* The contributions are unclear to me. The authors claim that the main contribution is their dataset. I agree that this is a contribution, but since this dataset is built upon an existing dataset with source code, and the dataset construction techniques themselves are not novel, especially for machine learning community\n- Deep learning has enjoyed great success in the domains of computer vision, speech recognition, and natural language processing. However, deep learning has only had limited applications in the cybersecurity domain, especially in the case of binary software vulnerability detection — an important and difficult problem in cybersecurity. The main reason is due to the lack of qualified binary datasets for the vulnerability detection task. Our contribution is to create a qualified binary dataset for this task and propose a new method that results in good performance. 
\n- It is arguable that the dataset is built upon an existing dataset with source code, hence the task of creating the binary dataset is perceived to be trivial. However, the various source codes in vulnerability detection are collected from many different software libraries, packages are always incomplete with many missing variables, data types, functions, class declarations and so on. Each missing code item might itself have large variations in its prototype and signature. Our tool needs to parse the source codes and be aware of the relationship between the missing code item/chunk and its role in order to fix the source code. Overall, this task can be quite complicated.\n- We believe that the two contributions alone justify the paper. The contributions are complementary, additive and they promote the study of deep learning (machine learning) for binary software vulnerability detection — an application of cybersecurity that is significantly challenging wherein deep learning (machine learning) has been restrictively applied. However, we have shown that deep learning can be successfully applied to this problem.\n\n* It would be great if the proposed approach also performs better on other vulnerability detection tasks than the baselines\n- In practice, binary vulnerability detection is more relevant and impactful than source code vulnerability detection. The reason is that when using a commercial application, we generally only possess its binary code and usually not its source code. Because of license copyrights, the binary code cannot be decompiled back to the source code for vulnerability detection. We, therefore, need to detect vulnerabilities at the binary level. Our proposed method can also be adapted to source code vulnerability detection and we believe that the proposed method should perform better than the baselines in this application. In this paper, we mainly focus on binary vulnerability detection, which is harder and more applicable than source code vulnerability detection. As an aside, source code vulnerability detection should also have a better performance than the equivalent binary code vulnerability detection due to the fact that a lot of information (e.g., semantics) is stripped from the source code during compilation.\n\n* The performance of using hand-designed features on the same benchmark? If the proposed approach learns better embeddings, any intuition on what additional information is captured by the learned embeddings?\n- Grieco et al. 2016 proposed using dynamic features extracted from the execution of binaries and static features extracted from the binary programs and then trained a feed-forward neural network for classification. They undertook experiments using their own dataset VDiscovery with 1039 binaries. They claimed in that paper that they had to collect their own datasets because they found no suitable datasets to perform the evaluation of their technique. We cannot compare with this method because its code is not available.\n- The two key aspects that contribute to the success of our proposed model are i) the capability to reconstruct sequential binary codes from their latent representations via the VAE formulation and ii) the ability to maximize the divergence of latent representations by “pushing away” the two learnable priors. The first aspect ensures that the latent representations are able to capture the crucial information in the original binary codes. 
We observe that a vulnerable binary and its fixed version only differ by a few machine instructions, hence the ability to be able to reconstruct is important to differentiate a vulnerable binary and its fixed version in the latent space because the model needs to pay attention to the slight difference in vulnerable and fixed binaries to successfully reconstruct them. In addition, by maximizing the divergence between two learnable priors, the latent representations of vulnerable and non-vulnerable binaries are encouraged to be maximally divergent for classification purpose.\n", "We are grateful for the reviewer’s constructive comments.\n\n* The operation that creates dataset may introduce bias or variance. (The developed tool that automatically detects the syntactical errors in a given piece of source code, fixes them, and finally compiles the fixed source code into binaries, may change the distribution of data.) Why not follow the way of producing a dataset of malware detection or other tasks that using binary code.\n- The task of creating a binary dataset for vulnerability detection is much harder and complex compared to that of malware detection. For malware detection, one can easily collect infectious binaries, while vulnerable software code requires domain experts with relevant expertise to inspect the software and labeling them (identify the location(s) of the vulnerability, the vulnerability type etc.). Since binary codes are less informative and even experts are generally unable to label them directly, only vulnerable source codes are available. However, the process to collect vulnerable source code is labor-intensive and manual since one needs to go to the CVE website (https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-9989) and then navigate to relevant websites to manually collect the source code. Due to the nature of data collection, the various source codes in vulnerability detection are collected from many different software libraries, packages are always incomplete with many missing variables, data types, functions, class declarations and so on. Each missing code item might itself have large variations in its prototype and signature. Our tool needs to parse the source codes and be aware of the relationship between the missing code item/chunk and its role in order to fix the source code given the fact that the source codes have not compiled yet. It is worth noting that for compilable source code, one can use the ROSE parser (https://en.wikipedia.org/wiki/ROSE_(compiler_framework) ) to completely parse their structures. \n- To fix any source code, we first compiled the code using the gcc/g++ (MinGW) compiler and based on the error messages to gradually fix the code. For example, when the compiler gave an error message for a missing declaration of an identifier, our tool needs to parse the source code to understand the type of this identifier (e.g., a variable, data type, function, class) and then provide the declaration for this identifier. Still, the prototypes of data types, functions, and classes might be various enormously and be complex. Our automatic tool tackles these issues.\n- Our tool does not change the execution flow, semantic and syntactic structure of the source code which are crucial in identifying vulnerabilities. Thus, the fixing and compiling process does not affect vulnerabilities inside the source codes.\n\n* It seems that the proposed model fails to consider the properties of the binary codes in this task. 
It would be more interesting if some design incorporates the special properties of the task.\n- Source code has useful information with the syntax and semantic information provided by high-level programming languages and one can take advantage of the syntax and semantic information to construct tree structures like, for example, the abstract syntax tree. In contrast, a binary code has reduced syntax and semantic information wherein they can be only viewed as a sequence of machine instructions or bytes.\n- In our proposed method, the embedding component (see Section 3.1) of the machine instruction format was designed to capture the properties of the binary code. We observed a significant improvement in predictive performance when using this embedding technique instead of using a standard word embedding. \n\n* The discussion in Figure 2 and equations are unclear. More explanations are needed. e.g. how to testing with label-unknown data.\n- The key aspects of our method are i) how to develop the sequential VAE to encode binaries and ii) how to maximize the latent codes of the different classes by “pushing away” the learnable priors. For testing a binary with an unknown label, we first feed it into the RNN encoder to work out its latent code and then use the classifier acting on the latent space to classify the binary. We will revise this technical section to clarify.\n\n* Many typos are found. E.g., : the given given -> the given; k = 1,2 should be k = 0,1 \n- Thanks. We will fix them in the revised version.\n", "We are grateful for the reviewer’s constructive comments.\n\n* Dataset and challenging points\n- Providing a new dataset for binary software vulnerability detection (SVD) is one of the key contributions of this paper. Deep learning has enjoyed great success in the domains of computer vision, speech recognition, and natural language processing. However, deep learning has only had limited applications in the cyber security domain, especially in the case of binary SVD — an important and difficult problem in cybersecurity. The reason is due to the scarcity of qualified datasets for source and binary code. For collecting vulnerable source code, one needs to access the CVE website and then navigate to the relevant websites to manually collect the source code. Due to the nature of data collection, the various source codes in vulnerability detection are collected from many different software libraries, packages are always incomplete with many missing variables, data types, functions, class declarations and so on. The challenge here is how to parse the source code to determine its structure given the fact that the source code has yet to be compiled. Note that for compilable source code, one can use the ROSE parser to completely parse the code ’s structure. \n- To fix any source code, we first compiled the code using the gcc/g++ (MinGW) compiler and based on the error messages to gradually fix the code. For example, when the compiler gave an error message for a missing declaration of an identifier, our tool needs to parse the source code to understand the type of this identifier (e.g., a variable, data type, function, class) and then provide the declaration for this identifier. Still, the prototypes of data types, functions, and classes might be various enormously and be complex. Our automatic tool tackles these issues. 
We provide more clarification and explanation of how our tool works in the appendix section of the revised version.\n\n* Necessity of SVD for binary codes\n- In practice, binary vulnerability detection (VD) is more relevant and impactful than source code vulnerability detection. The reason is that when using a commercial application, we generally only possess its binary code and usually not its source code. Because of license copyrights, the binary code cannot be decompiled back to the source code for VD. We, therefore, need to detect vulnerabilities at the binary level.\n\n* Is SVD easier from source code or from binary code?\n- Compared with source code VD, binary code VD is significantly more difficult because much of the syntax and semantic information provided by high-level programming languages is lost during the compilation process. For source code, one can take advantage of the syntax and semantic information to construct tree structures (e.g. the abstract syntax tree). In contrast, binary code has much less syntactic and semantic information, as it can only be viewed as a sequence of machine instructions or bytes.\n\n* The architecture in Fig.2\n- In Fig. 2, we used a simple RNN for demonstration purposes. However, we could use a Bidirectional RNN with GRU or LSTM cells. In our experiments, we employed a dynamic RNN with a GRU cell.\n\n* Applying the proposed method to other datasets, or something more specific to the task, would be useful\n- Our proposed method is general enough for application to other popular problems like sentiment analysis. We proposed a specific embedding technique for binary SVD (see Sec. 3.1) where the format of machine instructions was considered.\n- In this work, we want to demonstrate the applicability of deep learning (or machine learning) to SVD — an important and complex problem in cybersecurity. With the dataset provision and preliminary experimental results, we hope to further encourage the application of deep learning (machine learning) to SVD and other cybersecurity applications.\n\n* Variational autoencoders are better models for the task\n- The two key aspects that contribute to the success of our proposed model are i) the capability to reconstruct sequential binary codes from their latent representations via the VAE formulation and ii) the ability to maximize the divergence of latent representations by "pushing away" the two learnable priors. The first aspect ensures that the latent representations can capture the crucial information in the original binary codes. We observe that a vulnerable binary and its fixed version only differ by a few machine instructions, hence the ability to reconstruct is important to differentiate a vulnerable binary and its fixed version in the latent space, because the model needs to pay attention to the slight difference between vulnerable and fixed binaries to successfully reconstruct them. In addition, by maximizing the divergence between two learnable priors, the latent representations of vulnerable and non-vulnerable binaries are encouraged to be maximally divergent for classification purposes.\n", "We are grateful for the reviewer's constructive comments. \n\n* I'm not sure I actually buy into the basic premise that it makes sense to model vulnerable vs. non-vulnerable code as two different latent spaces. Aren't the changes to make a vulnerable function safe again rather small and/or subtle? 
I think that beyond visualizing the convergence of properties of the latent spaces it would greatly improve this paper to inspect which aspects of the source\n- The two key aspects that contribute to the success of our proposed model are i) the capability to reconstruct sequential binary codes from their latent representations via the VAE formulation and ii) the ability to maximize the divergence of latent representations by "pushing away" the two learnable priors. The first aspect ensures that the latent representations are able to capture the crucial information in the original binary codes. We observe that a vulnerable binary and its fixed version only differ by a few machine instructions, hence the ability to reconstruct is important to differentiate a vulnerable binary and its fixed version in the latent space, because the model needs to pay attention to the slight difference between vulnerable and fixed binaries to successfully reconstruct them. In addition, by maximizing the divergence between two learnable priors, the latent representations of vulnerable and non-vulnerable binaries are encouraged to be maximally divergent for classification purposes.\n- We have recently found a tool referenced in a paper (http://proceedings.mlr.press/v80/chen18j.html) that could be useful for inspecting which aspects of the source and binary contribute to both the latent representation and the final classification as vulnerable vs. non-vulnerable.\n\n* I wish the process of "fix"ing the input code was better described. The authors should list how many vulnerable vs. non-vulnerable samples required fixing vs. could be compiled in their original form.\n- In the appendix section, we will add the technical steps, challenges, and details of the process to compile source to binary codes. We will also list in the paper the number of vulnerable and non-vulnerable source codes that required fixing and those that could be compiled using our tool.\n\n* The definition of "vulnerable" may be obvious to someone more familiar with the domain but seemed to me somewhat vague and never directly addressed.\n- We indeed stated the definition of a software vulnerability in the first sentence of the paper. To further clarify this, we will add some typical examples of vulnerable source and binary codes in the appendix.\n", "This paper sets out to classify source code snippets as "vulnerable" or "not vulnerable" using sequential auto-encoders with two latent distributions (corresponding to the output classes), regularized to maximize the divergence between these two distributions (named the Maximal Divergence Sequential Auto-Encoder). The authors created a compiled subset of the NDSS18 vulnerable vs. non-vulnerable software dataset (which is listed as one of their primary contributions). The dataset construction required non-trivial effort since example code snippets are often incomplete and the authors needed to "fix" these code examples in order to compile them. The fixed code examples are then compiled for both Windows and Linux and for both the x86 and x86-64 architectures. The inputs to all predictive models are the opcode sequences of the compiled programs. \n\nThis paper compares against one previously published vulnerability detection method (VulDeePecker) which is a bidirectional RNN followed by a linear classifier. 
They also compare with a cascade of models with increasingly complex components:\n\n* RNN-R: A recurrent neural network trained in an unsupervised fashion (language modeling over opcode sequences), whose representations are then fed into an independent linear model. \n* RNN-C: End-to-end training of a recurrent model over opcodes, followed by a single dense layer.\n* Para2Vec: Encoding of the opcode sequence using the paragraph-to-vector architecture — I'm curious what they used as the paragraph boundaries in the compiled programs and whether the subsequent classifier was the same as RNN-C. \n* SeqVAE-C: Sequential variational auto-encoder trained end-to-end with a final classification layer. \n* MDSAE-RKL: Maximal divergence sequential auto-encoder with KL divergence between the two classes' latent distributions, final classifier trained independently. \n* MDSAE-RWS: Maximal divergence sequential auto-encoder with L2/Wasserstein divergence between the two classes' latent distributions, final classifier trained independently. \n* MDSAE-CKL: Maximal divergence sequential auto-encoder with KL divergence between the two classes' latent distributions, final classifier included as the final layer of the whole model.\n* MDSAE-CWS: Maximal divergence sequential auto-encoder with L2/Wasserstein divergence between the two classes' latent distributions, final classifier included as the final layer of the whole model.\n\nThe two MDSAE models using Wasserstein divergence vastly outperform the two equivalent models using KL divergence. Another generalization that can be drawn from the evaluation is that models which are trained with supervision end-to-end outperform those which train the representation and the classifier separately. \n\nOverall, I think this is an interesting and cool paper but I'm not sure I actually buy into the basic premise that it makes sense to model vulnerable vs. non-vulnerable code as two different latent spaces. Aren't the changes to make a vulnerable function safe again rather small and/or subtle? I think that beyond visualizing the convergence of properties of the latent spaces it would greatly improve this paper to inspect which aspects of the source contribute to both the latent representation and final classification as vulnerable vs. non-vulnerable. \n\nAlso, I wish the process of "fix"ing the input code was better described, since the failure of this procedure excluded 4k/13k of the programs/functions in their initial dataset and had the potential to introduce learnable biases in the source code. At the very least, the authors should list how many vulnerable vs. non-vulnerable samples required fixing vs. could be compiled in their original form. \n\nLastly, the definition of "vulnerable" may be obvious to someone more familiar with the domain but seemed to me somewhat vague and never directly addressed. \n\nTypo:\np3, need space in "obtain32, 281"", "This paper proposes a model to automatically extract features for vulnerability detection using deep learning techniques. \n\nPros:\n+ Creates a labeled dataset for binary code vulnerability detection and attempts to solve the difficult but practical task of vulnerability detection.\n+ Extends the VAE from a single prior to multiple priors. \n+ Uses figures and visualizations to show the behavior of the model.\n\nCons:\n- The operation that creates the dataset may introduce bias or variance. 
(The developed tool that automatically detects the syntactical errors in a given piece of source code, fixes them, and finally compiles the fixed source code into binaries, may change the distribution of the data.) Why not follow the way datasets are produced for malware detection or other tasks that use binary code?\n- It seems that the proposed model fails to consider the properties of the binary codes in this task. It would be more interesting if some design incorporated the special properties of the task.\n- The discussion in Figure 2 and the equations are unclear. More explanations are needed, e.g., how to test with label-unknown data.\n- Many typos are found. E.g.: the given given -> the given; k = 1,2 should be k = 0,1 \n", "The paper proposes a method to classify vulnerable and non-vulnerable binary codes, where each data instance is a binary code corresponding to a sequence of machine instructions. The contributions include the creation of a new dataset for binary code vulnerability detection and the proposition of an architecture based on a supervised adaptation of the variational auto-encoder, built upon a sequential encoding of the input and using a regularization term to better discriminate positive from negative data. An experimental evaluation on the proposed data is presented, including several baselines; the results show the good behavior of the method.\n\nPros:\n-Presentation of a new application of representation learning models\n-Construction of a new dataset for the community for binary software vulnerability detection\n-The proposed model shows good performance\nCons:\n-The presentation of the dataset is, for me, rather limited even though it is a significant contribution for the authors; it seems to be an extension of an existing dataset for source code vulnerability detection.\n-From the last remark, it is unclear to me if the dataset is representative of the binary code vulnerability problem\n-The proposed architecture is reasonable and maybe new, but I find it natural with respect to existing work in the literature.\n\nComments:\n\n-If providing a new dataset is a key contribution, the authors should spend more time presenting the dataset. What makes it interesting/novel/challenging must be clarified. \nThis dataset seems actually built from the existing NDSS18 dataset for source code vulnerability detection. If I understood correctly, the authors have compiled (and sometimes corrected) the source to create binaries, then they use the labels in NDSS18 to label the binary codes obtained. \nThis is a good start and can be useful for the community.\nHowever, the notion of vulnerability is not defined and it is difficult for me to evaluate the interest of the dataset.\nI am not an expert in the field, but I am not that convinced that vulnerability for binary code is necessarily related to vulnerability that can be detected from source code.\nIndeed, one can think that some vulnerabilities may appear in binary code that cannot be detected from source code: e.g., the use of unstable libraries, problems with specific CPU architectures, or problems due to different interpretations of standards.\n\nThe current version of the dataset seems to be one where the goal is to find, in binaries, the vulnerabilities that can be detected from source code. 
It would be interesting here to know whether detecting the vulnerabilities is easier from source code or from binary code.\n\nIt could be good if the authors could discuss this point further.\n\n-The architecture proposed by the authors seems to use a sequential model (RNN or other) as indicated in Fig. 2; the authors should make this point precise.\nThe architecture is general enough to work on other problems/tasks - which is good - but the authors focus on the binary code vulnerability dataset in the experiments.\n\nIf the authors think that their contribution is to propose a general method for sequence classification, it could be good to apply it to other datasets.\nOtherwise, something more specific to the task would be useful.\nIn particular, there is no clear discussion justifying that variational autoencoders are better models for the selected task; it could be good to argue more about this.\n\nThat being said, having non-fixed priors and trying to maximize the divergence between the positive and negative distributions are good ideas, though ultimately rather natural.\n\n" ]
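On the KL-versus-Wasserstein gap highlighted in the model-by-model review above: both divergences have closed forms for diagonal Gaussians, and a quick numerical check shows KL blowing up as the prior variances shrink while the squared 2-Wasserstein distance stays bounded. The formulas below are standard; reading this as the reason the RWS/CWS variants trained better is an inference, not a claim made in the record:

```python
import numpy as np

def kl_diag_gauss(m1, s1, m2, s2):
    """KL(N(m1, diag s1^2) || N(m2, diag s2^2)), closed form."""
    return 0.5 * np.sum(
        (s1**2 + (m1 - m2)**2) / s2**2 - 1.0 + 2 * np.log(s2 / s1))

def w2_diag_gauss(m1, s1, m2, s2):
    """Squared 2-Wasserstein distance between diagonal Gaussians."""
    return np.sum((m1 - m2)**2) + np.sum((s1 - s2)**2)

m1, m2 = np.zeros(8), np.ones(8)          # unit mean gap per dimension
for s in (1.0, 0.1, 0.01):                # shrink both prior variances
    s1 = s2 = np.full(8, s)
    print(s, kl_diag_gauss(m1, s1, m2, s2), w2_diag_gauss(m1, s1, m2, s2))
```

At s = 0.01 the KL term is on the order of 10^4 while the Wasserstein term stays at 8, so a "push apart" objective built on KL can dominate the rest of the loss far more easily.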
[ -1, -1, 6, -1, -1, -1, -1, -1, -1, 7, 6, 6 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, -1, 2, 3, 2 ]
[ "BJga1_cxyN", "r1e4s7lT6Q", "iclr_2019_ByloIiCqYQ", "SkgU5i5m0Q", "ByxCw4yT67", "S1l--zJuTm", "Skl_mb8yTX", "SkevBv5dhm", "ryeuP2Ikam", "iclr_2019_ByloIiCqYQ", "iclr_2019_ByloIiCqYQ", "iclr_2019_ByloIiCqYQ" ]
iclr_2019_ByloJ20qtm
Neural Program Repair by Jointly Learning to Localize and Repair
Due to its potential to improve programmer productivity and software quality, automated program repair has been an active topic of research. Newer techniques harness neural networks to learn directly from examples of buggy programs and their fixes. In this work, we consider a recently identified class of bugs called variable-misuse bugs. The state-of-the-art solution for variable misuse enumerates potential fixes for all possible bug locations in a program, before selecting the best prediction. We show that it is beneficial to train a model that jointly and directly localizes and repairs variable-misuse bugs. We present multi-headed pointer networks for this purpose, with one head each for localization and repair. The experimental results show that the joint model significantly outperforms an enumerative solution that uses a pointer based model for repair alone.
accepted-poster-papers
This paper provides an approach to jointly localize and repair VarMisuse bugs, where a wrong variable from the context has been used. The proposed work provides an end-to-end training pipeline for jointly localizing and repairing, as opposed to independent predictions in existing work. The reviewers felt that the manuscript was very well-written and clear, with fairly strong results on a number of datasets. The reviewers and AC note the following potential weaknesses: (1) reviewer 4 brings up related approaches from automated program repair (APR) that are much more general than VarMisuse bugs, and the paper lacks citations and comparisons to them, (2) the baselines that were compared against are fairly weak, and some recent approaches like DeepBugs and Sk_p are ignored, (3) the approach is trained and evaluated only on synthetic bugs, which look very different from realistic ones, and (4) the contributions were found to be restricted in novelty, as the approach just uses a pointer-based LSTM for locating and fixing bugs. The authors provided detailed comments and a revision to address and clarify these concerns. They added an evaluation on realistic bugs, along with differences from DeepBugs and Sk_p, and differences between neural and automated program repair. They also added more detailed comparisons, including separating the localization vs. repair aspects by comparing against enumeration. During the discussion, the reviewers disagreed on the "weakness" of the baseline, as reviewers 1 and 4 felt it is a reasonable baseline since it builds upon the Allamanis paper. They found, to different degrees, that the results on realistic bugs are much more convincing than the synthetic bug evaluation. Finally, all reviewers agree that the novelty of this work is limited. Although the reviewers disagree on the strength of the baselines (a recent paper) and the evaluation benchmarks, they agreed that the results are quite strong. The paper addressed many of the concerns in the response/revision, and thus the reviewers agree that it meets the bar for acceptance.
val
[ "Byg7TN9daQ", "rkxYwwjqTQ", "BJe25LDiRX", "B1ghy9f5Rm", "H1gNmih4Am", "SJg8Qn6gAQ", "r1eodmKxRQ", "BJx4h1YgCX", "H1ecT0_gR7", "H1gAbwMlAX", "HJgADrtPT7", "S1lIQfpZ6X", "SyeyTXFyTQ", "HkgAKQg93Q" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer" ]
[ "This paper considers the problem of VarMisuse, a kind of software bug where a variable has been misused. Existing approaches to the problem create a complex model, followed by enumerating all possible variable replacements at all possible positions, in order to identify where the bug may exist. This can be problematic for training which is performed using synthetic replacements; enumeration on non-buggy positions does not reflect the test case. Also, at test time, enumerating is expensive, and does not accurately capture the various dependencies of the task. This paper instead proposes a LSTM based model with pointers to break the problem down into multiple steps: (1) is the program buggy, (2) where is the bug, and (3) what is the repair. They evaluate on two datasets, and achieve substantial gains over previous approaches, showing that the idea of localizing and repairing and effective.\n\nI am quite conflicted about this paper. Overall, the paper has been strengths:\n- It is quite well-written, and clear. They do a good job of describing the problems with earlier approaches, and how their approach can address it.\n- The proposed model is straightforward, and addresses the problem quite directly. There is elegance in its simplicity.\n- The evaluation is quite thorough, and the resulting gains are quite impressive.\n\nHowever, I have some significant reservations about the novelty and the technical content. The proposed model doesn't quite bring anything new to the table. It is a straightforward combination of LSTMs with pointers, and it's likely the benefits are coming from the reformulation of the problem, not from the actual proposed model. This, along with the fact that VarMisuse is a small subset of the kinds of bugs that can appear in software, makes me feel the ideas in this paper may not lead to significant impact on the research community.\n\nAs a minor aside, this paper addresses some specific aspects of VarMisuse task and the Allamanis et al 2018 model, and introduces a model just for it. I consider the Allamanis model a much more general representation of programs, and much more applicable to other kinds of debugging tasks (but yes, since they didn't demonstrate this either, I'm not penalizing this paper for it).\n\n--- Update ----\nGiven the author's response and the discussion, I'm going to raise the score a little. Although there are some valid concerns, it provides a clear improvement over Allamanis et al paper, and provides an interesting approach to the task.\n", "This paper presents an LSTM-based model for bug detection and repair of a particular type of bug called VarMisuse, which occurs at a point in a program where the wrong identifier is used. This problem is introduced in the Allamanis et al. paper. The authors of the paper under review demonstrate significant improvements compared to the Allamanis et al. approach on several datasets.\n\nI have concerns with respect to the evaluation, the relation of the paper compared to the state-of-the-art in automatic program repair (APR), and the problem definition with respect to live-variable analysis.\n\nMy largest concern about both this paper and the Allamanis et al. paper is how it compares to the state-of-the-art in APR in general. There is a large and growing amount of work in APR as shown in the following papers:\n[1] L. Gazzola, D. Micucci, and L. Mariani, “Automatic Software Repair: A Survey,” IEEE Transactions on Software Engineering, pp. 1–1, 2017.\n[2] M. 
Monperrus, “Automatic Software Repair: A Bibliography,” ACM Comput. Surv., vol. 51, no. 1, pp. 17:1–17:24, Jan. 2018.\n[3] M. Motwani, S. Sankaranarayanan, R. Just, and Y. Brun, “Do automated program repair techniques repair hard and important bugs?,” Empir Software Eng, pp. 1–47, Nov. 2017.\n\nAlthough the proposed LSTM-based approach for VarMisuse is interesting, it seems to be quite a small delta compared to the larger APR research space. Furthermore, the above papers on APR are not referenced.\n\nThe paper under review mostly uses synthetic bugs. However, they do have a dataset from an anonymous industrial setting that they claim is realistic. In such a setting, I would simply have to trust the blinded reviewers. However, the one industrial software project tells me little about the proposed approach’s effectiveness when applied to a significant number of widely-used software programs like the ones residing in state-of-the-art benchmarks for APR, of which there are at least the following two datasets:\n[4] C. L. Goues et al., “The ManyBugs and IntroClass Benchmarks for Automated Repair of C Programs,” IEEE Transactions on Software Engineering, vol. 41, no. 12, pp. 1236–1256, Dec. 2015.\n[5] R. Just, D. Jalali, and M. D. Ernst, “Defects4J: A Database of Existing Faults to Enable Controlled Testing Studies for Java Programs,” in Proceedings of the 2014 International Symposium on Software Testing and Analysis, New York, NY, USA, 2014, pp. 437–440.\n\nThe above datasets are not used or referenced by the paper under review.\n\nMy final concern about the paper is the formulation of live variables. A variable is live at certain program points (e.g., program statements, lines, or tokens as called in this paper). For example, from Figure 1 in the paper under review, at line 5 in (a) and (b), object_name and subject_name are live, not just sources. In the problem definition, the authors say that \"V_def^f \\subseteq V denotes the set of all live variables\", which does not account for the fact that different variables are alive (or dead) at different points of a program. The authors then say that, for the example in Figure 1, \"V_def^f contains all locations in the program where the tokens in V appear (i.e., tokens in the Blue boxes), as well as token sources from line 1”. The explanation of the problem definition when applied to the example does not account for the fact that different variables are alive at different program points. I’m not sure to what extent this error negatively affects the implementation of the proposed model. However, the error could be potentially quite problematic.", "We have finished evaluating the baseline enumerative model on the real dataset (Section 4.4). The best localization and localization+repair accuracies achieved by the enumerative approach are 6.1% and 4.5% respectively. The corresponding True positive and Classification Accuracy for the model are 41.7% and 47.2%. 
In contrast, our joint model achieves significantly higher accuracies (+15% for localization and +11% for localization+repair) as reported in Section 4.4 (also provided below for comparison).\n\n                     True Positive   Classification   Localization   Localization+Repair\nJoint Model              67.3%           66.7%            21.9%            15.8%\nEnumerative Model        41.7%           47.2%             6.1%             4.5%\n\nWe will add this result to Section 4.4 as well.\n", "We thank the reviewer for the useful suggestion to further update the paper with clarifications on the V_def^f set and the industrial dataset; we have updated the paper accordingly in the new revision.\n\nWe do intend to make our code available to others. We believe that the implementation details as described in the paper are also sufficient for reproducing our technique and results.\n\nPlease let us know in case you have any further concerns, and/or suggestions of how we can further improve the paper.\n", "I appreciate the authors adding the related work I noted as missing in the paper and distinguishing their work more from the state of the art. \n\nThe explanation of V_def^f in this response is more helpful than the one-line explanation given in the revised paper. In particular, I find the following sentences useful: \"The V_def^f set contains all variables defined in a function f, including the function arguments; in this way constructing a set of all variables that can be used within the scope. We construct one V_def^f set per function, representing a set of candidate variables for fixing bugs in that function.\" I encourage the authors to clarify their definition of V_def^f further in the paper, as they did in this response.\n\nI think it is important for the authors to clarify that their industrial dataset includes multiple projects in the paper, not just in this response.\n\nAre the authors going to make an implementation of their approach available for others to build upon (e.g., base new approaches on) or improve upon? I believe such practices of releasing implementations help to enhance the science and accelerate the rate of improvement.\n\n", "We thank the reviewer for the suggestion to compare the results of the joint model on the real dataset (Section 4.4) with the enumerative baseline. The repair-only model (underlying the enumerative baseline) from Section 4.1 is trained on the Py150 dataset. For a fair comparison, we have started training the repair-only model on the dataset used for training the joint model in Section 4.4. We will report the results (analogous to Table 1) as soon as the model training is finished in a few days. However, note that the issues with enumerative approaches are fundamental and independent of the choice of datasets. \n\nIn this paper, our aim is to develop new models that can better capture a certain family of program repair tasks (such as VarMisuse) and improve upon previous enumerative approaches on datasets previously used in the literature.\n\nFurther, in order to evaluate how well the model performs on realistic scenarios, we created the dataset used in Section 4.4. Note that the dataset was created by capturing pairs of consecutive snapshots of functions from development histories that differ by a single variable occurrence. We then also include other functions in those files that weren’t changed as sources of correct functions.
We did not intentionally set the percentage of buggy samples to 10%, but it was an artifact of the way we created our dataset by including non-buggy functions from the files that differed by one variable change in consecutive snapshots. The rationale behind this procedure was that it estimates when given files with VarMisuse bugs, in how many cases can the model learn to classify the faulty functions (from among all functions in the file) as faulty and further localize and repair the bugs in those faulty functions.", "Thanks to all the reviewers for their helpful and constructive feedback. We have uploaded a new paper revision to address the comments and feedback:\n\n1. Added a new section 4.4 on evaluation of the model in practice on realistic bugs.\n2. Added a discussion about key differences with the previous work: DeepBugs and Sk_p (Section 2).\n3. Added a discussion on performance comparison with enumerative approaches (Section 4.1).\n4. Changed the problem definition to clarify the definition of V_def set (Section 3.1).\n5. Added a discussion on the relationship between previous automated program repair (APR) work based on tests/specifications and neural program repair approaches (Section 2).", "Thank you for the thoughtful review and constructive feedback.\n\nWe will include a discussion about the differences between our work and the automated program repair (APR) techniques in the literature, as outlined below. The traditional APR approaches differ from our work in the following ways: 1) They require a form of specification of correctness to repair a buggy program, usually as a logical formula/assertion, a set of tests or a reference implementation. 2) They depend on hand-designed search techniques for localization and repair. 3) The techniques are applied to programs which violate the specifications (e.g., a program which fails some tests), that is, to programs which are already known to contain bugs. In contrast, a recent line of research in APR is based on end-to-end learning, of which ours is an instance. Our solution (like some other learning based repair solutions) has the following contrasting features: 1) Our solution does not require any specification of correctness. Instead it learns to fix a common class of errors directly from source code examples. 2) Our solution does not perform enumerative search for localization or repair. We train a neural network to perform localization and repair directly. 3) Our solution is capable of first classifying whether a program has the specific type of bug or not, and subsequently localizing and repairing it.\n\nManyBugs, IntroClass, and Defects4J are benchmarks designed for test-based program repair techniques. The bugs relate to the expected specification of individual programs (captured through test cases of the program) and the nature of bugs vary from program to program. These benchmarks are therefore suitable to evaluate repair techniques guided by test executions. Learning based solutions like ours focus on common error types so that it is possible for a model to generalize across programs, and work directly on embeddings of source code.\n\nThank you for your comment about the variable liveness. We misused the term of live variables, and we will update the paper accordingly. The V_def^f set contains all variables defined in a function f, including the function arguments; in this way constructing a set of all variables that can be used within the scope. 
We construct one V_def^f set per function, representing a set of candidate variables for fixing bugs in that function. In this way, the V_def^f set is a (safe) over-approximation of the in-scope variables at each program location. The over-approximation can lead to predicting an undefined variable as a repair, however, this is not an error and model over time learns not to predict undefined variables. The V_def^f set is not constrained to only live variables; as there are cases when solution to a bug is using a variable that is defined in the scope but not live (not used elsewhere), e.g., subject_name variable in Figure 1a. Regarding your comment about Figure 1, in the blue boxes we show variable usages, not V_def^f set.\n\nWe want to clarify that the examples in our industrial dataset (Section 4.4) are not from a single industrial project. The examples do come from multiple software projects.\n\nPlease let us know if this helped clarify the confusion regarding the problem definition of candidate variable set and the relationship with previous APR work and the more recent neural program repair approaches (ours, Allamanis et. al and others). ", "Thank you for the thoughtful review and constructive feedback. \n\nOur paper proposes a joint model for localization and repair using pointers, which is novel and the main technical contribution of the paper. Even though it is applied specifically to the variable misuse problem, the idea of using pointers is fundamental and portable to other program repair problems. In particular, all program repair techniques require the bug localization step and pointers seem like an ideal mechanism for this as they can pinpoint a buggy location precisely at the token-level. Other previous works in the program repair literature either use enumerative search for localization, or perform localization at the granularity of lines or depend on external tools for localization (such as compiler error messages for syntactic error localization). In contrast, our proposal to use pointers enables an end-to-end learning based solution. We use the pointer mechanism on top of sequence based encoding of programs, but pointers can be combined naturally with other representations of programs; e.g., trees or graphs.\n\nAs we demonstrate in the paper, the previous enumerative approaches assume independence among different predictions that is problematic and leads to poor results. The end-to-end joint localization and repair is an essential step to overcome this issue, and we believe this idea of joint prediction is going to generalize to many other program repair tasks and even program completion tasks.", "It seems that the real dataset has a different distributions than the synthetic bugs dataset based on Py150. Did you observe similar improvements over the baselines on it?\n\nIn general, all current bug-finding research suffers from having no sense of \"recall\" of whether it discovers only very few bugs or it discovers most of the bugs from a certain class. Since the paper aims to look into such issues, this would be good to say what is happening on real data and also why 10% of the samples were chosen as being buggy, while the frequency of the bug is likely much lower. And localization accuracy is in the 21.9% range - still low, do the anomaly-based baseline techniques get to similar numbers?", "Thank you for more comments and helpful suggestions.\n\nRevised version of the paper is added. 
We incorporated some of the above discussion on comparison with DeepBugs (in section 2 - related work) and a discussion on the performance comparison (in section 4.1). We also added a discussion on differences with the Sk_p paper (in section 2), and the results of our model in realistic scenarios (section 4.4), which we also explain below.\n\nThanks for the suggestion on evaluating the model on realistic scenarios. We have been collecting such a dataset for evaluating the model. In particular, we examined development histories in a software company (name elided for anonymity) to extract pairs of consecutive snapshots of code (on a level of functions) which differ by a single variable occurrence. These are indicative of variable misuses; several of which were explicitly pointed out as bugs by code reviewers during the manual code review process. For each snapshot pair (x,y), where x is a function before change and y is the same function after change, we collected all functions from the same file in which function x was present. We expect our model to classify all functions other than x as bug-free. For the function x, we want the model to classify it as buggy, and moreover, localize and repair the bug where the repair is the difference between y and x. In all, we collected 4592 such snapshot pairs. From these, we generated a test dataset of 41672 non-buggy examples and 4592 buggy examples. We trained the pointer model on a training dataset from which we exclude the 4592 files containing the buggy snapshots. When applied on the test dataset, the model achieved true positive rate of 67.3%, classification accuracy of 66.7%, localization accuracy of 21.9% and localization+repair accuracy of 15.8%. In all, on this dataset, our model could localize and repair 727 variable misuse instances. These are promising results on data collected from real developer histories. We have also added a subsection 4.4 to describe the evaluation and the results.\n\nRelationship with Sk_p and applying the model on their dataset:\nWe would like to point out that Sk_p does not perform direct localization, instead it performs an enumerative search (for potential bug locations) which we eschew. Given a program, it considers each statement individually. For each program statement s_i, it considers the previous program statement s_{i-1} and the following statement s_{i+1} as inputs to an encoder while the decoder generates the full statement s’_i that should be present in between the two statements. It performs this prediction for each statement individually and then reports discrepancies as repairs. Note that it can only produce full statements as repairs unlike our approach for predicting a single variable usage repair. Moreover, similar to the DeepBugs approach, it would be difficult for the model to predict repairs that include variables present at two or more lines above/below the buggy variable location. \n\nTheir dataset is unfortunately not useful for our evaluation for the following reasons:\n1. Our model is designed for the variable misuse problem, which typically occurs when programmers copy-paste code fragments and forget to change certain variables. The sk_p dataset is coming from small student programs (5-10 LOC) submitted for MOOC exercises, which will likely not have many variable misuse bugs.\n2. The Sk_p model (similar to the SynFix paper) exploits the fact that many students are solving the same programming problem and likely will write similar solutions which can then be used to train the models. 
In our case, we are generalizing from programs written by developers for different tasks.", "Thank you for the response. Certainly, many of the issues discussed can be incorporated in the paper, not in comments. The task the discussed papers introduce is used in practice to do anomaly detection and then to report bugs. The bug localization of Allamanis et al has very high positive rate (they did not share it in their paper, but text implies it is a few bugs per hundreds of reports). Pradel and Sen however share accuracy on a non-synthetic task and it is around 50%. It does not yet look like that the result of this submission will translate to any better bug-finding technique in practice, but I am looking forward to see if the proposed technique is a good idea on a more realistic scenario.\n\nAlso, this is not the first paper to propose both localization and fixing. The following work does it and their accuracy is lower, but on a more practical task:\nPu, Yewen, Karthik Narasimhan, Armando Solar-Lezama, and Regina Barzilay. “Sk_p: a Neural Program Corrector for MOOCs.”, OOPSLA 2016\n\nOne possibility to improve the submission is to try the neural approach on their dataset and report state-of-the-art results.\n", "Thanks for the review and constructive feedback. We believe there are a few major misunderstandings in the review and we would like to take this opportunity to clarify them. We will be happy to discuss them in more detail if more clarifications might be needed or there are more questions.\n\nWe would first like to point out that ours is the first model that jointly learns to perform both localization and repair of the variable misuse bugs. It exploits the property of this particular class of variable misuse bugs -- both the location and repair corresponds to variable use locations in the program. Unlike Allamanis et al. 2018 that uses an enumerative approach to make a number of predictions for a program that is linear in number of variable uses, our model makes a single prediction using a two pointer based mechanism. \n\nThanks for the pointer to the DeepBugs paper. Note that there are several differences of our work with the DeepBugs paper, which we explain below. We will add this to our revision as well.\n1. DeepBugs learns a classifier over single expressions. It takes a single program expression as input (e.g. “2% i == 0”) and classifies it as positive or negative. On the other hand, in addition to classifying programs, our model learns to localize and also repair the bug using a two-headed pointer network.\n2. Our model uses the full program (up to 250 number of tokens) for learning the vector representation. DeepBugs only looks at a single expression at a time.\n3. Finally, the 80% accuracy number for DeepBugs is only for expression classification. It has no direct comparison with our model’s accuracy since it is a different problem (classifying a single expression as correct compared to analyzing a full program to identify bug location and the corresponding repair). Moreover, our pointer models also get to 82.4% classification accuracy for full programs (Table 1).\n\nAllamanis et al. only report the accuracy of repair-only model, where the model predicts a single variable at a time for each slot location in a program. Translating their 85.5% repair accuracy number to a number that corresponds to repairing the full program would lead to a very different result. 
In Table 1, we try to replicate a similar experiment and show that jointly learning the model leads to significant improvements without sacrificing true positive and classification accuracy. Moreover, Allamanis et al. 2018 perform a significant amount of program preprocessing including type inference, control flow, and data flow analysis to add different types of graph edges. Without such pre-processing, they achieve an accuracy of 55.3% on repair-only tasks (Section 4.3). In our work, we want our distributed representations to automatically learn good representations of programs without any manual feature engineering.\n\nPerformance trade-off: In fact, our proposed architecture is significantly more scalable and easier to train. Since we are using sequence models to compute pointer attentions that are easier to batch over multiple examples, it is much more scalable to train compared to graph models that are difficult to batch because of different graph sizes. Our own graph implementation was significantly slower to train. \n\nIn addition to that, it is also significantly faster at inference time, as it does not need to perform an O(n) number of model predictions, where n is the number of variable use locations in the program under test. Our model performs a single prediction, which is much faster.\n\nPlease let us know if this helped clarify the questions and comments.", "Several recent works propose to discover bugs in code by creating a dataset of presumably correct code and then augmenting the data by introducing a bug and creating a classifier that would discriminate between the buggy and the correct version. This classifier would then be used to predict, at each location in a program, whether a bug is present.\n\nThis paper hypothesizes that running on buggy code (to discover the bug) would lead such a classifier to misbehave and report spurious bugs at many other locations besides the correct one, and that it would fail at precisely localizing the bug. The authors therefore propose a solution that essentially creates a different classifier that is trained to localize the bug.\n\nUnfortunately this leads to a number of weaknesses:\n - The implementation and evaluation are only on a quite syntactic system with low precision that needs to sift through a huge amount of weak and irrelevant signals to make predictions.\n - The gap here is huge: the proposed system is only based on program syntax and gets 62.3% accuracy, but the state-of-the-art has 85.5% (there is actually another recent technique [1] also with accuracy in the >80% range).\n - It is not clear that the entire discussed problem is orthogonal to the selection of such weak baselines to build the improvements on.\n - Trade-offs are not clear: is the proposed architecture slower to train and query than the baselines?\n\nStrengths of the paper are:\n - Well-written and easy to follow and understand.\n - Evaluation on several datasets.\n - Interesting architecture for bug localization, if the idea really works.\n\n[1] Michael Pradel, Koushik Sen. DeepBugs: a learning approach to name-based bug detection" ]
[ 7, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ 4, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ "iclr_2019_ByloJ20qtm", "iclr_2019_ByloJ20qtm", "SJg8Qn6gAQ", "H1gNmih4Am", "BJx4h1YgCX", "H1gAbwMlAX", "iclr_2019_ByloJ20qtm", "rkxYwwjqTQ", "Byg7TN9daQ", "HJgADrtPT7", "S1lIQfpZ6X", "SyeyTXFyTQ", "HkgAKQg93Q", "iclr_2019_ByloJ20qtm" ]
iclr_2019_Byx83s09Km
Information-Directed Exploration for Deep Reinforcement Learning
Efficient exploration remains a major challenge for reinforcement learning. One reason is that the variability of the returns often depends on the current state and action, and is therefore heteroscedastic. Classical exploration strategies such as upper confidence bound algorithms and Thompson sampling fail to appropriately account for heteroscedasticity, even in the bandit setting. Motivated by recent findings that address this issue in bandits, we propose to use Information-Directed Sampling (IDS) for exploration in reinforcement learning. As our main contribution, we build on recent advances in distributional reinforcement learning and propose a novel, tractable approximation of IDS for deep Q-learning. The resulting exploration strategy explicitly accounts for both parametric uncertainty and heteroscedastic observation noise. We evaluate our method on Atari games and demonstrate a significant improvement over alternative approaches.
accepted-poster-papers
The paper introduces a method for using information-directed sampling, taking advantage of recent advances in computing parametric uncertainty and variance estimates for returns. These estimates are used to compute the information gain, based on a formula from (Kirschner & Krause, 2018) for the bandit setting. This paper takes these ideas and puts them together in a reasonably easy-to-use and understandable way for the reinforcement learning setting, which is both nontrivial and useful. The work then demonstrates some successes in Atari. Though it is of course laudable that the paper runs on 57 Atari games, it would make the paper even stronger if a simpler setting (some toy domain) were investigated to understand this approach and some of its design choices more systematically.
val
[ "SJxIO823n7", "B1xRtDI937", "BkxudvgP27", "B1lbK9KWCX", "rylLQ5Y-A7", "rkxMFKYbAQ", "SJg6C-K8jm", "rylspvI-9m" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public" ]
[ "Combining the parametric uncertainty of bootstrapped DQN with the return uncertainty of C51, the authors propose a deep RL algorithm that can explore in the presence of heteroscedasticity. The motivation is quite well written, going through IDS and the approximations in a way that didn't presume prior familiarity.\n\nThe core idea seems quite sound, but the fact that the distributional loss can't be propagated through the full network is troubling. The authors' choice of bootstrapped DQN feels arbitrary, as a different source of parametric uncertainty might be more compatible (e.g. noisy nets), and this possibility isn't discussed.\n\nThe computational limitations are understandable, but the authors should be more transparent about how the subset of games were selected. A toy example would have actually added quite a bit, as it would nice to see that the extent to which this algorithm helps is proportional to the heteroscedasticity in the environment. The advantage of DQN-IDS over bootstrapped suggests that something other than just the sensitivity to return variance is causing these improvements.\n\nIdeally, results with and without the heuristically chosen lower bound (rho) would be presented, as its unclear how much this is needed and its presence loosens the connection to IDS.\n\nThis is a small point, but the treatment of intrinsic motivation (i.e. changing the reward function) for exploration seems overly harsh. Most of these methods are amenable to experience replay, which would propagate the exploration signals and allow for \"deep\" exploration. The fact that they often change the optimal policy should be enough motivation to not discuss them further.\n\nEDIT: I think dealing with the lower bound and including plots for all 55 games pushed this over the edge. It would've been nice if there non-zero scores on Montezuma's Revenge, but I know that is a high bar for a general purpose exploration method. In general I think this approach shows great promise going forward score 6-->7", "The authors propose a way of extending Information-Directed Sampling (IDS) to reinforcement learning. The proposed approach uses Bootstrapped DQN to estimate parametric uncertainty in Q-values, and distributional RL to estimate intrinsic uncertainty in the return. The two types of uncertainty are combined to obtain a simple exploration strategy based on IDS. The approach outperforms a number of strong baselines on a subset of 12 Atari 2600 games.\n\nClarity - I found the paper to be very well-written and easy to follow. Both the background material and the experimental setup were explained very clearly. The main ideas were also motivated quite well. It would have been nice to include a bit more discussion of why IDS is a good strategy, i.e. what are the theoretical guarantees in the bandit case? Section 3.2 could also provide a more intuitive argument.\n\nNovelty - The paper essentially combines the IDS formulation of Kirschner & Krause, Bootstrapped DQN of Osband et al., and the C51 distributional RL method of Bellemare et al. Most of the novelty is in how to combine these ideas effectively in the deep RL setting, which I found sufficient.\n\nSignificance - Improving over existing exploration strategies for deep RL would be a significant achievement. While the results are impressive, I have a few concerns regarding some of the claims.\n\nThe subset of games used to evaluate the proposed approach seems to be biased towards games where there is either a dense reward or exploration is known to be easy. 
Almost every deep RL paper on exploration includes results for at least some of the hard exploration games (see “Unifying Count-Based Exploration and Intrinsic Motivation”). Why were these games excluded from the evaluation? The results would be much stronger if results on all 57 games were included.\n\nThe main difference between DQN-IDS and C51-IDS is that C51-IDS will tend to favor actions with lower return uncertainty. Doesn’t this mean that the improved performance of C51-IDS is due to an improved ability to exploit rather than explore? If this is indeed the case, then I would expect more evidence that this doesn't come at a cost of reduced performance on tasks where exploration is difficult.\n\nFinally, the comparison between Bootstrapped DQN and DQN-IDS conflates the exploration strategies (IDS vs Thompson sampling) with the choice of optimizer (Adam vs RMSProp), so the claim that simply changing the exploration strategy to IDS leads to a major improvement is not valid. It would be interesting to see results for Bootstrapped DQN using the authors’ implementation and choice of optimizer to fully separate the effect of the exploration strategy.\n\nOverall quality - This is an interesting paper with some promising results. I’m not convinced that the proposed method leads to better exploration, but I think it still makes a valuable contribution to the work on balancing exploration and exploitation in RL.\n\n-------\n\nThe rebuttal and revisions addressed some of my concerns so I am increasing my score to 7 ", "This paper investigates sophisticated exploration approaches for reinforcement learning. Motivated by the fact that most bandit algorithms do not handle heteroscedasticity of noise, the authors build on Information-Directed Sampling and on Distributional Reinforcement Learning to propose a new family of exploration algorithms. Two versions of the exploration strategy are evaluated against the state-of-the-art on Atari games: DQN-IDS for homoscedastic noise and C51-IDS for heteroscedastic noise. \n\nThe paper is well-written. The background section provides the clues needed to understand the approach. In IDS, the selected action is the one that minimizes the ratio between a squared conservative estimate of the regret and the information gain. Following (Kirschner and Krause 2018), the authors propose to use \log(1+\sigma^2_t(a)/\rho^2(a)) as the information gain function, which corresponds to a Gaussian prior, where \sigma^2_t is the variance of the parametric estimate of E[R(a)] and \rho^2(a) is the variance of R(a). \sigma^2_t is evaluated by bootstrap (Bootstrapped DQN). Where the paper becomes very interesting is that recent works on distributional RL make it possible to evaluate \rho^2(a). This is the main contribution of this paper: combining two recent approaches for handling heteroscedasticity of noise in Reinforcement Learning.\n\nMajor concern:\nWhile the approach is appealing for handling heteroscedastic noise, the use of a normalized variance (eq 9) and a lower bound on the variance (page 7) reveal that the approach needs some tuning which is not theoretically founded. \nThis is problematic since in reinforcement learning, the environment is usually assumed to be unknown. What are the results when the lower bound on the variance is not used? When the variance of Z(a) is low, the variance of the parametric estimate should be low also. Is this not the case?\n\n\nMinor concerns:\n\nThe color codes of Figure 1 are unclear.
The color of the curves in subfigures (b), (c) and (d) corresponds to the color code of IDS.\n\nThe way in which \rho^2(s,a) is computed in Algorithm 1 is not precisely described. In particular, on page 6, the equation \rho^2(s,a)=Var(Z_k(s,a)) raises some questions: Is \rho evaluated for a particular bootstrap k, or is \rho averaged over the K bootstraps?\n____________________\n\nI read the authors' answers. I increased my rating.\n", "Thank you for the review and the comments.\n\nPlease note that in the meantime, we were able to run experiments on 55 of the Atari games. The new results support our initial findings and are included in the updated version of the paper.\n\nIt is correct that in the bandit setting, the information-gain function with the unnormalized noise function (rho(s,a)) leads to the correct scaling of the regret-information ratio, such that the regret of IDS can be bounded. However, it is not clear that this is necessarily the right choice when used in combination with deep reinforcement learning. In fact, the scaling of the reward differs significantly from game to game, which leads to different noise levels and values for the information gain function \log(1+\sigma^2_t(s, a)/\rho^2(s, a)). We found that the normalized noise estimation yields better results and allows the agent to account for numerical differences across environments while favoring the same amount of risk across different games. Importantly, it preserves the signal needed for noise-sensitive exploration and does not introduce a new tuning parameter. It also does not necessarily loosen the connection to IDS, which explicitly allows designing policies by using different information-gain functions.\n\nThe lower bound on the return variance was introduced only for numerical reasons. Further, it prevents the agent from overcommitting to low-variance actions. Even in the bandit case, the strategy degenerates as the noise variance of a single action goes to zero (because that way, the information gain of any action can be made arbitrarily large). Also, since the return variance is normalized, the values of the return variance of different actions are relatively close to 1. Hence, a lower bound of 0.25 does not introduce a significant difference. We note that we did not tune this value at all and selected it heuristically. We also conducted experiments without the lower bound on rho. While the per-game scores may slightly differ, the overall change in mean human-normalized score was only 23%. This is added to the revised version of the paper.\n\nTo clarify the way in which \rho(s, a)^2 is computed in Algorithm 1: The bootstrap heads are used only to compute the predictive parametric uncertainty \sigma(s,a)^2. The return uncertainty \rho(s,a)^2 is computed based only on the output Z(s,a) of the distributional head. We have added the exact formula for Var(Z(s,a)) at the end of page 6 in the paper.\n\nCan you also please clarify your note about the color codes in Figure 1?\n", "Thank you for the review and the comments.\n\nWe would first like to report that in the meantime we were able to run our experiments on 55 Atari games simulated via the OpenAI gym interface.
The result table is updated in the revised version of our paper and supports our initial findings: The homoscedastic DQN-IDS achieves a score of 757;187 (%mean; %median), and the heteroscedastic C51-IDS achieves 1058;253 which is competitive with IQN (1048; 218).\n\nRegarding the concern that the gain of C51-IDS is due to more exploitative actions: It is true that the main difference between DQN-IDS and C51-IDS is that C51-IDS tends to favor actions with lower return uncertainty (risk). However, the improved performance is unlikely to be due to more extensive exploitation. First of all, the results in Table 1, 3 and 4 are based on evaluation scores. These evaluation scores are obtained by running the agents with an evaluation policy which is computed in the same way for both DQN-IDS and C51-IDS and acts greedily w.r.t. the mean of all bootstrapped heads (Eq. 10). If C51-IDS was only focusing at exploitation during training (i.e. the data-collection process, while the IDS policy is being run), it would not be able to explore sufficiently and would likely converge to a suboptimal policy. Hence we would observe worse evaluation scores compared to DQN-IDS, which is not the case demonstrated by the overall results. Furthermore, even though actions with lower return uncertainty have higher information gain (as computed by C51-IDS), this does not necessarily lead to exploitation, as the choice additionally depends on the amount of parametric uncertainty as well as the ratio between regret and information gain (see also the Gaussian process example in Fig. 1). Additionally, it is not necessarily true that an action with a lower return uncertainty would be the greedy one.\n\nIn terms of the comparison between Bootstrapped DQN and DQN-IDS, we previously ran some experiments on Bootstrapped DQN using the Adam optimizer and observed very little difference compared to RMSProp. We agree that a fair comparison would require running Bootstrapped DQN with the Adam Optimizer. We have corrected our claim in the paper. However, since this is not the focus of our paper and given the available computational resources, we will be unable to include Bootstrapped DQN results with the Adam optimizer over all 57 Atari games. We will also release the code after the final decision, which includes our implementation of Bootstrapped DQN.\n", "Thank you for the review and the suggestions.\n\nWe first like to report, that in the meantime we were able to run our experiments on 55 Atari games simulated via the OpenAI gym interface. The result table is updated in the revised version of our paper and supports our initial findings: The homoscedastic DQN-IDS achieves a score of 757;187 (%mean; %median), and the heteroscedastic C51-IDS achieves 1058;253 which is competitive with IQN (1048; 218).\n\nTo clarify the concern raised on propagating the distributional loss: We emphasize that we chose not to propagate the distributional loss into the full C51-IDS network and use the C51 distribution only for control. This allows us to isolate the effect of noise-sensitive exploration and gives a fair comparison between DQN-IDS and C51-IDS. This is not a limitation of our approach and we would expect an additional performance gain by propagating distributional gradients computed on a distributional loss like C51 or QR-DQN. This remark has been added to the paper.\n\nWe also conducted experiments without the lower bound on rho. While the per-game scores may slightly differ, the overall change in mean human-normalized score was only 23%. 
This is also mentioned in the revised version of the paper.\n\nIn terms of the choice of parametric uncertainty estimator, we selected Bootstrapped DQN since it allows computing the predictive distribution variance without the need for any sampling. We also briefly experimented with Neural Bayesian Linear Regression (Snoek et al, 2015), but we found Bootstrapped DQN to yield better results. However, as discussed in the related work section, we acknowledge there are other ways of estimating parametric uncertainty, such as noisy nets, Monte Carlo methods, Bayesian Dropout, etc.\n\nThe comparison to intrinsic motivation is re-phrased in the updated version of the paper.\n", "Thank you for the comment. The currently reported range of games was chosen in the following way. We first selected 3 games on which convergence was relatively quick (BeamRider, RoadRunner, Enduro) so that we could more easily tune our algorithm. The rest of the games were chosen as a combination of games on which Bootstrapped DQN and C51 achieve improvement over the baseline. In particular, we wanted to evaluate the homoscedastic version of our algorithm (DQN-IDS) against the best scores that Bootstrapped DQN achieves. Additionally, high C51 scores indicate that C51 achieves a good estimate of the return distribution, and we wanted to test whether our algorithm (C51-IDS) can benefit from this and improve over C51. Note that the selection also includes games on which C51 achieves poor results.\n\nWe are currently in the process of evaluating our method on more games, and we expect to obtain further results within the rebuttal period. The scores will be included in the revised version of the paper.\n", "I like the idea of this paper of extending Information-Directed Sampling to large state spaces. I also appreciate the computational constraints, and why the authors decided to test on only 12 Atari environments, but I was a bit perplexed by the choice of environments. Shouldn't the games that are known to be particularly hard to explore, such as Montezuma's Revenge, Pitfall and PrivateEye, have been evaluated? The games that the paper tested on are not actually hard exploration problems (except Frostbite, arguably). " ]
[ 7, 7, 7, -1, -1, -1, -1, -1 ]
[ 4, 3, 4, -1, -1, -1, -1, -1 ]
[ "iclr_2019_Byx83s09Km", "iclr_2019_Byx83s09Km", "iclr_2019_Byx83s09Km", "BkxudvgP27", "B1xRtDI937", "SJxIO823n7", "rylspvI-9m", "iclr_2019_Byx83s09Km" ]
iclr_2019_ByxBFsRqYm
Attention, Learn to Solve Routing Problems!
The recently presented idea to learn heuristics for combinatorial optimization problems is promising as it can save costly development. However, to push this idea towards practical implementation, we need better models and better ways of training. We contribute in both directions: we propose a model based on attention layers with benefits over the Pointer Network and we show how to train this model using REINFORCE with a simple baseline based on a deterministic greedy rollout, which we find is more efficient than using a value function. We significantly improve over recent learned heuristics for the Travelling Salesman Problem (TSP), getting close to optimal results for problems up to 100 nodes. With the same hyperparameters, we learn strong heuristics for two variants of the Vehicle Routing Problem (VRP), the Orienteering Problem (OP) and (a stochastic variant of) the Prize Collecting TSP (PCTSP), outperforming a wide range of baselines and getting results close to highly optimized and specialized algorithms.
accepted-poster-papers
The paper presents a new deep learning approach for combinatorial optimization problems based on the Transformer architecture. The paper is well written and several experiments are provided. A reviewer asked for more intuition for the proposed approach, and the authors have responded accordingly. Reviewers are also concerned with scalability and the theoretical basis. Overall, all reviewers were positive in their scores, and I recommend accepting the paper.
train
[ "rkl7nk8QkN", "Bke7cL5jnX", "rylWKa1hCX", "HkgKKfy56m", "Bye8WGJcaX", "SkePlW15aQ", "rJeXVEsdnm", "BkeH3bYNn7" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Wonderful - I have updated my score accordingly.", "This paper proposes an alternative deep learning model for use in combinatorial optimization. The attention model is inspired by the Transformer architecture of Vaswani et al. (2017). Given a distribution over problem instances (e.g. TSP), the REINFORCE update is used to train the attention model. Interestingly, the baseline used in the REINFORCE update is based on greedy rollout using the current model. Experimentally, four different routing problems are considered. The authors show that the proposed method often outperforms some other learning-based methods and is competitive with existing (non-learned) heuristics.\n\nOverall, this is a good piece of work. Next, I will touch on some strengths and weaknesses which I hope the authors can address/take into account. My main concern is the lack of comparison with Deudon et al. (2018).\n\nStrengths:\n- Writing: beautifully written and precise even with respect to tiny technical details; great job!\n\n- Versatility: the experimental evaluation on four different routing problems with different kinds of objectives and constraints, different baseline heuristics, etc., is quite impressive (irrespective of the results). The fact that the proposed model can be easily adapted to different problems is encouraging, since many real-world operational problems may be different from textbook TSP/VRP, and hard to design algorithms for; a learned algorithm can greatly expedite the process. This versatility is shared with the model in Dai et al. (2017) which applied to different graph optimization problems.\n\n- Choice of baseline: the use of the greedy policy is definitely the right thing to do here, as one wants to beat \"simpler\" baselines.\n\n- Results: the proposed method performs very well and does not seem hard to tune, in that the same model hyperparameters work well across different problems.\n\nWeaknesses:\n- Comparison to Deudon et al. (2018): I believe the authors should do more work to compare against Deudon et al. (2018). This includes expanding the sentence in related work, describing the differences in more detail; perhaps a side-by-side graphical comparison of the two models in the appendix would help; reporting results from or running the code of that paper for the relevant problems (TSP?). This is crucial, since that paper also builds on the Transformer architecture, attention, etc. Its code is also online and seems to have been up for a while (https://github.com/MichelDeudon/encode-attend-navigate). There is quite some overlap, and the reader should be able to understand how the two models/papers differ.\n\n- Intuition: One thing that is lacking here is some intuitive explanation of *why* this particular attention model is a reasonable choice for guiding a combinatorial algorithm. For instance, earlier work such as Pointer networks or S2V-DQN each addressed certain issues with other models of the time (e.g. capturing graph structure in S2V-DQN). If the choice of the model is purely performance-driven, that is completely fine, but then it makes sense to walk the reader through the process that got you to the final model. You do some of that in the ablation study in 5.2, for the baseline. Additionally, I am wondering about why this attention model is good for a combinatorial problem.\n\nQuestions/suggestions:\n- Performance metric: if I understand correctly, Table 1 reports objective values. 
Could you additionally report optimality gaps compared to the best solution found *across* methods (including Gurobi, especially when it solves to optimality for the smaller problems/all of TSP)? Otherwise, it is very hard to interpret the differences in absolute objective values across methods.\n\n- Baseline: could you use a non-learned baseline (e.g. 2-opt for the case of TSP) at the beginning of the training (then go to your learned but greedy baseline)? Might this give a stronger baseline at the beginning and accelerate training?", "I appreciate the updates to the paper. My current assessment stands.", "Thank you for appreciating our paper!\n\n- Please let us motivate why we reuse a large part of the Transformer architecture. We think one of the successes of Deep Learning is the reduction of the need for manual feature engineering. We would like to avoid replacing feature engineering by task-specific model engineering. Since the encoder has the generic task of learning node representations, we borrow the powerful Transformer architecture (interpreting it as an instance of a graph neural network). The decoder, however, is different and suitably adapted to the problem at hand.\n- Thank you for the detailed suggestions: we updated the Dai et al. reference and expanded on the AlphaGo comparison in the updated paper.", "Thank you for seeing the importance of the problem and the value of showing the broad applicability. Please let us address your concerns.\n\n- Scalability\nScalability is indeed a very important direction for further research. We think that the way heuristics (like the ones you mention) scale almost linearly is by considering the problem locally, e.g. by local search or by limiting the set of edges per node (e.g. consider a sparse graph). A very promising approach would be to combine these ideas with learning, e.g. by learning how to perform local search (rather than a construction as we do here) on a sparse graph. We think our work is a step in this direction by using an architecture that could be extended to operate (potentially locally) on a (sparse) graph structure, and a powerful algorithm to train with the rollout baseline.\n\n- Insufficient comparisons\na.\nYou are right that Gurobi may spend significant time proving optimality after finding the solution. However, we are not sure if reporting the time the solution is found as if it were the run time is the right thing to do: the algorithm cannot stop (without sacrificing performance) at this time since it has no way to know that the current solution is optimal. By the same argument, we could also report the time a solution was first found (or sampled) in a heuristic search procedure, but this is a measure 'in hindsight' which does not constitute a practical algorithm.\nNevertheless, it's a good point that Gurobi may find good solutions early, which we can use 'heuristically' by setting a time limit or increasing the MIP gap (stop when the solution is proven within x % of optimal). We found that increasing the MIP gap (to as much as 5%) for TSP reduced running time by at most 20%. A time limit of 1s makes no difference for TSP20/50 but results in no feasible solution being found in some cases for TSP100. A larger time limit has no effect. For the OP and the PCTSP, however, we can trade off time for performance, but with limited success for larger instances (we added results for 1s, 10s and 30s time limits to the paper).\nb.\nWe thought Concorde/LKH would not add much as Gurobi already finds optimal solutions very quickly.
However, following your suggestions we ran the experiments and added the results, being that Concorde is slower for smaller instances but 6x faster for TSP100. LKH empirically finds optimal results but takes slightly longer than Gurobi.\n\n- What does attention buy\nThank you for this suggestion (which was also noted by R1), this is indeed something that was missing which we have added to the discussion section of the paper.", "Thank you for reviewing and appreciating our paper! Please let us address your concerns; we have updated the paper according to your suggestions.\n\n- Comparison to Deudon et al. \nWe would like to emphasize that Deudon et al. (2018) is *concurrent* work that actually appeared *after* we released an early version (as online preprint) of this paper. However, we agree that to the reader the comparison is relevant and we have updated the paper to include an explanation of the differences. Also, since results in their paper are not directly comparable, we ran their code with the same number of iterations and samples as we do (this improved the results). We have added the numbers in Table 1. \n\n- Intuition\nThank you for this helpful suggestion (which was also mentioned by R3). Focusing on the technical parts of the paper and the results this is indeed something that we overlooked. We have added this to the discussion section.\n\n- Results as percentages\nNot reporting the percentages was merely a practical matter to keep things clear and save space. We have added the percentages in the updated paper and allocated more space for the table to keep things as clear as possible. \n\n- Baseline\nIndeed, using a known algorithm as baseline is a good suggestion. This is an interesting direction for future work.", "The paper presents an attention-based approach to learning a policy for solving TSP and other routing-type combinatorial optimization problems. An encoder network computes an embedding vector for each node in the input problem instance (e.g., a city in a TSP map), as well as a global embedding for the problem instance. The encoder architecture incorporates multi-head attention layers to compute the embeddings. The decoder network then uses those embeddings to output a permutation of the nodes which is used as the solution to the optimization problem. The encoder and decoder are trained using REINFORCE to maximize solution quality. Results are shown for four problem types -- TSP, vehicle routing, orienteering problem, and stochastic prize collecting TSP. \n\nPositive aspects of the paper: The problem of learning combinatorial optimization algorithms is definitely an important one as it promises the possibility of automatically generating special purpose optimizers. Showing experimental results for different problem types is useful as it gives evidence for broad applicability. The paper is well-written, the related work section is nice, and the background material is explained well enough to make it a self-sufficient read. \n\nI have two main criticisms:\n1. Scalability of the approach: Focusing on the TSP experiments, the problem sizes of 20, 50, and 100 are really trivial for a state-of-the-art exact solver like Concorde or heuristic algorithm like LKH. And there have already been many papers showing that RL can be used for small-scale TSP and other problems (many are cited in this paper). At this point the interesting question is whether an RL approach can scale to much bigger problem instances, both in terms of solution quality as well as inference running time. 
For example, the DIMACS TSP Challenge problem instances have sizes up to 10^7 cities. New heuristics used with LKH (e.g. POPMUSIC) can scale to such sizes and empirically show complexity that is nearly linear with respect to the number of cities. It seems that the proposed approach would have quadratic complexity, which would not scale to much bigger problem instances. Table 2 also suggests that the solution quality (optimality gap) becomes worse for bigger sizes. If there was strong evidence that the approach could scale to much larger instances, that would have added to the novelty of the paper.\n\n2. Insufficient comparisons: \na. The comparison to Gurobi's running time in Table 1 is misleading because in addition to outputting a solution, it also outputs a certificate of optimality. It is possible that Gurobi finds the optimal solution very quickly but then spends a large amount of time proving optimality. Since RL approaches don't prove optimality, it would be more fair to report Gurobi's time to first reach the optimal solution (and disregard proving time). This may turn out to be much smaller than the times reported in Table 1. \nb. It would be good to compare against the state-of-the-art TSP-specific algorithms (Concorde, LKH) as well. Even if a general-purpose RL approach does not beat them, it would be good to assess how much worse it is compared to the best expert-designed custom algorithms so that the tradeoff between human expertise and solution quality / running time is clear. \n\nIt would also be useful to give insight into what does attention buy for the kinds of problems considered. Why do we expect attention to be helpful, and do the results match those expectations?\n", "This paper is one of a sequence of works trying to learn heuristics for solving combinatorial optimisation problems. Compared to its predecessors, its contributions are three-fold. First, it introduces a tweak on the REINFORCE learning algorithm, outperforming more complicated methods. Second, it introduces a new model for combinatorial tasks which delivers interesting results on several tasks which are varied though related. Finally, it evaluates this model on many tasks.\n\n****Quality and clarity****\nThis is a very high-quality paper. \nThe writing is clear and sharp, and the reading experience is quite enjoyable (the witty first paragraph sets the tone for what is to follow), even if the text is at times a bit verbose. \nAnother point to commend is the honesty of the paper (see e.g. the comment on the performance of the model on TSP vs specialised solvers such as Concord).\nThe related work section is complete and well documented.\nFinally, the experimental results are clearly presented and well-illustrated.\n\n****Originality and significance****\nOn the theoretical side, the contributions of this paper are interesting but not ground-breaking. The REINFORCE tweak is close to other algorithms that have been tried in the last few years (such as indeed the one presented in Rennie et al, 2016). The model architecture, while successful, is not a large departure from the Transformer presented in Vaswani et al, 2017.\n\nMore significant is the complete set of experiments on a varied subset of combinatorial tasks, which showcases one of the promises of using machine learning for combinatorial optimisation: reusability of a single model for many tasks.\n\n****Conclusion****\nOverall, this is a nice, very well-written paper. 
Its contributions, though not ground-breaking, are significant to the field, and constitute another step in the right direction.\n\nPros\n- high-quality writing\n- very clear\n- complete experiments on a variety of tasks, some of which do not have optimal solvers\n- honest assessment of the model\n\nCons\n- the theoretical contributions are not ground-breaking (either the tweak on REINFORCE or the model architecture)\n- the model is still far from obtaining meaningful results on TSP (although it's interesting to compare to previous learned models, only solving problems with 100 nodes also illustrates how far we have to go...)\n\nDetails\n- Dai et al. has been published at NIPS and is no longer an arXiv preprint\n- the comparison to AlphaGo should either be expanded upon or scratched. Although it could be quite interesting, as it is, it's not very well motivated." ]
[ -1, 7, -1, -1, -1, -1, 6, 7 ]
[ -1, 5, -1, -1, -1, -1, 5, 5 ]
[ "SkePlW15aQ", "iclr_2019_ByxBFsRqYm", "HkgKKfy56m", "BkeH3bYNn7", "rJeXVEsdnm", "Bke7cL5jnX", "iclr_2019_ByxBFsRqYm", "iclr_2019_ByxBFsRqYm" ]
iclr_2019_ByxGSsR9FQ
L2-Nonexpansive Neural Networks
This paper proposes a class of well-conditioned neural networks in which a unit amount of change in the inputs causes at most a unit amount of change in the outputs or any of the internal layers. We develop the known methodology of controlling Lipschitz constants to realize its full potential in maximizing robustness, with a new regularization scheme for linear layers, new ways to adapt nonlinearities and a new loss function. With MNIST and CIFAR-10 classifiers, we demonstrate a number of advantages. Without needing any adversarial training, the proposed classifiers exceed the state of the art in robustness against white-box L2-bounded adversarial attacks. They generalize better than ordinary networks from noisy data with partially random labels. Their outputs are quantitatively meaningful and indicate levels of confidence and generalization, among other desirable properties.
accepted-poster-papers
* Strengths This paper studies adversarial robustness to perturbations that are bounded in the L2 norm. It is motivated by a theoretical sufficient condition (non-expansiveness) but rather than trying to formally verify robustness, it uses this condition as inspiration, modifying standard network architectures in several ways to encourage non-expansiveness while mostly preserving computational efficiency and accuracy. This “theory-inspired, practically-focused” hybrid is a rare perspective in this area and could fruitfully inspire further improvements. Finally, the paper came under substantial scrutiny during the review period (there are 65 comments on the page) and the authors have convincingly answered a number of technical criticisms. * Weaknesses One reviewer and some commenters were concerned that the L2 norm is not a realistic norm to measure adversarial attacks in. There were also concerns that the empirical level of robustness of the network was too weak to be meaningful. In addition, while some parts of the experiments were thorough and some parts of the paper were well-presented, the quality was not uniform throughout. Finally, while the proposed changes improve adversarial robustness, they also decrease the accuracy of the network on clean examples (this is to be expected but may be an issue in practice). * Discussion There was substantial disagreement on whether to accept the paper. On the one hand, there has been limited progress on robustness to adversarial examples (even under simple norms such as the L2 norm) and most methods that do work are based on formal verification and therefore quite computationally expensive. On the other hand, simple norms such as the L2 norm are somewhat contrived and mainly chosen for convenience (although doing well in the L2 norm is a necessary condition for being robust to more general attacks). Moreover, the empirical results are currently too weak to confer meaningful robustness even under the L2 norm. * Decision While I agree with the reviewers and commenters who are skeptical of the L2 norm model (and would very much like to see approaches that consider more realistic threat models), I decided to accept the paper for two reasons: first, doing well in L2 is a necessary condition for doing well in more general models, and the ideas and approach here are simple enough that they might provide inspiration in these more general models as well. Additionally, this was one of the strongest adversarial defense papers at ICLR this year in terms of credibility of the claims (certainly the strongest in my pile) and contains several useful ideas as well as novel empirical findings (such as the increased success of attacks up to 1 million iterations).
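The sufficient condition behind the paper can be stated concretely: if the map from input to logits is nonexpansive (1-Lipschitz in L2), then a top-two logit gap g certifies that no perturbation of L2 norm below g/sqrt(2) can flip the prediction, since the pairwise logit difference can move by at most sqrt(2) times the perturbation norm. A small numpy check of that certificate on a toy 1-Lipschitz linear model (a sketch, not the paper's code):

```python
import numpy as np

def certified_radius(logits):
    """Certified L2 radius for a classifier whose input-to-logits map is
    1-Lipschitz in L2: a top-two gap g keeps the argmax fixed for any
    perturbation with ||delta||_2 < g / sqrt(2)."""
    runner_up, winner = np.sort(logits)[-2:]
    return (winner - runner_up) / np.sqrt(2.0)

# Toy 1-Lipschitz "network": a linear map with spectral norm exactly 1.
rng = np.random.default_rng(0)
W = rng.standard_normal((10, 784))
W /= np.linalg.norm(W, 2)              # divide by largest singular value
x = rng.standard_normal(784)
r = certified_radius(W @ x)
label = np.argmax(W @ x)
for _ in range(1000):                  # random probes inside the certified ball
    d = rng.standard_normal(784)
    d *= 0.999 * r / np.linalg.norm(d)
    assert np.argmax(W @ (x + d)) == label
```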
train
[ "SJehfLMzlE", "BkeP1htbxN", "H1ezI47-gV", "BJg5F5RxgV", "HygKL1jgg4", "HyeXrBUggN", "Syx6Z5egeN", "BJgb-yNJgN", "r1lcLW2TkN", "HJlmeQpHkN", "HJx15w00RQ", "Syg_IW86hm", "ryxW1EcRRQ", "Bkly6cc60Q", "HJgJxX5TRX", "r1e3-5Hh0m", "B1xLfvQnRm", "HkxISreiCm", "Bklz239KCX", "H1eKGidtnm", "B1lepXHlRm", "HylOZ5n3pX", "rJgv81jhpm", "Skl94a53p7", "BJx0pEqh67", "HylXLYOn6X", "S1xf3ovhpm", "S1lq-Lwn67", "SyxZxI6op7", "BkxAVy9sTm", "SJgHFyIip7", "B1xJuFmipm", "rJgJYHXjTX", "HygXxYFcT7", "H1xEPUkq6Q", "B1gBAL19aX", "SJxA1llFTQ", "H1gWe1xYTm", "SyxTL_yYp7", "ryxxv85_Tm", "SJlTkLwDTm", "HJgX7SEUT7", "r1e8nj0STm", "rkgG-58VpQ", "ryeApYUEpX", "BkxNSYINam", "HkgYkYLVaQ", "B1gzYu84p7", "Syg9SDL4a7", "Syef4UcmaQ", "r1lUDfqmam", "B1lLWoxqnQ", "rklwt0punX", "HygkWS4I3m", "H1xLD-frhX", "BJeSl4S0q7", "SJgOaOm0qX", "rJlHciFfqX", "SkggPmD-q7", "SkenYzvZ57", "SkxbKZwWc7", "S1ekOTSbc7", "BJgI-hH-9Q", "rkgisqr-9Q", "BJl1rFr-9X" ]
[ "author", "public", "author", "public", "author", "public", "author", "public", "public", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "public", "author", "public", "author", "public", "official_reviewer", "public", "author", "author", "author", "public", "author", "public", "author", "public", "public", "author", "public", "author", "public", "author", "author", "official_reviewer", "official_reviewer", "public", "public", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "public", "public", "official_reviewer", "author", "author", "public", "author", "public", "author", "author", "author", "author", "public", "public", "public", "public" ]
[ "Regarding the specific points:\n-- We chose the best available baselines, i.e. classifiers from Madry et al. (2017), which happen to be trained with L_inf adversary. There seem to be no available models that are trained with L2 adversary, and the only paper that talked about such models, i.e. https://arxiv.org/pdf/1805.12152.pdf, reported L2 robustness that is weaker than L2 robustness achieved by training with L_inf adversary. Again more details are in the discussion with Aurko.\n-- Our Model 4's were trained with L_inf PGD with default hyperparameters from Madry et al.'s GitHub. We did not tune the hyperparameters, and you are right that there is probably room for improvement.\n\nOf course one may question everything on this forum. Luckily numbers never lie, and that is why we publish our models so that anyone can verify our results.\n", "Thanks for making things open-source! Greatly appreciate it.\n\nI agree with most things in your response. Yes, it came up because of the possibility that l2-training would make the network more robust to l2-perturbations than l-inf training.. which (l2-training) I feel should be used as a relevant baseline. To summarize, if I understand correctly, you did not use l-2 training as a baseline because you believe from past literature that l-inf training is better for l2-attacks. Is that a correct statement to make?\n\nI also raised another point: I think you might have missed this -- but have you tried studying the effect of varying the perturbation bound of the l-inf balls on the l-2 robustness in the Madry et al. setting? Since, you are not directly optimizing for l2-robustness, it makes sense to tune hyperparameters when making tangential comparisons. \n\nI do not want to criticize/attack your paper. I found it quite interesting and I am playing Devil's advocate to answer some questions I have and tease out things for my better understanding. Thanks for engaging the discussion.\n\nPS. I would not be very inclined to believe an anonymous reader claiming to have invalidated or validated your robustness claims, especially. Because of the nature of the forum, I believe anonymous comments (including mine) should be taken with a grain of salt. ", "Please see the dropbox link at the first paragraph of section 3.\n\nOne commenter has kindly tested our robustness numbers. Please see the comment titled \"Very well done evaluation\".\n\nIn the paper we stated that our model 4's were trained with linf adversary, but we made no claim about training with linf adversary vs training with l2 adversary. This only came up in the discussion with Aurko as an empirical observation.\n\nWe disagree with the statement that MNIST is only a sanity check. From robustness perspective, MNIST is far from a solved problem.", "Oops, I did indeed read the wrong table. Thanks for pointing that out.\n\nTo continue the discussion, your responses to Aurko Roy seem to be centered around MNIST. From what I believe, and what has been advocated in recent literature on adversarial defenses, MNIST should only be used as a \"sanity check\". Here, you seem to be making the claim that in general, across datasets, l2 training is worse than l-inf training. Can you confirm this for other data-sets? \n\nI'm curious because it seems highly unintuitive that l2 training would do worse than l-inf training for l-2 robustness.\n\nThe eps at which adversaries are generated governs the robustness of the model. Have you tried varying the size of the l-inf/l-2 balls for Madry et al. 
style training and see if the numbers don't improve much? I strongly feel that is a hyperparameter that needs to be tuned, more so when comparing the performance of an l-inf defense against an l-2 attack.\n\nWould you be open to making your models open-source?", "91.7% is from our Table 1, and it is for Madry's MNIST model with l2 epsilon of 3 and after 100 CW iterations. The number you were looking for is 13.9% in our Table 2.\n\nIt is true that Madry et al.'s two papers are not reporting on the same CIFAR models. However, the contrast between below-10% robust under l2 epsilon of 100/256 and 39.76% robust under l2 epsilon of 320/256 is so large that one of the two is likely incorrect. Our measurement of their model suggests that Fig 6(d) in Madry et al. (2017) is likely the incorrect one.\n\nPlease also see our discussion with Aurko Roy, particularly the last few rounds regarding training with l2 adversary vs training with linf adversary.", "Have you checked with the authors that this is the case?\n\nAlso, Table 4 has no entries for the l2-accuracy for an l-inf trained model I think. Correct me if I am wrong. So, there is no \"comparable\" version that corresponds to the numbers in the first version.\n\nAlso, your Table indicates 91.7% for eps=1.5 at 100 iterations, while Table 4 suggests 40% at eps=1.25 (I suspect it would be much worse at eps=1.5). While they don't provide code, I am not sure the number of iterations for the entries in Table 4 is much more than 100. ", "We too were confused about Fig 6(d) in Madry et al. (2017), until we saw \nhttps://arxiv.org/pdf/1805.12152.pdf\nwhich is from the same authors. There is evidence that they made mistakes in computing x coordinates when plotting Fig 6(d) in Madry et al. (2017), and underreported the robustness.\n\nThe last two rows in Table 4 in\nhttps://arxiv.org/pdf/1805.12152.pdf\nshow substantial robustness under l2 epsilon of 80/256 and 320/256. These results suggest that 100/256 would be a low bar, and the weak curve of Fig 6(d) in Madry et al. (2017) is highly unlikely.\n\nNow consider our measurements of the Madry model in our Table 2. The numbers are in line with numbers in\nhttps://arxiv.org/pdf/1805.12152.pdf\n\nSo the most plausible explanation is that the authors used incorrect x coordinates when plotting Fig 6(d) in Madry et al. (2017). If we were to venture a guess, maybe they only added up deltas in the red channel rather than all three RGB channels.\n", "This doesn't make sense. If you look at Figure 6 in https://arxiv.org/pdf/1706.06083.pdf , at eps=100 the accuracy is nearly zero and note that this is at less than 100 steps of PGD. That eps=100 from Madry et al. corresponds to eps=100/255 if you normalize it (note that Madry's CIFAR pixels are not normalized).\n\nIn your paper, at eps=1.5 (in Madry's scaling, it's eps=380+), you quote ~14% accuracy for Madry. Isn't this strange? That even after you increase the budget by nearly 4 times, your accuracy is still quite the same. ", "I think it is unreasonable and a poor direction for members of the community to start posting requests for comparisons to concurrent submissions, especially ones made to the same conference! \n\nTo be clear I am in no way associated with this paper, but saw this while browsing. ", "Thanks for the thorough replies. I will keep these clarifications and updates in mind as I reassess the paper and discuss with other reviewers. ", "We would like to thank the reviewer for the upgraded rating and for the kind comments. 
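A recurring source of confusion in the exchange above is units: Madry et al. report CIFAR-10 epsilons on unnormalized [0,255] pixels, while this paper measures L2 norms on [0,1]-normalized inputs and over all three color channels. Two small helpers that make the conversions explicit (illustrative values only):

```python
import numpy as np

def l2_eps_to_normalized(eps_255):
    """An L2 budget measured on [0,255] pixels, rescaled to the [0,1]
    convention: the L2 norm scales linearly with pixel scale."""
    return eps_255 / 255.0

def l2_norm_all_channels(delta):
    """L2 norm of a perturbation over every entry, i.e. across all RGB
    channels; summing squares over one channel only would understate it."""
    return float(np.sqrt(np.sum(np.asarray(delta, dtype=np.float64) ** 2)))

print(l2_eps_to_normalized(100.0))   # eps=100 on raw CIFAR pixels ~ 0.392
d = np.full((3, 32, 32), 0.01)       # uniform toy perturbation, NCHW-style
print(l2_norm_all_channels(d))       # ~0.554
print(l2_norm_all_channels(d[0]) * np.sqrt(3))  # same, via one channel * sqrt(3)
```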
And we thank the reviewer again for the valuable suggestions that help us improve this paper and future work. We'd be happy to answer further questions on the loss function.\n", "This paper presents a combination of methods that, together, yield neural networks that are robust to small changes in L2 distance. The main idea is to ensure that changing the input by a bounded L2 distance never changes the output by more than the same L2 distance. Then, the difference between the highest-scoring class and the second-highest scoring class provides a bound on how much the input must change. The trivial way to do this is to rescale the final output layer so that all of the magnitudes are very small; however, this would give no additional robustness at all. To counteract this, the paper introduces several additional heuristics for increasing the gap between the highest-scoring class and the second-highest scoring one. Adversarial training can be used to make the models even more robust. \n\nExperimental results on MNIST and CIFAR look impressive, although most are in terms of L2 distance, while most previous work optimizes L_infinity distance.\n\nThe methods described by this paper are similar to max-margin training, which is already known to be optimally robust to L2 perturbations for linear models (e.g., Xu et al. (2009)). This paper would be stronger with more discussion and analysis of this connection, although that might be work for a future paper.\n\nAlthough the method relies heavily on heuristics, the empirical results are promising. The analysis of the contribution of the heuristics is fairly thorough as well. The MNIST results are strong. The CIFAR results show improved robustness, though at reduced accuracy on natural images. A combination of robust and non-robust classifiers improves the accuracy somewhat.\n\nOverall, this is interesting work with promising empirical results. The biggest weaknesses are:\n\n- Limited theory. The loss function is particularly strange. \n\n- The majority of the comparisons focus on L2-robustness, but are comparing to a model optimized for L_infinity-robustness. (Thankfully, the authors also do some comparisons on L_infinity-robustness.)\n\n- Robustness comes at a cost in accuracy, though this is not uncommon for adversarial training.\n\nThe biggest strengths are:\n\n- Strong empirical robustness\n\n- Analysis of combinations of methods and their interactions: different loss function, different architecture, different weight constraints, and adversarial training are all evaluated together and separately.\n\n- Wide variety of experiments, including generalization on training data with noisy labels and analysis of the confidence gaps.\n\n\n\nQuestions for the authors:\n\n- For equation (4) in the loss function, why would rescaling the layers in the middle of the network be equivalent to a linear transformation (u1, u2, ..., u_K) of the output?\n\n- In equation (6), what is the average averaging over?\n\n- The connection between confidence gap and robustness is discussed empirically, as a correlation, rather than theoretically, as a bound. Doesn't the confidence gap give a lower bound on the minimum perturbation to change the predicted class?\n\n---------\n\nEDIT: After the author response, I remain positive about this paper. In addition to addressing my concerns, I admire the authors' patience in answering the concerns of other reviewers and commenters. 
I think that this is a solid paper that makes a good contribution to the literature on adversarial machine learning.", "Yes, the reference http://www.jmlr.org/papers/volume10/xu09b/xu09b.pdf is the one I was referring to.\n\nThank you for your answers. That addresses most of my concerns, though I may have to think more carefully about the loss function.", "Hi Chris,\n\nWe are sorry that we will stop responding to your questions. We feel that most readers of our paper and this page do not share your confusion, and further discussion would not help the purpose of this forum. We feel that our two rounds of comments are sufficiently clear.\n\nAs you requested, we will put a pointer on https://openreview.net/forum?id=HkxAisC9FQ so that you or anybody else can continue the discussion there.\n", "Hi again,\n\nRegarding gradient obfuscation. In classification f(x) is a vector, not a scalar. There is no ordering on vectors, and so there is no notion of monotonicity. So saying \"sigmoid is monotonically increasing, we have argmax(sigmoid(f(x0+delta)))==argmax(f(x0+delta))\" is false. Sigmoid is applied pointwise to the *vector* f(x). So yes, you could use a pointwise sigmoid layer to improve robustness. If f(x) were a scalar, I agree that adding a sigmoid would do nothing, but it isn't.\n\nIt looks like the authors in the other ICLR paper also used black box attacks (the Boundary attack), and found the black box attack to be a weaker attack than using PGD. If adding a sigmoid truly were gradient obfuscation, then the black box attacks should be able to get around the hypothesized obfuscation. But they don't. Again I suggest you comment on that paper so that the authors there can rebut your criticisms.\n\nRegarding Gouk et al: Straight from Section 4.1 you pointed to, they talk about using the L-1 and L-inf norms as well as the L-2 norm. In their experiments they compare all three norms.\n\nMy point regarding L-2 and L-inf norms is this. To summarize your method: You want to ensure the 2-norm of W is less than 1. This is equivalent to keeping the 2-norm of W^t W (and W W^t) less than one. Since the infinity norm bounds the 2-norm of a matrix, it suffices to keep the infinity norm of W^t W (or W W^t) small. Which leads to two questions:\n\n1) Are you not concerned that the gap between the 2-norm and the infinity norm can be huge, growing in the worst case with the square root of the input dimension?\n\n2) Since all other matrix norms bound the spectral norm, why go to the work of computing W^t W and W W^t at all, when ||W||_2 is bounded by ||W||_infinity in the first place? Why not just control ||W||_infinity instead, as in Gouk et al?", "Hi Chris,\n\nConsider two classifiers: the first is f(x) and the second is sigmoid(f(x)).\nSuppose we have an adversarial example for the first classifier: f(x) classifies x0 correctly but x0+delta incorrectly. In other words: argmax(f(x0+delta))!=argmax(f(x0)).\nBecause sigmoid is monotonically increasing, we have argmax(sigmoid(f(x0+delta)))==argmax(f(x0+delta)) and argmax(sigmoid(f(x0)))==argmax(f(x0)). Therefore argmax(sigmoid(f(x0+delta)))!=argmax(sigmoid(f(x0))). 
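The chain of equalities above is straightforward to verify numerically: an elementwise strictly increasing map cannot reorder the coordinates of a logit vector, so it preserves the argmax. A quick check:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
for _ in range(10000):
    logits = rng.standard_normal(10)
    # Elementwise strictly increasing maps cannot reorder coordinates,
    # so the top class is unchanged.
    assert np.argmax(sigmoid(logits)) == np.argmax(logits)
    assert np.argmax(np.tanh(logits)) == np.argmax(logits)
```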
In other words, x0+delta is also an adversarial example for the second classifier.\nThat's why adding a final sigmoid layer should have zero effect on a network's robustness, and that's why their finding of \"inserting a sigmoid activation function improved model robustness\" is nothing but gradient obfuscation.\nAccording to Table 3 in their paper, their best model goes from 78.42% error to 100% error when tanh is removed.\n\nYour comments on L2 PGD vs CW describe exactly the phenomenon of gradient obfuscation. 10 PGD iterations being stronger than 1000 CW iterations suggests a situation of vanishing gradient. If CW is used properly, 1000 iterations of it will be a much stronger attack than 10 PGD iterations.\n10 iterations of any attack are far too few for a meaningful robustness evaluation.\n\nWith gradient obfuscation and 10 PGD iterations, all measurements in https://openreview.net/forum?id=HkxAisC9FQ are meaningless.\n\nAs we said in the last comments, the proper way of evaluation is to use CW before sigmoid or softmax and call it with high iteration counts. That is what we do.\n\nYour statement of \"They have achieved better robustness results (in L2) than yours on CIFAR-10, by over 10% at L2 distance 1.5.\" is simply cherry-picking data because it ignores our best model.\nIt's a pointless comparison anyway given that their numbers come from gradient obfuscation and 10 PGD iterations.\n\nIt seems that you have misread the Gouk et al. paper. Please see the last paragraph of Section 3.1 and first paragraph of Section 4.1 in their paper:\nhttps://arxiv.org/pdf/1804.04368.pdf\n\nIt seems that you are still confusing W^T W with W, and your statement of \"what you are really trying to control is ||W||_\\infty\" makes no sense: our models are L2 nonexpansive, not L_inf nonexpansive. As we said in the last comments, please see the text around equation (2) including footnotes. Measurements show that L2NNN's 2-Lipschitz constant is not far below 1, please see our response dated 11/16 to an earlier comment titled \"Estimates of the Lipschitz constant\" and our Figures 3 and 4.\n\nWe addressed batch norm in the appendix, and one simply needs to divide the scaling factors by the max one.\nThe trade-off between robustness and nominal accuracy has been discussed extensively in multiple discussions throughout this page, please read. This trade-off has been observed by previous works including adversarial training and adversarial polytope works, and it remains an open question whether such a trade-off is a necessary part of life. However, one thing we know for sure is that gradient obfuscation is not the answer.\n", "Hi,\n\nThanks for the reply. \n\nRegarding point 1, on that paper's final sigmoid layer. You say that adding a final sigmoid layer will have no effect, but I disagree. Adversarial examples exploit network instabilities by finding directions which push incorrect logits to be large. A final sigmoid layer could clip such growth before the incorrect logit dominates the other labels. That is not gradient obfuscation; it is a reasonable robustness measure. It seems that this is what the other paper is reporting: with the same attack and attack hyperparameters (how else can you fairly compare models?), models with a final sigmoid layer perform better.\n\nRegarding your point 2. I agree it is hard to make a direct comparison between your numbers and theirs. Whether or not CW is better than L2 PGD is debatable. I just checked the Foolbox specs. 
Foolbox uses only 10 iterations in its L2 attack, whereas Foolbox's default CW uses 1000. So, given that the other paper finds consistently that L2 PGD beats CW (with only 10 iterations vs 1000!), it's hard not to conclude that L2 PGD is a better attack. I can't comment on how Foolbox has tweaked their implementation of CW, but I'd hope that they are nearly equivalent.\n\nI don't think it is cherry-picking to say that the other model outperforms yours by 10% for models trained without adversarial training. That is what your and their tables say.\n\nRegarding 3. It is fair to say that your method accounts for the entire input space, whereas theirs accounts for only data sampled from the data distribution. I believe though that the other paper reports all its numbers on test data, not training data, with good results. Given that, it may be that stabilizing a network on only the training data empirically stabilizes on the test data, since these two datasets are supposed to be drawn from the same distribution. It could be that it is not necessary to stabilize a network on the whole input space, but only on the data manifold.\n\nI suggest you comment on the other ICLR submission's OpenReview page, so that those authors have a chance to fairly rebut your criticisms.\n\nRegarding Gouk et al. The Gouk paper does not use power iteration, which you erroneously stated below! They penalize by the L-infinity norm, no power iteration needed. You are also penalizing by the L-infinity norm. They also divide (‘project’) each layer by this norm, same as you. You haven’t addressed how your paper differs from Gouk et al.\n\nI’m aware of the difference between ||W^t W|| and ||W||. Since ||W^t W||_\\infty <= ||W||_1 ||W||_infinity, I don’t see why it is necessary to penalize by ||W^t W||_\\infty when what you are really trying to control is ||W||_\\infty. Why not just penalize by ||W||_\\infty? What do you gain by penalizing with ||W^t W||? Is it that you want to control both the 1-norm and the max-norm? My main point here is: the gap between ||W||_\\infty and ||W||_2 can be large in high dimensions. My feeling is that using the infinity-norm to control the 2-norm is problematic, and your poor test errors haven’t shaken this feeling.\n\nYou have not addressed two of my main points. Regarding my point on batch norm. Do you account for batch norm when computing your matrix norms? And how do you justify your poor test error on the natural images? Ideally a robust network will have good test error and good robustness properties.", "Hi Chris,\n\nThere are a number of issues with \nhttps://openreview.net/forum?id=HkxAisC9FQ\n\n1) Questionable evaluation: Inserting sigmoid layer before attack.\nThe authors stated that \"Prior to the final softmax layer, we found inserting a sigmoid activation function improved model robustness. In this case, the sigmoid layer comprised of first batch normalization (without learnable parameters), followed by the activation function t*tanh(x/t), where t is a single learnable parameter, common across all layer inputs.\"\nThere is something wrong: Adding a final sigmoid layer should have zero effect on a network's robustness, because an adversarial example before would still be an adversarial example after.\nThese statements suggest that the authors applied attacks on the sigmoid outputs, or even worse, that they may have applied attacks on the softmax outputs. 
Such an evaluation setup is a form of gradient obfuscation, and it is well known to artificially slow down gradient-based attacks and create a false sense of security.\nThe proper way of evaluating robustness, as measured by white-box defense, is to apply attacks on the logits themselves, i.e. the direct outputs of the ReLU network (or any final computing layer). That's what we do.\n\n2) Questionable evaluation: Attack setup.\nFor evaluation, the authors use attacks implemented in Foolbox:\nhttps://arxiv.org/pdf/1707.04131.pdf\nhttps://github.com/bethgelab/foolbox\nwhich contains a modified version of the CW attack. The authors stated that \"Hyperparameters were set to Foolbox defaults.\" The Foolbox version of the CW attack uses a default of 1000 iterations. Among all Foolbox attacks, the authors concluded that L2 PGD is the strongest, stronger than Foolbox CW, and therefore used L2 PGD in all reported numbers.\nFirst, the authors should have used the original CW code at https://github.com/carlini/nn_robust_attacks.\nSecond, 1000 iterations are not enough to evaluate robustness, in light of the L2 robustness numbers from Madry et al. (2017) and Tables 1 and 2 in our paper. Neither PGD with low iteration count nor CW with low iteration count reveals the true robustness of a model.\nPoint 1) above makes the situation even worse. With gradient obfuscation, 1000 CW iterations may become equivalent to 100 iterations or less.\n\nConsidering 1) and 2), one has to take their numbers with a grain of salt.\n\n3) Reliance on training data coverage.\nIn our Section 4, we review related works as two big groups. The first group fortifies a network around training data points: this includes both adversarial training like Madry et al. (2017) and gradient regularization like Ross & Doshi-Velez (2017). The second group bounds a network's responses to input perturbations over the entire input space. Our work belongs to the second group.\nTheir proposed method is fairly similar to Ross & Doshi-Velez (2017), and belongs to the first group. The common weakness of the first group is the reliance on training data coverage. While works in this group are able to fortify parts of the input space, specifically flattening gradients around training data points, there exists little control over parts not covered by training data.\n\nYour statement of \"...by over 10% at L2 distance 1.5\" is cherry-picking data: even if accepting their numbers as they are (a big question mark by itself), the difference would be only 1.2%.\n\nRegarding the Gouk et al. paper, please see our response dated 11/12 to an earlier comment titled \"related work\".\n\nRegarding your comments on matrix norm. It seems that you were confusing W^T W with W, please see the text around equation (2) including footnotes. Measurements show that L2NNN's 2-Lipschitz constant is not far below 1, please see our response dated 11/16 to an earlier comment titled \"Estimates of the Lipschitz constant\" and our Figures 3 and 4.\n", "I’d like to direct you to two papers which you may have overlooked.\n\nThe first is another submission to ICLR 2019, “Improved robustness to adversarial examples using Lipschitz regularization of the loss” ( https://openreview.net/forum?id=HkxAisC9FQ ). They have achieved better robustness results (in L2) than yours on CIFAR-10, by over 10% at L2 distance 1.5. Moreover their method doesn’t degrade test error on unperturbed images, whereas your regularized networks have over 20% test error on CIFAR-10. 
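The norm inequalities traded back and forth in this thread can be checked directly: for any real W, ||W||_2^2 equals ||W^T W||_2, which for the symmetric matrix W^T W is bounded by ||W^T W||_inf, and separately ||W||_2^2 <= ||W||_1 ||W||_inf. A numpy sketch on a random matrix (illustrative; this is not the paper's exact bound (2)):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 784)) / 28.0

spec = np.linalg.norm(W, 2)                    # largest singular value
inf_of_gram = np.linalg.norm(W.T @ W, np.inf)  # max absolute row sum of W^T W
holder = np.linalg.norm(W, 1) * np.linalg.norm(W, np.inf)

# Both quantities upper-bound ||W||_2 squared:
assert spec ** 2 <= inf_of_gram + 1e-9
assert spec ** 2 <= holder + 1e-9
# The bound through the Gram matrix is the tighter of the two here:
print(spec, np.sqrt(inf_of_gram), np.sqrt(holder))
```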
I’d hope that a properly robust network should not have significantly worse test error. How do you distinguish your results from this other ICLR submission?\n\nSecond, there is an arXiv paper by Gouk et al., “Regularisation of Neural Networks by Enforcing Lipschitz Continuity” ( https://arxiv.org/abs/1804.04368 ) from this spring which has methods very similar to yours. They enforce either the L-1 or L-infinity norm of the weight matrices to be less than 1, which is nearly what you are doing (you enforce L-infinity on W W^t and W^t W). (You stated earlier in the comments that Gouk et al. uses the power method, but from my reading of that paper this is not true – they use the explicit formulas for the L-1 and L-infinity norms.) And they enforce this constraint by a projection step, which if I understand your paper correctly, is very similar to what you are doing (your formula W’ = W / \\sqrt(b(W)) ). Could you comment on the merits of your approach vis-a-vis Gouk et al.?\n\nIn addition, Gouk et al. accounts for batch normalization – do you? I can’t tell if you have taken batch normalization into account when you compute your matrix norms. In effect batch normalization post-multiplies the weight matrices by a diagonal matrix, which needs to be factored in. If you haven’t, that may explain in part why your test errors are not very good on the unperturbed images.\n\nA general comment about matrix norms, and your title. I can’t help but think that your title is a bit misleading: forcing the L-infinity weights to have norm less than one seems like a very crude way of also bounding the L-2 norm of the weight matrices. Although yes ||A||_2 <= ||A||_\\infty, we also have that ||A||_\\infty <= \\sqrt n ||A||_2. So the gap in the first inequality between these two norms can be very large, especially when we’re talking about convolution matrices with many channels. It seems to me that such a harsh projection, when the gap between these two norms is large, would seriously degrade network performance in practice. This may also explain your poor test errors.", "Summary:\nThe paper presents techniques for training a non-expansive network, which keeps the Lipschitz constant of all layers lower than 1. While being non-expansive, means are taken to preserve distance information better than standard networks. The architectural changes required w.r.t. standard networks are minor, and the most interesting changes are made to the loss minimized. The main claim of the paper is that the method is robust against adversarial attacks of a certain kind. However, the results presented show that a) such robustness comes at a high cost of accuracy for standard examples, and b) even though the network is preferable to a previous alternative in combating adversarial examples, the accuracy obtained in the face of adversarial attacks is too low to be of practical value. Other properties of the networks, explored empirically, are that the confidence of the prediction is indicative of robustness (to adversarial attacks) and that the networks learn better in the presence of high label noise. \nIn short, this paper may be of interest to a sub-community interested in defense against certain types of adversarial attacks, even when the defense level is much too low to be practical. I am not part of this community, hence did not find this part very interesting. I believe the regularization results are of wider interest. 
However, to present this as the main contribution of L2NNN more work is required to find configurations which are resilient to overfit yet enable high training accuracy, and more diverse experiments are required.\nPros:\n+ the idea of a non-expansive network is interesting and important\n+ results indicate some advantages in fighting adversarial examples and label noise\nCons:\n- the results for fighting adversarial examples are not significant from a practical perspective\n- the results for coping with label noise are preliminary and require expansion with more experiments.\n- the method has costs in accuracy, which is lower than standard networks and this issue is not faced with enough attention\n- presentation clarity is medium: proofs for claims are missing, as well as relevant background on the relevant adversarial attacks. The choice to place the related work at the end also reduces presentation clarity.\n\nMore detailed comments:\nPages 1-3: In many places, small proofs are left to the reader as ‘straightforward’. Examples are: the claim in the introduction, in eq. 2, in section 2.2, section 2.3, last line of page 3, etc. While the claims are true (in the cases I tried to verify them long enough), this makes reading difficult and not fluent. For some of these claims I do not see the argument behind them. In general, I think proofs should be brought for claims, and short proofs (preferably) should be brought for small claims. Leaving every proof to the reader as an exercise is not a convenient strategy. \nPage 4: The loss is complex and its terms’ utility requires empirical evidence. The third term is shown to be clearly useful, enabling a trade-off between train accuracy and margin. However, the utility of terms 4) and 5) is not verified. Do we really need both these terms? Cannot we just stay with one?\nThe main claim is robustness w.r.t. “white-box non-targeted L2-bounded attacks”. This seems to be a very specific attack type, and it is not explained at all in the text. Hence it is hard to judge the value of this robustness. Explanation of adversarial attack kinds, and specifically of “white-box non-targeted\nL2-bounded attacks” is required for this paper to be a stand-alone readable paper. Similarly ‘L_\\infty’-bounded attacks, for which results are shown, should be explained.\nTables 1, 2: First, the model architecture used in these experiments is not stated. Second, the accuracy of the ‘natural’ baseline classifier, at least in the MNIST case, is somewhat low – much better results can be obtained with a CNN on MNIST. Third, the accuracies of the suggested robust models are very low compared to what can be obtained on these datasets. Fourth, while the accuracies under attack of the proposed method are better than those of Madry et al., both are quite poor and indicate that the classifier is not useful under attack (from a practical perspective).\nPage 6: The classifiers which share the work between an L2NNN network and a regular more accurate network may be interesting, as the accuracies reported for them are significantly higher than the L2NNN networks. However, the robustness scores are not reported for these classifiers, so it is not possible to judge if they lead to a practical and effective strategy.\nPage 7: For me, the results with partially random labels are the most interesting in the paper. 
The resistance of L2NNN to overfit and its ability to learn with very noisy data are considerably better than the suggested alternatives.\nRelevant work not mentioned: “Spectral Norm Regularization for Improving the Generalizability of Deep Learning” - Yuichi Yoshida and Takeru Miyato, arXiv, 2017.\n\nI have read the rebuttal.\nThe discussion was interesting, but I do not see a need to change my assessment.\nThe example of ad-blocking is indeed a case (the first I encounter) where l2-perturbed adversarial examples can be useful for a cyber attack. The other ones are less relevant (the attacks are not based on adversarial attacks in the sense used in the paper: images created with small gradient-direction perturbations). Anyway, talking about 'attacks on a self-driving car' is still not meaningful to me: I do not understand what adversarial examples have to do with this.\nI do not find the analogy of 'rocket improvements and moon landing' convincing: in '69, rocket improvements were of high interest in multiple applications, and moon landing was visible around the corner. \n\n", "Great, thank you for checking those Jacobian norms. Reporting those numbers very much strengthens your results.", "Let us answer your later question first. Pixels are normalized to [0,1] for all runs in our paper, for both MNIST and CIFAR.\n\nNow, as promised, the following are the average L2 norms of Jacobians of logits with respect to inputs, averaged over the first 1000 images in the MNIST test set.\n\nModel 2 (Madry et al. (2017)): 10.818453\nModel 3 (L2NNN with no adversarial training): 1.054181\nModel 4 (L2NNN with adversarial training): 0.8331261\n\nA few things to note:\n-- This is a surrogate for local Lipschitz constants, in particular it is measured at the nominal point and not over a neighborhood.\n-- The L2 norm of the Jacobian for Models 3 and 4 can be larger than 1, because we built them as multi-L2NNN classifiers, please see the first paragraph of Section 2.4.\n-- The comparison between Model 3 and Model 4 is consistent with our hypothesis.\n-- We would not take the Model 2 number at face value, because one could argue that Model 2 should be scaled down by a constant before making this measurement. This scaling is a tricky issue, please see our response to an earlier comment titled \"Very well done evaluation\".\n", "We would like to thank the three reviewers for the many helpful suggestions, and we are grateful for the extensive comments from others as well. We have made our best effort in revising the paper within the page limit for the main text and only enlarging the appendix. There are a number of places where we would have liked to elaborate more and we hope we will have a chance to do so using more space if this paper is accepted.", "Hi Aurko,\n\nWhen comparing two models, one has to decide on a common setup.\n\nIf one chooses Madry et al.'s setup of L2 defense evaluation, then the comparison is as we stated:\nTraining with L_inf adversary produces 90% against epsilon of 4.\nTraining with L2 adversary produces 63.73% against epsilon of 2.5.\n\nIf one chooses our setup of CW attack with high iteration count, then both the above numbers will reduce.\nTraining with L_inf adversary produces 7.6% against epsilon of 3.\nTraining with L2 adversary produces ?.\n\nUnfortunately we do not have access to their L2-adversary-trained model to fill in the question mark above. 
If one believes that Madry et al.'s setup of L2 defense evaluation extrapolates, that question mark is likely a very small number.\n", "If you look at Table 4 of Tsipras et al., on MNIST with an epsilon of 2.5 an l2 trained model gets 63.73% robustness (as you point out), while the l_\\infty trained model on epsilon 3.0 from Madry et al. gets 7.6%, while L2NNN gets 24.4% (Table 1 of this submission). Similarly on CIFAR-10 with an epsilon of 1.25 an l2 trained model of Tsipras et al. gets 39.76% (Table 4), while from your Table 1 the l_\\infty trained model with epsilon 1.5 from Madry et al. gets around 9% while L2NNN gets 20.4%. \n\nGranted the epsilon of Tsipras et al. is slightly smaller than in your case, still the robustness of the l_\\infty trained model is orders of magnitude smaller than the l_2 trained model against an l_2 adversary. The robustness of L2NNN also seems much worse (epsilon is slightly more, so it is not strictly comparable), but that's why I believe this work is missing a crucial baseline: comparison to an l_2 adversarially trained model.", "We do not mind at all that this debate happens at our paper's page, and if people wish to continue the discussion please do.\nHowever we the authors will stop responding to this particular subject, heeding the sage words of the Area Chair.", "Great, thanks for your speedy response. That clarifies my question. I'm very curious to see evidence that the local Lipschitz constant is reduced via your method.\n\nOne follow-up question -- are your l2 distances measured on [0,255] pixels, or pixels normalized to [0,1]?", "Hi Aurko,\n\nThank you and that is precisely the kind of runs we were looking for. A closer look at their numbers actually reinforces our argument.\n\nAccording to Table 4 in https://arxiv.org/pdf/1805.12152.pdf, the best MNIST L2 robustness by training with L2 adversary is 63.73% robust accuracy against epsilon of 2.5.\nAccording to Figure 6 in Madry et al. (2017), the L2 robustness by training with L_inf adversary is about 90% against epsilon of 4.\nNote that both papers are from the same authors.\nIn other words, by their own assessment, training with L_inf adversary produces stronger L2 defense than training with L2 adversary.\nThis exactly supports our argument and is consistent with our own experience as stated in our last response.\n\nHaving said the above, there is clearly something that Madry et al. know while we do not, which is how to convert PGD to an efficient L2 adversary, and we will try and find out.\n", "See https://arxiv.org/pdf/1805.12152.pdf (Figure 1)", "First off, thanks to everyone for the thoughtful discussion. I appreciate the area chair’s commitment to evaluating the technical details of the methods proposed in this paper, and I am not passing any judgement on the specifics of this proposed method. I would also like to apologize to the authors; I am trying to have a broader discussion about how this field can have a larger impact rather than trying to get any paper rejected. I imagine it must be frustrating to feel like you suddenly need to defend the motivation of your work when many similar prior works have been published, and a decent subset of the ML community currently finds lp robustness in itself an interesting topic of study. However, I do hope that future researchers interested in adversarial robustness consider my recommendation and begin evaluating on more general out-of-distribution inputs, at least in addition to lp robustness. 
Doing so can only help increase the impact of their work --- it will make their work interesting to a larger subset of researchers (and potentially satisfy reviewers who don’t find lp robustness interesting), and will help broaden our understanding of model performance in non-i.i.d. settings. Plus, it will have the added benefit that if later researchers falsify your lp-robustness numbers you will still have an irrefutable evaluation to fall back on. Researchers are of course welcome to report lp robustness if they find it interesting, but in my opinion by remaining fixated on small worst-case perturbations we are missing an opportunity to better understand and improve model performance in more general and realistic settings. Just as the authors identify that l2 robustness is a necessary step towards secure systems, so is achieving 0 test error with respect to distributional shift. It also seems likely that defenses which are truly making progress on this problem will also show improved robustness to distributional shift; indeed, it has already been shown that adversarial training does help improve model robustness to a host of more general image corruptions (see Section 5.2 of https://openreview.net/pdf?id=HJz6tiCqYm ).\n\tRegarding the security motivation. Reviewer2 correctly points out that most ML researchers (myself included) are not security researchers, and many of us have not spent the time thinking seriously about threat modeling. I have had the benefit of having discussions on this topic with security researchers as well as consulting with a product team over the threats their specific system faces, and doing so has changed how I view adversarial robustness. What I can say is that for most systems I’m aware of, I would not recommend that practitioners deploy a method that increases test error while improving lp robustness. Doing so would only reduce the security of the system. In the case of an ad-block system, increasing test error would increase the rate at which non-adversarially modified ads get past your system. For the small subset of sophisticated attackers, the gains in lp robustness would only force them to be slightly more clever in the modifications they make. If their company logo is an elephant, perhaps they could try shifting the location of the logo in their ad; similar modifications have been shown to cause model errors for object detection systems: https://arxiv.org/abs/1808.03305. I would recommend the introduction of this paper have a more nuanced discussion around the security motivation of small adversarial perturbations. If there is a system for which we feel like this threat model is relevant, be specific and explain why. If your method increases test error, explain why that is tolerable for the considered application.\n Thanks again for the discussion everyone. Overall, I want the same thing that adversarial example researchers want, which is more secure and robust models. I will be at NIPS and would be excited to discuss this topic in detail with anyone in person!\n", "Thank you for the interest and we're happy to clarify.\n\nFor the first point, please see our answer to Aurko Roy's comment.\n\nFor the second point, we want to clarify a few things. We have a guarantee that the Lipschitz constant of an L2NNN is strictly no greater than 1. Our hypothesis on local Lipschitz constant is regarding the effect of adversarial training on L2NNNs. 
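The Jacobian-norm surrogate discussed here can be computed along the following lines; `model` is a placeholder for a network mapping images to logits, and the value at a single point is only a proxy for the local Lipschitz constant over a neighborhood (a sketch, not the authors' code):

```python
import torch

def mean_jacobian_l2_norm(model, images, num_classes=10):
    """Average spectral (L2) norm of d(logits)/d(input) over `images`.
    `model` maps a (1, ...) input to (1, num_classes) logits; placeholder."""
    norms = []
    for x in images:
        x = x.unsqueeze(0).clone().requires_grad_(True)
        logits = model(x)
        rows = []
        for k in range(num_classes):
            # One gradient per logit gives one row of the Jacobian.
            g, = torch.autograd.grad(logits[0, k], x, retain_graph=True)
            rows.append(g.flatten())
        jac = torch.stack(rows)        # (num_classes, input_dim)
        norms.append(torch.linalg.matrix_norm(jac, ord=2).item())
    return sum(norms) / len(norms)
```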
Initially we expected that the gap between Model 3 (L2NNN with no adversarial training) and Model 4 (L2NNN with adversarial training) would fade away as the CW attacker uses more iterations. Tables 1 and 2 suggest the opposite, i.e., that the benefit of adversarial training is permanent on L2NNN and is not just making examples difficult to find. To explain this phenomenon, we cite Hein & Andriushchenko (2017) and hypothesize that adversarial training on L2NNNs reduces local Lipschitz constants and thereby enlarges the actual robustness ball.\n\nWe will measure the L2 norm of Jacobians, albeit only as a surrogate for local Lipschitz constants, and report back here. Please give us a day or two, we want to finish a revision first.\n", "I enjoyed reading this paper. I have a couple questions.\n\nFirst, to reiterate the comment below, during adversarial training, do you adversarially perturb in the signed gradient direction (FGSM), or the L2-normalized gradient direction? It seems to me because you are measuring robustness in L2, the latter should be used. I doubt FGSM will be effective, since this is not necessarily perturbing images in a large L2 measurable direction.\n\nSecond, you hypothesize your method reduces the local Lipschitz constant (modulus of continuity). Do you have any hard evidence to support this claim? There have been several papers now which provide reasonable and effective ways of estimating the Lipschitz constant of a network, at least locally. One simple method is to calculate the L2 norm of the Jacobian on a subset of the test images. The confidence gap is only part of the picture, but it does not correspond directly to the Lipschitz constant. I'd hope that you should see a noticeable decrease in the L2 norm of the Jacobian using your regularization method.", "Hi Aurko,\n\nThank you for the interest.\n\nAs we stated in the paper, Model 2's in Tables 1 and 2 were downloaded from Madry et al.'s GitHub pages (links in footnote on page 4). To be more specific, they were fetched under the name \"secret\": these were released after they closed the black-box leaderboards and match what was reported in their paper.\nIt is true that Model 2's were trained with L_inf adversary. However, let us quote from Madry et al. (2017): \"our MNIST model retains significant resistance to L2-norm-bounded perturbations too -- it has quite good accuracy in this regime even for epsilon=4.5.\" and \"our networks are very robust, achieving high accuracy for a wide range of powerful adversaries ...\" In other words, Madry et al. do not see the use of an L_inf attacker in training as a limiting factor to L2 defense.\nWe are not aware of any published defense results that beat Madry et al. (2017) as measured by any norm. Please see also Athalye et al. (2018) for a comparison between Madry et al. (2017) and a set of other defense works. We are also not aware of any published MNIST or CIFAR models that were trained with L2 adversary and achieved sizable white-box defense.\nAnother fact to consider is that our Model 4's were trained with the same L_inf attacker (PGD with default hyperparameters from Madry et al.'s GitHub) and that improved L2 robustness as reported in Tables 1 and 2.\n\nIt is unclear whether your suggestion would work in practice. 
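For concreteness, the suggested L2 flavor of PGD would look roughly like the sketch below: ascent steps along the L2-normalized gradient, a radial projection back onto the epsilon-ball around the input, and clipping to the valid pixel range. The naive radial projection here ignores the interaction between the ball and the box constraint, which is exactly the difficulty analyzed in what follows (an illustrative sketch, not the paper's training procedure):

```python
import torch

def pgd_l2(model, loss_fn, x, y, eps, step, iters):
    """Untargeted L2 PGD sketch for NCHW image batches: ascent along the
    L2-normalized gradient, then radial projection onto the eps-ball around
    x, then clipping to [0,1]. The clipping can move the point off the
    sphere again, which is the ball/box interaction discussed here."""
    x_adv = x.clone()
    for _ in range(iters):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        g, = torch.autograd.grad(loss, x_adv)
        g_norm = g.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
        x_adv = x_adv.detach() + step * g / g_norm   # L2-normalized step
        delta = x_adv - x
        d_norm = delta.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
        x_adv = (x + delta * torch.clamp(eps / d_norm, max=1.0)).clamp(0.0, 1.0)
    return x_adv.detach()
```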
The first question to ask is whether one should clip after all PGD iterations or clip per iteration.\nIf one chooses to clip after all PGD iterations, then this new adversary is not much different from PGD with a smaller L_inf epsilon, and it's more likely to weaken the effect of adversarial training than help it.\nIf one chooses to clip per iteration, then for each PGD iteration, we need to solve for the crossing point between a sphere and a line, where the line does not cross the center of the sphere except for the very first iteration, and where the sphere has been modified by the value range of each input entry. This is a quadratically constrained quadratic programming problem, and solving it per iteration would make PGD adversarial training much more expensive if not prohibitive, and it is difficult to implement on GPU.\nBut by all means, we'd encourage you to do so, improve on Madry et al. (2017)'s L2 defense, and publish if it succeeds.\n\nAgain, we are not aware of any published models from successful adversarial training with L2 adversary. We ourselves have made an unsuccessful attempt to use the CW L2 attack in training, and it did not work because L2 attacks with low iteration counts do not seem to help our models yet we cannot afford L2 attacks with high iteration counts in the training loop. As a result, we decided to use the original PGD to build our Model 4's in Tables 1 and 2, and that gave them a nice boost in L2 robustness.\n\nWe would be very interested if someone demonstrates successful adversarial training with L2 adversary, as we want to learn from him/her to improve our Model 4's and we would be happy to include more competitors in Tables 1 and 2.\n", "When computing the PGD step for the model from Madry et al., do you use sign(gradient) or do you normalize the gradient by its \\ell_2 norm, when taking a single step? If you use the former then it seems like an unfair comparison since you are attacking it in an \\ell_2 ball. In any case it would be interesting to see how PGD training w.r.t. the \\ell_2 ball (so that you normalize the gradients by the \\ell_2 norm) compares to the proposed method.", "Hi Justin,\n\nOn the first example of ad-blocking.\nAn ad publisher has content that he/she wants to deliver. That content, if delivered without perturbation, would be correctly handled, i.e. blocked, by perceptual ad-blocking. Hence he/she has a motivation to deliver that content by adding small perturbations.\nHumans are good at filtering out ads, and a spammer who uses the lower image in Figure 2 of https://arxiv.org/pdf/1712.03141.pdf would be an unsuccessful spammer, because nobody would pay attention to contents in such an image. A successful spammer is one that can take the upper image, add small perturbations, and deliver it to people's inboxes. For phishing and spam attacks it's even more important that people cannot recognize them as such, and imperceptible perturbations are useful.\n\nOn the second example of face recognition.\nNowadays many people use their faces to unlock cellphones. If someone can print eyeglass frames to gain access to other people's devices, it's a big security gap for millions of people.\nBy the way, the relative magnitude of an eyeglass frame with respect to a face is roughly on par with the relative magnitude of L2 norm of 3 with respect to an MNIST image.\n\nOn the third example of stop sign.\nIf a stop sign gets knocked over or covered, a policeman will correct the situation. 
If a stop sign has four stickers on it, nobody would bother until an accident happens.\nBy the way, the relative magnitude of those four stickers with respect to the whole stop sign is roughly on par with the relative magnitude of L2 norm of 3 with respect to an MNIST image.\n\nSome of Justin's arguments seem to be that L2 and L_inf norm metrics do not constitute a sufficient condition for a robust classifier. That is absolutely true. For example, if one has a MNIST classifier that is 90% robust against L2 norm of 3 and L_inf norm of 0.3, it might still break down if an input image is rotated by an angle, while a truly robust classifier like a human would not change its decision. By the way, the \"lines\" attack and common corruption benchmark do not constitute a sufficient condition either, and the lines-attack pictures on Madry et al. (2017) seem more excusable than our Figure 5 (nonetheless we'll be happy to evaluate our models under those conditions).\n\nThe flip-side question is whether robustness as measured by L2 and/or L_inf norm metrics is a necessary condition for a robust classifier. The answer is a big yes. We must learn to walk before we can run, and frankly the status quo of neural-network robustness research is at the crawling stage. While we are crawling, we do not have the luxury to look down upon people who are working to meet necessary conditions, and rejecting defense papers based on an argument of not having a sufficient condition would only hinder progress and make those heftier goals all the more difficult to reach.\n\nWe agree with many points Justin made in this paper\nhttps://arxiv.org/pdf/1807.06732.pdf\nHowever, if the conclusion is to suppress the \"perturbation defense\" literature, that would be wrong. In our opinion, advances in white-box defense, a.k.a. robustness, a.k.a. generalization, of neural networks as measured by L2 norm and/or L_inf norm metrics are not only valid but also essential topics for the deep learning community, and our paper represents a big step forward. Given the size of improvement, L2NNN is likely a key ingredient in future truly robust solutions, of which people have little clue yet. When people do find those solutions, there would be profound impacts across the board on both security and generalization.\n", "We strongly agree with AnonReviewer2's arguments.", "First of all, adversarial examples give us insights into the limitations of current machine learning methods. This is valuable even if there are no practical attacks that represent an immediate security threat.\n\nSecond, adversarial training methods allow us to enforce additional constraints on how we generalize from data. In some cases (e.g., computer vision), we have reason to believe that small perturbations of the input should not change the predicted label. This is another kind of background knowledge that can improve generalization.\n\nThird, there are indeed potential security risks! Here's a recent example:\n https://www.cs.dartmouth.edu/farid/downloads/publications/eusipco18.pdf\n\nSummary: People who manipulate images can disguise their manipulations by printing out the image and taking a picture of it. The result is a real photograph of a manipulated photograph. Convolutional neural networks can be trained to detect these attacks, but they can also be fooled by small perturbations to the input. 
Extensive retraining reduces this vulnerability somewhat.\n\nThe reason why most adversarial machine learning work doesn't have a good threat model is that most machine learning researchers are not security researchers. But that doesn't mean there aren't risks, and it doesn't make the work useless.", "Thanks to the authors and to the reviewers and various other commenters for the vigorous discussion surrounding the L2 threat model. While this can no doubt be discussed at great length (and all participants should feel free to do so), I want to note that I am most interested in judgments regarding the specific technical content of this paper. It is clear that there are both a number of researchers excited about exploring the Lp-norm threat model, and a number who are deeply skeptical of the model. I am very familiar with the arguments in both directions and will take both viewpoints into account in my eventual meta-review.\n\nSo by all means continue discussing, but consider turning your energy towards the specifics of the paper rather than whether this general area is worth exploring. This has the greatest chance of influencing the eventual decision.\n\nBest,\nICLR Area Chair", "Such high ratings and evaluations for papers on adversarial robustness to L_2 or L_inf attacks, which have a completely unrealistic threat model, while other more practical papers with real-world impact struggle, seem completely unreasonable to me.", "The examples 1 and 3 you give of adversarial examples for real-world systems only explore small perturbations without much justification for doing so. For the ad-block case, the attacker has full control of the image and can design any image from scratch that does not match the model's hash. Why should the attacker restrict themselves to small perturbations of a correctly handled image? For street signs an attacker could knock it over, construct a large adversarial “yard sale” sign placed next to the sign, or just place a bag over it. Given how brittle classifiers are on out-of-distribution inputs, it’s plausible that some signs may be misclassified just because it’s a foggy day. Example 2 was discussed in https://arxiv.org/abs/1807.06732 . Glasses won’t be useful in settings where a security guard can match the person’s face to the face returned by the image recognition system (they could also be instructed to have people remove all accessories before being scanned). Even for some settings where no guard is present to check the model's output, small l_2 perturbations do not capture the action space of the attacker.\n\nIt's clear the adversarial example field is popular and receives many citations, but that doesn’t mean on its own that this is well-motivated security research. What some critics are asking for is that security-motivated work perform a realistic assessment of the threats facing actual systems that is unbiased by the \"surprising\" phenomenon of small adversarial perturbations. The papers you cite seem more interested in attacking systems with small perturbations rather than properly designing a threat model that considers all of the options available to the attacker. There is a clear mismatch between how real-world attackers typically break actual systems and what the adversarial example field currently focuses on. Consider for example the adversarial image spam shown in Figure 2 of https://arxiv.org/pdf/1712.03141.pdf ; this is what attackers actually do. 
Anyone who has browsed YouTube has probably encountered adversarially modified videos that have been uploaded to evade copyright detection; such modifications typically involve very large and obvious transformations to the input.\n\nYour evaluation only demonstrates small improvements to a toy threat model. Can you provide evidence that your method is progress towards securing systems in more realistic threat models? If you can demonstrate that your method improves model generalization on out-of-distribution inputs then that could be more convincing. For example, the Madry adversarial defense was 0% robust to the “lines” attack defined in Section 5.1.1 of https://arxiv.org/abs/1807.06732; does your method improve robustness to this unseen transformation? Even better, one could evaluate on the recently proposed common corruption benchmark https://openreview.net/forum?id=HJz6tiCqYm, which considers a host of realistic image corruptions that models may face at deployment.", "We are happy to comment on relations to these works.\n\nThe Miyato et al. paper has a different way of approximating the spectral radius of a weight matrix. In place of the strict bound of (2) in our paper, they approximate the current spectral radius based on a companion vector, which is intended to approximate the top singular vector at the moment and which is updated through power iterations. The upside of their approach is that it can be computationally cheap; in fact, they do just one power iteration on the companion vector after each training batch. The downside of their approach is that their spectral radius is a coarse approximation: for example, consider a scenario where two top singular values are close in magnitude, and the companion vector represents v1, one of the two corresponding singular vectors v1 and v2; their regularization would suppress the first singular value, and after a few batches the second singular value becomes dominant; at this point, it would take many power iterations to move the companion vector from v1 to v2, and one power iteration per batch certainly would not make it. When there are more singular values with similar magnitudes, the situation gets even worse. The end result is likely that their models are expansive. The empirical results suggest that they improve GANs; it's unclear how they would perform under adversarial attacks.\n\nThe Tsuzuku et al. paper differs from ours in a number of ways. To estimate Lipschitz constants, they follow Miyato et al. and use the same companion-vector approach. As we explained earlier, this approach has its limitations. They have a different way of modifying the loss function from ours. There does not seem to be anything that competes with our architecture changes. Their MNIST results do not seem strong; we cannot comment further on their empirical results as they only reported CW attacks with 100 iterations.\nWe would like to bring it to AnonReviewer1's attention that this is an adversarial defense work that was accepted into NIPS with weaker results than ours.\n\nThe Scaman and Virmaux paper is on analysis of Lipschitz constants and not on optimization. In terms of analysis, the big difference between their AutoLip and ours is that they calculate Lipschitz constants of linear layers through power methods rather than using our bound of (2). Note that there is no companion vector here as it's one-shot analysis. 
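For concreteness, here is a minimal sketch of the one-shot power-method estimate under discussion (a hedged illustration only, not code from any of the cited papers; the weight matrix W and the iteration count are placeholders):

import numpy as np

def power_method_sigma_max(W, num_iters=50):
    # Estimate the largest singular value of W (its L2 spectral norm)
    # by power iteration on W^T W, starting from a fresh random vector.
    v = np.random.randn(W.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(num_iters):
        u = W @ v                 # forward multiply
        v = W.T @ u               # one power iteration on W^T W
        v /= np.linalg.norm(v)
    return np.linalg.norm(W @ v)  # approximate top singular value

Running many iterations from a fresh random vector corresponds to the one-shot analysis setting; keeping v as a persistent companion vector and doing a single iteration per training batch corresponds to the Miyato et al. setting, with the staleness caveat described above.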
Power methods would give a tighter estimate than our bound of (2); however, they are too expensive to use in training a neural network and hence do not have practical implications for building robust models. Their SeqLip algorithm gives a tighter bound, but is even more expensive, than AutoLip, and similarly would be difficult to use in training models.\n\nThe Gouk et al. paper also uses the power method to estimate L2 Lipschitz constants of linear layers, and this is the main difference from ours in terms of regularization. We have not found an explicit statement on whether they use a companion vector or start from a random vector each time. Some statements suggest that they do use a companion vector like Miyato et al.; for example, they mentioned that for one experiment they do only one power iteration. As discussed earlier for Miyato et al., this approach has its limitations. In fact, the authors acknowledged multiple times in the text that they are underestimating the L2 Lipschitz constants of linear layers. There are no robustness results.\n\nThe Sedghi et al. paper is an interesting paper and they invented a way to compute the Lipschitz constant of a convolution layer. If their proof is correct, it would produce a tighter bound than ours, with a complexity that is lower than that of power methods. It seems that the computation cost is still fairly high, and hence they use it once every 100 iterations to regularize the convolution layers, and that resulted in improved nominal accuracy on CIFAR-10. It is unclear whether this is applicable in training robust models, especially if we can only afford to do it once in a while -- it might be, and it is worth looking into, and we thank the commenter for the reference. There are no robustness results in their paper. Note also that this is for convolution layers only.\n", "Please consider the following three examples.\n\nThe first example is from last week:\nhttps://arxiv.org/abs/1811.03194\nIt demonstrated that ad-blocking based on neural networks is vulnerable and easily defeated by adversarial examples. Furthermore, if ad-blocking based on neural networks is deployed, it \"would engender a new arms race that overwhelmingly favors publishers and ad-networks.\" Also because ad-blocking based on neural networks needs to run with a high privilege level inside a browser, it would \"introduce new vulnerabilities that let an attacker bypass web security boundaries and mount DDoS attacks.\"\n\nThe second example is:\nhttps://www.cs.cmu.edu/~sbhagava/papers/face-rec-ccs16.pdf\nIt demonstrated that by printing eyeglass frames, one person can impersonate another person (a specific choice) in front of a state-of-the-art face-recognition algorithm. Please see their Figure 4: the authors can pretend to be Milla Jovovich and Carson Daly by just wearing glasses.\n\nThe third example is from CVPR 2018:\nhttps://arxiv.org/pdf/1707.08945.pdf\nIt demonstrated that by putting just four stickers on a stop sign, a neural network would recognize it as a \"speed limit 45\" sign, and it would make the same mistake consistently from different distances and angles. The authors also performed a field test in a moving vehicle.\n\nAs we mentioned, currently the attack-side research has an upper hand over the defense side, and they are only getting better and entering the physical world more and more. 
That is all the more reason to encourage research on adversarial defense.\n\nPeople in the security field are concerned enough that a security conference accepted Carlini & Wagner (2017a) and gave it the best student paper award, and it has received more than 500 citations to date. ICLR 2018 accepted Madry et al. (2017), which has received more than 250 citations.\n\nRegarding the reviewer's second point, allow us to use an analogy: before 1969, the scientific community should not have rejected papers on rocket improvements on the grounds that nobody had landed on the moon.\n", "I think the answer is 'no', and this is the main reason I think the paper should be rejected.\n\nThe problem is twofold: \n1) I do not think adversarial attacks are a real threat to AI systems.\nI did not see here, or in any other paper, a presentation of a convincing scenario in which adversarial attacks can be used to cause harm which cannot be caused with much simpler means. Recall that adversarial attacks are creating images which look like one class to humans, yet a different class to machines. Where and how can these examples be used for cyber attack purposes? How are they a threat?\nMoreover, white-box examples require acquaintance with the model to produce - an even harder task for the attacker.\nFor example: the authors say \"It is absolutely true that we cannot put a 24.4%-robust classifier in a self-driving car and declare mission accomplished\" - before agreeing that 24.4% is not good enough, one should first argue that one can attack a self-driving car with adversarial attacks. I do not currently see how: if someone malicious has gained access to the camera of a self-driving car, why would he want to present it with adversarial examples? Instead, he can just feed it with images of a fake reality and cause harm directly.\n\nI would be glad to hear arguments/examples showing that adversarial examples are of cyber importance or of any practical importance. This may change my view of the topic and the paper.\n\n2) The second point was already mentioned in the previous discussion: if adversarial examples could be used to cause harm, the current method would not help. It just does not defend well enough (as with all other methods).\n\nIf such a paper is to be accepted, I think a thorough discussion is required in its introduction to explain why adversarial examples are of interest at all. This is far from being clear. \n\n", "We thank the reviewer for the thoughtful review and many helpful suggestions. We are updating the paper to incorporate some of the suggestions and will post a revision soon. In the responses below, related points are grouped together and ordered roughly by significance.\n\n1) Regarding the third condition (preserving distance).\nWe thank the reviewer for pointing out presentation issues related to the third condition.\nIn short, the second and third conditions are two aspects of enlarging confidence gaps: the second condition does so by modifying the loss function, while the third condition does so by modifying the network architecture. The practical embodiments of the third condition include two-sided ReLU, norm-pooling, and a few more in Appendix A.\nIt is true that we did not derive two-sided ReLU or norm-pooling from a mathematical formulation of the third condition. 
Rather, we started from a heuristic notion, as the reviewer put it, of preserving distance, and hypothesized that popular non-linear functions like ReLU and max-pooling unnecessarily restrict confidence gaps (see Sections 2.2 and 2.3), and proposed two-sided ReLU and norm-pooling as improvements on preserving distance, and empirically verified their effects in enlarging confidence gaps and improving robustness.\nThe argument that \"a network that maximizes confidence gaps well must be one that preserves distance well\" was meant to say that preserving distance well is a necessary property for an L2NNN with large confidence gaps, and hence to motivate architecture changes. Again, the second condition is about the loss function and the third condition is about architecture.\nResults in Table 3 suggest that architecture choices are important for nonexpansive networks, specifically that some non-linear functions that are not in common practice work better than the more standard ones. These unusual functions, two-sided ReLU and norm-pooling, have the property that they do a better job at preserving distance and let the parameter training, rather than architecture, determine what information is thrown away. Empirical results support that these functions are best practice when used in nonexpansive networks. In contrast, preserving distance is less important in ordinary networks because parameter training can choose weights that amplify distances arbitrarily. There are likely more architecture choices for nonexpansive networks which may improve future results, and architecture exploration is one of our future directions.\n\n2) Regarding comparison against Kolter & Wong (2017).\nWe did not realize that the Kolter & Wong (2017) models are available for download. The mention of scalability in Section 4 was part of the literature review and not intended as an excuse. We can certainly try and put the Kolter & Wong (2017) models through the same L2-defense comparisons as in Tables 1 and 2. This may take some time as we need to port the models to be compatible with the CW attack code, and we will report back here.\nFor L_inf defense, we did report a comparison. Table 4 shows our L_inf defense results under the same epsilon 0.1 as used in Kolter & Wong (2017). The results suggest that the measured L_inf defense is roughly on par.\nWe will also add a reference to their follow-up paper: https://arxiv.org/abs/1805.12514.\n\n3) Regarding using only the confidence-gap loss.\nUnfortunately, the confidence-gap loss (6) alone would not work. The problem is that (6) is too weak in penalizing mistakes. Consider a hypothetical MNIST neural network that always outputs logit value 1000 for label 0, and logit value 0 for the other nine labels. It would be a useless classifier, yet its (6) loss would be approximately -100 (-1000*0.1+0*0.9), which is lower than that of a useful classifier.\nThe reviewer might be interested to see what happens if we put more weight on the confidence-gap loss. We reported additional accuracy-robustness trade-off points in the second to last paragraph of Section 3.1. That trade-off curve continues and here is another point with heavier weight on (6): nominal accuracy drops to 97.9% and the robust accuracy (1000-iteration attacks) increases to 24.7%. 
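Returning to the architecture elements of point 1, a minimal hedged sketch of two-sided ReLU and norm-pooling may help make them concrete (shapes are illustrative only, the norm-pooling sketch assumes an L2 norm over non-overlapping windows, and this is not our released code):

import numpy as np

def two_sided_relu(x):
    # Maps R^n -> R^2n by concatenating ReLU(x) and ReLU(-x).
    # Unlike plain ReLU, the magnitude |x_i| of every coordinate remains
    # recoverable, so less distance information is thrown away.
    return np.concatenate([np.maximum(x, 0.0), np.maximum(-x, 0.0)])

def norm_pool(x, window=2):
    # L2-norm pooling over non-overlapping windows of a 1-D signal
    # (length assumed divisible by the window). Replacing max-pooling's
    # max with the window's L2 norm keeps the layer nonexpansive while
    # preserving distance better than taking the max alone.
    x = x.reshape(-1, window)
    return np.sqrt((x ** 2).sum(axis=1))

Both operations are nonexpansive; for example, two-sided ReLU maps the pair x=0, y=-5 to points at distance 5, whereas plain ReLU maps both to 0.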
By the way, in the second to last paragraph of Section 3.3 we stated a hypothesis that this tradeoff is due to fundamentally the same mechanism as the tradeoff shown in Table 6.\n", "\n4) Regarding presentation issues.\nThank you very much for the suggestions and we agree with most. We are making some of the changes and will post a revision soon, and will do more if this paper is accepted and more space is allowed in the final version.\nOne thing we want to point out is that L2NNN's 93.1% performance from 75%-random training labels is significantly higher than the best of ordinary networks, see Tables 5, 7, and 8.\nOur measurements of L_inf defense of Madry et al. (2017) are close to those reported in their paper. Because we use the same L_inf epsilon values, the numbers in Table 4 can be directly compared against those reported in Madry et al. (2017), Kolter & Wong (2017) and Raghunathan et al. (2018). As we acknowledged at the end of Section 3.1, Madry et al.'s MNIST L_inf result is still the best, while for CIFAR-10 we are on par.\n\n5) We agree with the reviewer that pointing out that MNIST is not a solved problem is important to the field as well.\nAnd we thank the reviewer for appreciating that L2NNNs have an easily accessible measure of robustness.\n\nPlease let us know if we have missed anything and we'd be happy to continue the discussion.\n", "We thank the reviewer for the thoughtful review and helpful suggestions, especially for appreciating the scrambled-label experiments. We are updating the paper to incorporate some of the suggestions and will post a revision soon. In the responses below, related points are grouped together and ordered roughly by significance.\n\nBefore going into the list, we wish to emphasize that this paper sets a new state of the art in adversarial defense. Currently in the field, the attack side has an upper hand over the defense side, and indeed there has not been a defense that is practically significant, as the reviewer put it, from the perspective of real-life applications. However, if and when that happens, there will be wide implications across most deep-learning applications in terms of both security and generalization. That is more reason to look for advances in defense, and our paper represents a big step forward.\n\n1) Regarding the significance of our defense results and types of attacks.\nLet us put our defense results in context. The white-box non-targeted scenario is the easiest for the attacker and the most difficult for defense. White-box means that the attacker has complete information about a classifier, i.e., its architecture and parameters. By definition, if a classifier achieves a certain degree of white-box defense, its defense in black-box or transfer-attack scenarios can only be higher. Non-targeted means that any misclassification is considered a successful attack, while targeted attacks must reach a certain label. If a classifier achieves a certain degree of defense against non-targeted attacks, its defense in the targeted scenario can only be higher. Therefore, white-box non-targeted defense is the holy grail of defense research, as it subsumes other types, and that's what we focus on.\nThen there is the choice of how to quantify noise, and the consensus in the field seems to be L2 norm or L_inf norm, preferably both. This choice leads to two measurements, defense against L2-bounded attacks and defense against L_inf-bounded attacks.\nWhite-box defense has been an elusive goal and numerous defense proposals have failed. 
Before our work, adversarial training has been considered the mainstream approach and Madry et al. (2017) has been the state of the art.\nThis paper sets a new state of the art for defense against white-box non-targeted L2-bounded attacks. The reviewer commented that it is a very specific attack type, and we want to point out this type subsumes all other L2-bounded types. At the same time, L2NNNs also exhibit, in Table 4, near-state-of-the-art defense against white-box non-targeted L_inf-bounded attacks, which subsumes all other L_inf-bounded types. It is absolutely true that we cannot put a 24.4%-robust classifier in a self-driving car and declare mission accomplished. However, L2NNNs produce better defense than all other methods that are known to the field, and our results point in a different direction from what people thought of as the mainstream approach.\nThe degree of interest in our results can be felt by the number of non-reviewer comments we have received, and one commenter has kindly tested our models; please see the comment titled \"Very well done evaluation\". We argue that that is side evidence that our results represent a meaningful development.\nWe agree with the reviewer that a better introduction would make this paper more accessible to readers outside the subfield of adversarial attack and defense, and we will improve on that if this paper gets accepted and more space is allowed in the final version.\n\n2) Regarding L2NNN as a general regularization technique beyond adversarial defense.\nWe thank the reviewer for the appreciation, and we ourselves are proud of our scrambled-label results. The results also provide a partial answer to the questions posed by Zhang et al. (2017) (best paper award ICLR 2017) which reported that no traditional regularization techniques seem to stop neural networks from memorizing random labels. Our results suggest that L2NNN is one regularization technique that can suppress memorization in exchange for stronger generalization.\nWe agree strongly with the reviewer that L2NNN as regularization has wider potential outside adversarial defense. This indeed warrants a comprehensive study, which we will pursue in future work. For this paper, adversarial defense is our main result, and we only scratch the surface of L2NNN's other properties.\n", "We thank the reviewer for the thoughtful review and helpful suggestions. We are updating the paper to incorporate some of the suggestions and will post a revision soon. In the responses below, related points are grouped together and ordered roughly by significance.\n\nBefore going into the list, we wish to emphasize that this paper sets a new state of the art in adversarial defense. For security and for generalization, robustness in terms of L2 norm and L_inf norm are both important. As the reviewer pointed out, notable defense progress in the field so far has been mostly against L_inf-bounded attacks, except for some results in Madry et al. (2017). L2 defense is a less understood, and perhaps more difficult, problem than L_inf. Since both attack types are equally valid and there have been fewer advances on L2 defense, that makes any work in that area more important, and our paper represents a big step forward.\n\n1) Regarding the loss function.\nThe reason that (4) can express the cross-entropy loss of an ordinary network is the following. 
Given any ordinary ReLU network without weight regularization, pick one layer: if we divide the weight matrix of this layer by a constant c, divide the bias vectors of this layer and all subsequent layers by the same c, and multiply the final logits by the same c, then there would be no change in the end-to-end behavior of this network. The only change from the above is that the internal activations from that layer on are all scaled by 1/c. If we do the above for all layers and choose c=sqrt(b(W)) for each layer, where b(W) is from equation (2), we can convert the initial ordinary network to a nonexpansive network, only now with extra multipliers on the logits. After considering split layers (even when there are no split layers, we treat the last linear layer as a split layer, see the first paragraph of Section 2.4), the multipliers on each logit become different. Therefore, the cross-entropy loss of the initial ordinary network is equal to term (4) of the nonexpansive network with proper u_1, ..., u_K values.\nThe average in equation (6) is taken over a batch.\n\n2) Regarding L2 robustness and L_inf robustness.\nAs the reviewer kindly pointed out, we report defense results against both L2-bounded attacks and L_inf-bounded attacks. For L2, L2NNNs set a new state of the art in Tables 1 and 2. At the same time, L2NNNs exhibit, in Table 4, near-state-of-the-art defense against L_inf-bounded attacks.\nIt is true that Model 2's, from Madry et al. (2017), were trained with an L_inf adversary. However, let us quote from Madry et al. (2017): \"our MNIST model retains significant resistance to L2-norm-bounded perturbations too -- it has quite good accuracy in this regime even for epsilon=4.5.\" and \"our networks are very robust, achieving high accuracy for a wide range of powerful adversaries ...\" In other words, Madry et al. do not see the use of an L_inf attacker in training as a limiting factor to L2 defense.\nWe are not aware of any published defense results that beat Madry et al. (2017) as measured by any norm. Please see also Athalye et al. (2018) for a comparison between Madry et al. (2017) and a set of other defense works. We are also not aware of any published MNIST or CIFAR models that were trained with an L2 adversary and achieved sizable white-box defense, and we ourselves have not found an efficient way to train with an L2 adversary.\nAnother fact to consider is that our Model 4's were trained with the same L_inf attacker and that improved L2 robustness as reported in Tables 1 and 2.\n\n3) Regarding confidence gap and robustness bound.\nThe reviewer is correct that we could have chosen to report provable robustness rather than measured robustness, by using the noise bound guarantee provided by confidence gaps. If we had chosen a smaller L2 epsilon, say 2 for MNIST, our Model 3 has a provable robustness of 17.0%; if we chose 1.5, our Model 3 has a provable robustness of 46.5%.\nThe reason that we chose measured robustness can be seen in the examples in Figure 3. For each of the images, the guaranteed bound on noise L2-norm is half of the gap value, yet in reality the noise magnitude needed is much larger, 1.5X to 2X larger, than the guarantee. The reason is that the true noise bound is a function of local Lipschitz constants, as pointed out by Hein & Andriushchenko (2017), and local Lipschitz constants can be substantially below 1 in our models. 
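For concreteness, the gap-based guarantee can be sketched as follows (a hedged illustration with placeholder names, assuming each logit of the multi-L2NNN classifier is individually nonexpansive, i.e., 1-Lipschitz in the L2 sense):

import numpy as np

def certified_l2_radius(logits):
    # If every logit can move by at most ||delta||_2 under an input
    # perturbation delta, the top prediction cannot flip as long as
    # 2 * ||delta||_2 < confidence gap, giving a radius of gap / 2.
    sorted_logits = np.sort(logits)
    gap = sorted_logits[-1] - sorted_logits[-2]  # confidence gap
    return gap / 2.0

This is exactly why the guaranteed noise bound in the Figure 3 discussion is half of the gap value.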
The true bound is prohibitively expensive to compute except for very small networks.\nTherefore, in order to demonstrate our defense with a more meaningful L2 epsilon of 3 and also compete with Madry et al. (2017) on the larger epsilon, we chose to report measured robustness.\n", "4) Regarding the robustness-accuracy tradeoff.\nThere is indeed a robustness versus nominal accuracy trade-off. We reported the trade-off in the second to last paragraph of Section 3.1, and we revisited the topic in the second to last paragraph of Section 3.3 to state a hypothesis.\nAs the reviewer pointed out, we are not the only defense work that faces this trade-off. It remains an open question whether such a trade-off is a necessary part of life.\nOur hypothesis on L2NNN's trade-off is stated at the end of Section 3.3: by having a second goal of enlarging confidence gaps, L2NNN's parameter training automatically and selectively misclassifies certain training data in exchange for larger gaps. In the context of original training labels, this implies that some original labels are ignored and that leads to lower nominal accuracy. Although, looking at the examples in Figure 4, one could argue that some of the original labels are better ignored. If this hypothesis is true, this trade-off mechanism is a double-edged sword, as it both costs us nominal accuracy in Tables 1 and 2 and helps us in dealing with noisy data in Table 5. This is only a hypothesis, and we may have a better answer in future work.\n\n5) Regarding max-margin training.\nThank you very much for the suggestion. Do you mean this paper http://www.jmlr.org/papers/volume10/xu09b/xu09b.pdf? Please advise. We will study the connection for future work, and we also want to see if it is appropriate to cite in this paper.\n\nPlease let us know if we have missed anything and we'd be happy to continue the discussion.\n", "3) Regarding the robustness-accuracy trade-off.\nThere is indeed a robustness versus nominal accuracy trade-off. We reported the trade-off in the second to last paragraph of Section 3.1, and we revisited the topic in the second to last paragraph of Section 3.3 to state a hypothesis.\nWe are not the only defense work that faces this trade-off. As AnonReviewer2 pointed out, adversarial training has a similar trade-off. It also can be seen in the adversarial polytope work of https://arxiv.org/abs/1805.12514. It remains an open question whether such a trade-off is a necessary part of life.\nOur hypothesis on L2NNN's trade-off is stated at the end of Section 3.3: by having a second goal of enlarging the confidence gap, L2NNN's parameter training automatically and selectively misclassifies certain training data in exchange for larger gaps. In the context of original training labels, this implies that some original labels are ignored and that leads to lower nominal accuracy. Although, looking at the examples in Figure 4, one could argue that some of the original labels are better ignored. This is only a hypothesis. We agree with the reviewer that this trade-off is an important subject to study; we may have a better answer in future work.\n\n4) Regarding omitted proofs.\nWe thank the reviewer for the suggestions and we are updating the appendix to add proofs, and will post the revision soon.\n\n5) Regarding loss terms (4) and (5).\nRemoving one of these two terms would not result in as much degradation as in Table 3. 
If one had to choose one of the two, it would make sense to use (5), and the end result would be a slight degradation in nominal accuracy compared with the current results in Tables 1 and 2.\n\n6) Regarding architecture.\nOur models 3 and 4 in Tables 1 and 2 all use convolution layers followed by fully connected layers, some of which are split layers with stacks dedicated to individual logits. These are conventional architecture choices, and our unconventional elements are two-sided ReLU and norm-pooling. By the way, they are all available for download at the Dropbox link on page 4. \nFor the scrambled-label experiments, as detailed in Tables 7 and 8, we wanted to be fair and built ordinary networks with two different architectures, one shallow and one deep. Then the ordinary-network section of Table 5 is the entry-wise max of Tables 7 and 8. The L2NNNs use the same architecture for MNIST throughout this paper.\n\n7) Regarding hybrid models reported at the end of Section 3.2.\nThe following are measurements of the said hybrid models under the same settings as in Tables 1 and 2, after 1000 iterations: MNIST 62.9%, CIFAR-10 6.4%. Please note that the base for comparison is Models 3 in Tables 1 and 2. The CIFAR-10 number is in line with expectation, and 6.4% is a degradation from 10.1%. The MNIST number, however, is an artifact of the CW attack code not being designed for ensemble models, and Carlini & Wagner could likely do much better if they knew about and took advantage of the hybrid mechanism. The real MNIST number ought to be somewhat below 20.1%.\n\n8) Regarding the missing reference.\nThank you, and we will add the reference.\n\nPlease let us know if we have missed anything and we'd be happy to continue the discussion.\n", "I don't see how these two papers are related at all: the former is about GANs, and the second is talking about increasing accuracy, and doesn't mention robustness at all.\n\nMiyato, et al. Spectral Normalization for Generative Adversarial Networks. ICLR’18.\n\nSedghi, et al. The singular values of convolutional layers. Arxiv, 1805.10408.", "How does your work relate to the following papers?\n\nMiyato, et al. Spectral Normalization for Generative Adversarial Networks. ICLR’18.\n\nTsuzuku, et al. Lipschitz-Margin Training: Scalable Certification of Perturbation Invariance for Deep Neural Networks. NIPS’18.\n\nK. Scaman and A. Virmaux. Lipschitz regularity of deep neural networks: analysis and efficient estimation. NIPS’18.\n\nGouk, et al. Regularisation of Neural Networks by Enforcing Lipschitz Continuity. Arxiv, 1804.04368.\n\nSedghi, et al. The singular values of convolutional layers. Arxiv, 1805.10408.\n", "I read this paper with some excitement. The authors propose a very sensible idea: simultaneously maximizing the confidence gap and constraining the Lipschitz constant of the network, thus achieving a guarantee that no L2-bounded perturbation can alter the prediction so long as the perturbation is bounded by some function of the confidence gap. \n\nThe main idea consists of three parts:\n (1) smooth networks (fixed, low Lipschitz constant)\n (2) loss function that explicitly maximizes the confidence gap (distance between largest and second-largest logits). \n (3) “the network architecture restricts confidence gaps as little as possible. We will elaborate.” \n\nThe first two conditions make plain sense. The third condition and subsequent elaborations are far too vague. What precisely is the property of restricting confidence gaps? At first glance this seems akin to the smoothness sought in property one. 
Even in the bulleted list, the authors owe the reader a clearer explanation.\n\nThe proposed model, denoted L2-nonexpansive neural networks (L2NNNs), consists of a sensible form of Lipschitz-constant-enforcing weight regularization and a loss function that penalizes small confidence gaps.\n\nTo address the third condition, the authors say only “we adapt various layers in new ways for the third condition, for example norm-pooling and two-sided ReLU, which will be presented later” which is far too vacuous. At this point, the reader is exposed to the third condition for the second time and yet it remains shrouded in mystery. The authors should elaborate here and describe what precisely, if anything, this third condition consists of. If it is not rigorously defined but only a heuristic notion, that would be fine, but this should be communicated clearly to the reader. \n\nA following paragraph introduces the notion of “preserving distance”. However, what follows is too informal a discussion, and the rigorous definition never materializes. The authors say in one place “a network that maximizes confidence gaps well must be one that preserves distance well”. In this case, why do we need the third condition at all if the second condition appears to be sufficient?\n\nIn the next sections, the authors describe the methods in greater detail and summarize their results. I have placed some more specific, minor comments in the ***small issues*** section below, but comment here on the empirical findings.\n\nOne undersold finding here is that the existing methods (including the widely-believed-to-be-robust method due to Madry 2017) that appear robust under FGSM attacks break badly under iterated attacks, and that the attacks grow stronger up to 1M iterations, bringing accuracy below 10%. \n\nIn contrast, the proposed method reaches 24% accuracy, which isn’t magnificent, but does appear to outperform the model due to Madry. A comparison against the method due to Kolter & Wong seems in order. The authors do not implement methods based on the adversarial polytope due to their present un-scalability, but that argument would be better supported if the authors were addressing larger models on harder datasets (vs MNIST and CIFAR10).\n\nIn short, I like the main ideas in this paper, although some more empirical elbow grease is in order, and the third condition needs to be discussed more rigorously or discarded. Additionally, the choice of loss function should be better justified. Why do we need the original cross-entropy objective at all? Why not directly optimize the confidence gap? Did the authors try this? Did it work? Apologies if I missed this detail. Overall, I am interested in the development of this paper and would like to give it a higher vote but believe the authors have a bit more work to do to make this an easier decision. Looking forward to reading the rebuttal.\n\n\n***Small issues***\nPage 1: “nonexpansive neural networks (L2NNN)” -- for agreement on pluralization, this should be “L2NNNs”.\n\n“They generalize better from noisy training labels than ordinary networks: for example, when 75% of MNIST training labels are randomized, an L2NNN still achieves 93.1% accuracy on the test set”\nWhen you make a claim about the accuracy of a proposed model, it must be made in reference to a standard model, even in the intro. It’s well-known in general that DNNs perform well even under large amounts of label noise. 
It's hard to say without a reference whether 93.1% represents a significant improvement.\n\nRepeated phrase on page 2:\n“How to adapt subtleties like recursion and splitting-reconvergence is included in the appendix.”\n“Discussions on splitting-reconvergence, recursion and normalization are in the appendix.”\n\nInputs to softmax cross-entropy should be both a set of logits and the label -- here the way the function is used in the notation does not match the proper function signature.\n\nFigure --- do not put “Model1, Model2, Model3, Model4”. This is unreadable. Put some short name and then define it in the caption. Once one knows the abbreviations, they should be able to look at the figure and understand it without constantly referencing the caption. \n\nTables 1-4 should be at the top of the page and arranged in a grid. This wrapfigure floating in the middle of the page, while purely a cosmetic issue that should not bear on our deliberations, tortures the template, turning the middle 80% of page 5 into a one-column page unnecessarily.\n\nTable 4 should show a comparison to the Madry model. Also, this is why you need a short name in the legend. In order to understand Table 4, the reader has to consult the caption for Tables 1 and 2. \n\n“It is an important property that an L2NNN has an easily accessible measurement on how robust its decisions are”\nI AGREE!", "It turns out that comparing logit-to-image gradients across classifiers is harder than we thought. The issue is that logits need to be scaled properly to have a meaningful comparison of gradient magnitudes, and there does not seem to be a rigorous way to do so. However, we do think that the hypothesis stated above is plausible.", "Thank you very much for the comments and for trying out our models.\n\nIt is an excellent question and one we've wondered about. Our speculation is indeed that adversarial training makes gradient descent harder, as it has an effect of flattening gradients around training data points. Madry et al. (2017) is the best we are aware of, and for the MNIST classifier, their adversarial training was so successful that we suspect that gradients are near zero in some parts of the input space, and hence it takes more iterations for an attacker to make progress. In other words, we suspect that, within many linear sections of their ReLU network, the logits have nearly flat values. The results suggest that adversarial training alone does not achieve full coverage around original-image points, and linear sections with large gradients still exist and hence bad points do exist nearby. It becomes a question of how many steps it takes before an attacker guided by gradient descent stumbles close enough to a bad point. By the trend in Table 1, it would not be surprising if 10 million iterations would knock the accuracy down further.\n\nActually, we are intrigued and will do some gradient measurements which might put some numbers behind the speculation.\n", "This paper has a thorough and well-done evaluation section. I have one question about your evaluation: while it's not directly related to your paper, I wonder if you have insights about why the model from Madry et al. (2018) appears to continuously get worse as the number of iterations increases (even up to a million iterations!). I would have expected that the number of iterations wouldn't need to get this large. Do you think the model is somehow making gradient descent harder?\n\n(Also: I would like to thank the authors for releasing their pre-trained models. 
I was able to download and evaluate these models, and so far haven't been able to reduce the robustness claims made in the paper.)", "Hi Robin,\n\nThank you very much for the reference and we will cite it in Section 2.2.\n\nIt is interesting that what we call two-sided ReLU has shown value outside the scope of adversarial robustness. Perhaps between our paper and the one you pointed out, it will become more accepted. There are a couple of differences which add to the synergy. We use two-sided ReLU for a different purpose (preserving distance for better robustness) and hence do not limit it to just convolution layers. We also propose a generalized scheme in Section 2.2 which can convert nonlinearities other than ReLU to two-sided forms which are nonexpansive and preserve distance better than the original nonlinearities.\n", "I wanted to make the authors aware that the proposal in Section 2.2:\n\"We propose two-sided ReLU which is a function from R to R^2 and simply computes ReLU(x) and ReLU(-x).\"\n\nhad been proposed before as Concatenated ReLUs: \"Understanding and Improving Convolutional Neural Networks via Concatenated Rectified Linear Units\" https://arxiv.org/abs/1603.05201\nIt seems people have also tried this scheme on ELUs already:\nhttps://github.com/openai/weightnorm/blob/dff0cd132e9c6e0a31b76cb243d47a07e0c453cc/tensorflow/nn.py#L12-L15", "Thanks for the comment and we're happy to clarify.\n\nThe commenter's thought experiment is correct. However, the conclusion is not about Lipschitz smoothness, but rather about all classifiers, including humans. Indeed it is impossible for a human to have 100% accuracy on clean MNIST images and at the same time 100% after-attack accuracy with a noise L2 norm limit of more than 2.83/2 = 1.42, because there exists a point in the input space that is a distance of 1.42 away from an image of a 4 and also 1.42 away from an image of a 9.\n\nTherefore, we would like to rephrase the commenter's question to the following. What is a reasonable goal for a robust classifier? The answer, in our opinion, is one that mimics a reasonable human. A human can have at or near 100% accuracy on clean MNIST images, but would have a different degree of confidence for each individual image. For the said two images of 4 and 9, his/her confidence should be low -- using our terminology in the paper, the confidence gap would be less than 1.42*sqrt(2) -- while for the vast majority of MNIST images, his/her confidence should be high. Consequently, his/her after-attack accuracy is less than 100%, which is perfectly fine. It's not the fault of the classifier but the fault of certain ambiguously written digits.\n\nIn this paper, we evaluate MNIST classifiers with a noise L2 norm limit of 3. This implies that an attacker can modify nine pixels from pure white to pure black or vice versa, and it can modify more pixels with smaller swings. We believe that a human would have enough confidence to defend against this noise magnitude for the vast majority of MNIST images. If we were to speculate, a human would have after-attack accuracy of over 95%. That's the goal in our opinion and not 100% after-attack accuracy.\n\nBefore our work, the state of the art was 7.6% or less, as shown in Table 1. We advance that to 24.4%. Although this is still a far cry from 95%, it is a big step up from 7.6%, and is better than all existing techniques as far as we know.\n\nBy the way, the commenter's thought experiment is closely related to why L2NNNs generalize well from noisy data with partially random labels. 
Please see Section 3.3, and in particular Table 6.\n\nPlease also see the second paragraph on page 2 about preserving distance. The noise with norm 4.8 that the commenter mentioned is an example of a distance that is likely lost, while the distance of 2.83 mentioned is an example of a distance that we want to preserve through an L2NNN.\n", "Please see Table 4 on page 6. For L_inf defense, we are on par with Madry et al. (2017) for CIFAR-10, and Madry et al. (2017) is better on MNIST.\n\nIt seems that the commenter may have misread Madry et al. (2017). They use an L_inf epsilon of 0.3 for MNIST and 8/256 for CIFAR-10, not 0.3 for both.\n\nWe disagree with how the commenter translates L_inf epsilon to L2 epsilon. As shown in Table 1, Model 2 can barely defend against an L2 epsilon of 3, not to mention 8.4. For individual examples, please see Figures 1 and 5.", "Thanks for the comment and we're happy to clarify.\n\nThe difference is not in architecture, but rather in how to regularize the last linear layer: we recommend treating the last layer as K independent filters rather than regularizing it as a single matrix. In a single-L2NNN classifier, the K logits become related to each other: for example, if one activation in the second-last layer increases by 1, with all other activations staying the same, the K logits as a vector can only change by up to an L2 norm of 1. In contrast, in a multi-L2NNN approach, each individual logit can increase or decrease by up to 1.\n\nWe empirically observe that the multi-L2NNN approach results in better robustness than viewing the whole classifier as a single L2NNN.\n\nWe also empirically observe that having split layers, i.e., final separate stacks of layers where each stack computes a single logit, helps improve the performance. In the multi-L2NNN approach, each such stack is covered by one of the K L2NNNs.", "Thanks for the comment and we're happy to clarify.\n\nModel 2's in Tables 1 and 2 were downloaded from Madry et al.'s GitHub pages (links in the footnote on page 4) and were fetched under the name \"secret\": these were released after they closed the black-box leaderboards and match what was reported in their paper.\n\nIt is true that Model 2's were trained with an L_inf adversary. However, let us quote from Madry et al. (2017): \"our MNIST model retains significant resistance to L2-norm-bounded perturbations too -- it has quite good accuracy in this regime even for epsilon=4.5.\" and \"our networks are very robust, achieving high accuracy for a wide range of powerful adversaries ...\" In other words, Madry et al. do not see the use of an L_inf attacker in training as a limiting factor to L2 defense.\n\nWe are not aware of any published defense results that beat Madry et al. (2017) as measured by any norm. Please see also Athalye et al. (2018) for a comparison between Madry et al. (2017) and a set of other defense works. We are also not aware of any published MNIST or CIFAR models that were trained with an L2 adversary and achieved sizable white-box defense.\n\nAnother fact to consider is that our Model 4's were trained with the same L_inf attacker and that improved L2 robustness as reported in Tables 1 and 2.", "It seems to me that it's not possible to achieve robustness against adversarial examples using Lipschitz smoothness without also losing the ability to classify clean data. 
This claim is problem-dependent, but applies to the MNIST dataset used in this paper.\n\nSee https://arxiv.org/pdf/1806.04169.pdf Fig 18.\nThis figure shows that there is an MNIST 4 and an MNIST 9 that are only L2 distance 2.83 apart. The figure also shows a clean 4 and a mildly noisy 4 separated by small random noise with L2 norm 4.8. If we want the classifier to be so L2-smooth that it is guaranteed to assign the same class to the clean 4 and the slightly noisy 4, then it must also assign the same class to the 4 and the 9. In other words, we'd like the function to be smooth in the vicinity of the 4 and smooth in the vicinity of the 9, but somewhere between the 4 and the 9 it needs to be significantly less smooth.\n\nDo you know of a way around this problem?", "Does this paper contain an evaluation of the model in the max norm threat model? I haven't been able to find one. My guess is that this model does not work particularly well on standard benchmarks. Madry et al. 2017 evaluate using a max norm ball of size .3. The largest L2 perturbation that fits within this max norm ball has size sqrt(.3^2*784) = 8.4. The evaluations in this paper use an L2 norm of size 1.5. My guess is that the method proposed here can't scale to the size used by previous work.", "Section 2.4 says \"For a classifier with K labels, we recommend building it as K overlapping L2NNNs, each of which outputs a single logit for one label. In an architecture with no split layers, this simply implies that these K L2NNNs share all but the last linear layer and that the last linear layer is decomposed into K single-output linear filters, one in each L2NNN\". How is this different from a normal neural network that uses a matrix with K columns for the final output layer?", "Section 3.1 compares to the classifier from Madry et al. 2017 as the state of the art. The classifier from Madry et al. was trained to resist L_infty perturbations but this work studies L_2 perturbations. Is your Model 2 in Table 1 a checkpoint downloaded from Madry et al.'s website (trained to resist L_infty) or a version of their model that has been retrained to resist L_2? Sorry if this is already explained in the paper somewhere." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "BkeP1htbxN", "H1ezI47-gV", "BJg5F5RxgV", "HygKL1jgg4", "HyeXrBUggN", "Syx6Z5egeN", "BJgb-yNJgN", "S1xf3ovhpm", "Bklz239KCX", "ryeApYUEpX", "ryxW1EcRRQ", "iclr_2019_ByxGSsR9FQ", "B1gzYu84p7", "HJgJxX5TRX", "r1e3-5Hh0m", "B1xLfvQnRm", "HkxISreiCm", "Bklz239KCX", "iclr_2019_ByxGSsR9FQ", "iclr_2019_ByxGSsR9FQ", "HylOZ5n3pX", "S1xf3ovhpm", "iclr_2019_ByxGSsR9FQ", "BJx0pEqh67", "S1lq-Lwn67", "BkxAVy9sTm", "SJgHFyIip7", "SyxZxI6op7", "rJgJYHXjTX", "H1xEPUkq6Q", "B1xJuFmipm", "iclr_2019_ByxGSsR9FQ", "HygXxYFcT7", "iclr_2019_ByxGSsR9FQ", "ryxxv85_Tm", "SJxA1llFTQ", "r1e8nj0STm", "HJgX7SEUT7", "ryxxv85_Tm", "HJgX7SEUT7", "r1lUDfqmam", "r1e8nj0STm", "Syg9SDL4a7", "B1lLWoxqnQ", "B1lLWoxqnQ", "H1eKGidtnm", "Syg_IW86hm", "Syg_IW86hm", "H1eKGidtnm", "r1lUDfqmam", "iclr_2019_ByxGSsR9FQ", "iclr_2019_ByxGSsR9FQ", "HygkWS4I3m", "H1xLD-frhX", "iclr_2019_ByxGSsR9FQ", "SJgOaOm0qX", "iclr_2019_ByxGSsR9FQ", "S1ekOTSbc7", "BJgI-hH-9Q", "rkgisqr-9Q", "BJl1rFr-9X", "iclr_2019_ByxGSsR9FQ", "iclr_2019_ByxGSsR9FQ", "iclr_2019_ByxGSsR9FQ", "iclr_2019_ByxGSsR9FQ" ]
iclr_2019_ByxPYjC5KQ
Improving Generalization and Stability of Generative Adversarial Networks
Generative Adversarial Networks (GANs) are one of the most popular tools for learning complex high dimensional distributions. However, generalization properties of GANs have not been well understood. In this paper, we analyze the generalization of GANs in practical settings. We show that discriminators trained on discrete datasets with the original GAN loss have poor generalization capability and do not approximate the theoretically optimal discriminator. We propose a zero-centered gradient penalty for improving the generalization of the discriminator by pushing it toward the optimal discriminator. The penalty guarantees the generalization and convergence of GANs. Experiments on synthetic and large scale datasets verify our theoretical analysis.
accepted-poster-papers
The paper received a unanimous accept from the reviewers (7,7,6), and is hence proposed as a definite accept.
train
[ "SklRxguMJE", "Bkgv8HHx3Q", "S1xy_MQnCQ", "S1xjW_viAX", "rJex_oBqCX", "rkxFtYUtCQ", "rJe8HnEFC7", "ByeSlr4gR7", "HJgkLVVlCX", "BJgwGQVxRQ", "HJeB6kCJpQ", "Skgb5H01TQ", "H1gwUJ0yTQ", "rJg7SvH5hQ", "S1lCpY4dn7" ]
[ "author", "official_reviewer", "official_reviewer", "author", "public", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thank you for increasing the rating. We have performed another experiment to compare the generalization of GANs. The detail of the experiment is as follows:\n\n1. Experiment setup \nDataset: We stack 3 MNIST images into an RGB image, resulting in a dataset of 1000 major modes. \nNetwork architecture: Generator and Discriminator/Critic are 2 hidden layer MLPs with 512 hidden neurons.\nA CNN classifier classifies each channel of the generated image into 1 of the 10 classes. The test set accuracy of the classifier is 99%. The classifier allows us to automatically count the number of modes in the model distribution.\nTo evaluate a generator, we generate 100,000 samples and count the number of modes in that generated dataset. On average, each mode contains 100 samples. To counter the inaccuracies in the classifier, a mode is counted as present if there are more than 10 samples fall into that mode.\n\n2. Result\nResults were averaged between 3 runs.\n\nThe number of modes after 10,000 generator iterations:\nGAN-0-GP with lambda = 500, 1 discriminator iterations per generator iteration: 943\nGAN-0-GP with lambda = 500, 5 discriminator iterations per generator iteration: 1000\nGAN-0-GP with lambda = 50, 5 discriminator iterations per generator iteration: 1000\nGAN-0-GP with lambda = 10, 5 discriminator iterations per generator iteration: 984\nWGAN-0-GP with lambda = 500, 5 critic iterations per generator iteration: 1000\nWGAN-0-GP with lambda = 50, 5 critic iterations per generator iteration: 1000\nWGAN-0-GP with lambda = 10, 5 critic iterations per generator iteration: 1000\nWGAN-1-GP with lambda = 500, 5 critic iterations per generator iteration: 847\nWGAN-1-GP with lambda = 50, 5 critic iterations per generator iteration: 996\nWGAN-1-GP with lambda = 10, 5 critic iterations per generator iteration: 1000\n\n0-GP improves the performance of the original GAN as well as WGAN. The penalty weight lambda in WGAN-1-GP (WGAN-GP) is hard to tune. Larger values of lambda make WGAN-1-GP to oscillate. WGAN-0-GP performance is consistent across various settings. It also reaches 1000 modes sooner than WGAN-1-GP.\n\nIncreasing the number of discriminator iterations per generator iteration actually improve the performance of GAN-0-GP. We would like to recall that increasing the number of discriminator iterations hurts the performance of the original GAN and GAN-0-GP-sample as it makes the discriminator overfit to the training samples. \n\nFrom the result, we can conclude that 0-GP helps improving generalization of GAN. ", "Summary: \nThe paper proposes to add to the original GAN (2014) loss a zero-centered gradient penalty as the one defined in the WGAN-GP paper. It also provides an analysis on the mode collapse and lack of stability of classical GANs. The authors compare results using their penalty on a few synthetic examples and on image net dogs generations to results using the classical GAN loss with or without gradient penalties. \n\nPositive points:\nThe paper is interesting to read and well illustrated. \nAn experiment on imagenet illustrates the progress that can be achieved by the proposed penalty.\n\nPoints to improve: \n\nIf I understood correctly, the main contribution resides in the application of the GP proposed by WGAN-GP to the original setting. Why not compare results to WGAN-GP in this case? Since the proposal of GANs, many papers addressed the mode collapse problem. WGAN-GP, VEEGAN, or Lucas et al arXiv:1806.07185, ICML 2018 to name only a few. 
\nThe related work section looks incomplete with some missing related references as mentioned above, and a copy of a segment that appears in the introduction. \nThe submission could maybe be improved by segmenting the work into intro / related / background (with clear equations presenting the existing GP) / analysis / approach / experiments.\nThe experiments on synthetic data could be improved: for reproducibility, many works on GANs used the same synthetic data as VEEGAN. \nThe ImageNet experiment lacks details. ", "I get the authors' argument about the optimal results that couldn't be shown in Fig.1 for the Wasserstein 1-GP loss, but I still think this comparison could be interesting, and encourage the authors to add it to the camera-ready version if the paper is accepted.\nNonetheless, since the authors included a comparison to Wasserstein-GP on ImageNet, I increased my rating. ", "Thank you for your comment. The encoder E could be trained jointly with D and G using an architecture similar to BiGAN [6]. \n\n[6] Adversarial Feature Learning\nJeff Donahue, Philipp Krähenbühl, Trevor Darrell", "Hi, I read your paper and I think it is quite a well-written paper.\nI would like to ask 2 questions about your paper.\n\n1. Additional information about your generation performance\nIn the PDF of your paper on OpenReview, it is hard to compare the generation results with other papers.\nIt would be better if the paper contained higher-resolution results, along with a note of what resolution they are.\n\n2. About the encoder E\nIn the paper, to penalize gradients on points in the line segment between Y(t) and X, you proposed linear interpolation between the real and fake latent codes. In Appendix F, you explained how you get the latent code ${z}_{x}$ using an encoder E; however, the details of E are not in the paper. How did you train E? It doesn't seem to be pretrained. Is it able to infer ${z}_{x}$ correctly? If so, how did you schedule the training steps? Did you train D and E separately or at once?\n\nThanks in advance.", "Thank you for pointing out the typos. We have fixed them in our latest revision. We also added a reference to Table 1 in our introduction.\nWe would like to address your other concerns as follows:\n\n1. Figure 1 aims to illustrate the effect of our 0-GP in pushing the discriminator toward the theoretically optimal discriminator $D*$ in the original GAN, not the theoretically optimal critic $C*$ in WGAN. \nIf we add WGAN-GP to the comparison in Fig. 1, it should be compared with the optimal critic, not the optimal discriminator in Fig. 1f. However, we are not aware of any closed-form formula for the Wasserstein-1 distance between two multivariate Gaussians. Therefore, WGAN-GP could not be added to Fig. 1. Also note that the critic can output values outside of the [0, 1] range, so it is not comparable to the discriminators in Fig. 1.\n\nWe also do not see the benefit of adding WGAN-GP to the comparison in Fig. 1. Our work focuses on the use of GP in improving generalization in GANs, not on the use of metric/divergence. As shown in Section 5.3, 1-GP does not help to prevent overfitting in the discriminator with the original GAN loss. Similar to the discriminator, the critic in WGAN-GP can also overfit to the data and output a distance greater than the actual Wasserstein-1 (W-1) distance between $p_g$ and $p_r$. For example, if $p_g$ and $p_r$ are both the standard normal distribution, the W-1 distance between them is 0. 
However, the critic, which is trained to maximize the distance between the two empirical distributions, can output a value greater than 0 without violating the Lipschitz constraint. More concretely, if the dataset contains a real sample $x$ and a fake sample $y$, the empirically optimal critic satisfies: \n |C(x) - C(y)| = || x - y ||,\nwhile the true distance is 0. The phenomenon makes the critic and the generator oscillate around the equilibrium [4] and slows down the improvement in sample quality. Note that WGAN-LP [5] with the corrected GP still suffers from that same problem and performs similarly to WGAN-GP on real datasets. The problem is seen in our ImageNet experiment where GAN-0-GP and GAN-0-GP-sample outperform WGAN-GP by a large margin.\n\n2. We appreciate your suggestion on improving the paper quality. We will add more experiments if time permits. However, we believe that the current experiments are sufficient to demonstrate our main contribution, which is a novel method for improving the generalization of GANs.\n\n[4] Which Training Methods for GANs do actually Converge? \nLars Mescheder, Andreas Geiger, Sebastian Nowozin.\n[5] On the regularization of Wasserstein GANs\nHenning Petzka, Asja Fischer, Denis Lukovnicov.\n", "I appreciate the authors' effort put into the revised PDF.\n\nI acknowledge the comparison with WGAN-GP in the ImageNet experiment.\nThe point that still bothers me in the main results of the paper (generalization) is the lack of comparison with WGAN-GP in Figure 1: the comparison with 1GP is displayed, but this is with a classical GAN objective, and not a WGAN loss. My insight is that the results should be better using the full WGAN-GP objective. Sorry for the last-minute request, but this doesn't seem like a difficult comparison to make. \n\nMinor but still important to reader understanding:\n- Table 1 is nice but is not referenced in the text. The caption is not very explicit either. A ref should be added for the 0-GP sample (name only introduced in Section 5). For the 1-GP a ref to WGAN-GP could be added too, with details on the full criterion in the caption.\n- Time will probably be lacking, but an interesting experiment for assessing GAN generalization could be to measure the mean reconstruction error as in\nGuo-Jun Qi. Loss-Sensitive Generative Adversarial Networks, 2017.\n\nOther comments:\np1: typos: equilibira\nGAN loss tends -> GAN loss tend\np10: add the dataset to the caption of Figure 4\nRemove URLs in the bib", "Thank you for your constructive review. We have updated our paper to address your concerns. The changes are summarized as follows:\n\n1. A background section is added with basic information about GANs and a definition of generalization. A table summarizing the referenced gradient penalties is also added.\n\n2. We extended the Related works section to include papers which address the mode collapse problem. The writing of this part and the whole paper was revised.\n\n3. Another MNIST experiment is added to Section 6.1 to further demonstrate the effectiveness of our method in preventing overfitting. Specifically, our GAN-0-GP is the only GAN that could learn to generate realistic samples when the discriminator is updated 5 times per generator update. \n\n4. WGAN-GP is included in our ImageNet experiment. Our GAN-0-GP outperforms WGAN-GP by a large margin. \n\n5. Implementation details are added to the appendix. The code for all experiments will be released after the review process.\n\n6. 
We added the analysis for the 'mode jumping' problem to Section 6.2. We showed that GAN-0-GP-sample suffers from the problem. On the other hand, our GAN-0-GP is robust to the problem and is able to produce better interpolation between modes.\n\n7. A new algorithm for finding a better path between a pair of samples is added to our paper. ", "Thank you for your review and questions. We have performed additional experiments and analysis to consolidate our findings. The updates are as follows:\n\n1. A background section is added with basic information about GANs and a definition of generalization. A table summarizing the referred gradient penalties is also added.\n\n2. We extended the Related works section to include papers which address the mode collapse problem. The writing of this part and the whole paper was revised.\n\n3. Another MNIST experiment is added to Section 6.1 to further demonstrate the effectiveness of our method in preventing overfitting. Specifically, our GAN-0-GP is the only GAN that could learn to generate realistic samples when the discriminator is updated 5 times per generator update. \n\n4. WGAN-GP is included in our ImageNet experiment. Our GAN-0-GP outperforms WGAN-GP by a large margin. \n\n5. Implementation details are added to the appendix. The code for all experiments will be released after the review process.\n\n6. We added the analysis for the 'mode jumping' problem to Section 6.2. We showed that GAN-0-GP-sample suffers from the problem. On the other hand, our GAN-0-GP is robust to the problem and is able to produce better interpolation between modes.\n\n7. A new algorithm for finding a better path between a pair of samples is added to our paper. ", "Thank you again for your suggestions. We have revised our paper to address your concerns as follows:\n\n1. A background section is added with basic information about GANs and a definition of generalization. A table summarizing the referred gradient penalties is also added.\n\n2. We extended the Related works section to include papers which address the mode collapse problem. The writing of this part and the whole paper was revised.\n\n3. Another MNIST experiment is added to Section 6.1 to further demonstrate the effectiveness of our method in preventing overfitting. Specifically, our GAN-0-GP is the only GAN that could learn to generate realistic samples when the discriminator is updated 5 times per generator update. \n\n4. WGAN-GP is included in our ImageNet experiment. Our GAN-0-GP outperforms WGAN-GP by a large margin. \n\n5. Implementation details are added to the appendix. The code for all experiments will be released after the review process.\n\n6. We added the analysis for the 'mode jumping' problem to Section 6.2. We showed that GAN-0-GP-sample suffers from the problem. On the other hand, our GAN-0-GP is robust to the problem and is able to produce better interpolation between modes.\n\n7. A new algorithm for finding a better path between a pair of samples is added to our paper. ", "Thank you for your comments. We would like to address your concerns as follows.\n\n1. We do not use the gradient penalty in WGAN-GP (1-GP) to improve the original GAN. Our 0-GP, although it has a similar form to 1-GP, is motivated from a very different perspective and produces very different effects. We assume that you find our 0-GP similar to 1-GP because of the use of the straight line from a fake to a real sample. In the response to reviewer 1, we propose a more sophisticated way to find a path from a fake to a real datapoint. 
The new method highlights the difference between our method and 1-GP.\n\n2. The 0-GP is not the only contribution of our paper. We start by analyzing the generalization of GANs, showing the problem of the original GAN loss. Although generalizability is one of the most desirable properties of generative models, it has not been studied carefully in the GAN literature. Based on our analysis, we propose 0-GP to improve the generalization of GANs. On the 8 Gaussian dataset, GAN-0-GP can generate plausible unseen datapoints on the circle, implying better generalization. We show that the original GAN loss makes GANs focus on generating datapoints in the training dataset. 0-GP-sample proposed in [4] encourages the generator to remember the training samples. That results in the mode-jumping behavior: when we perform interpolation between $z_1$ and $z_2$, the output does not smoothly transform from $x_1 = G(z_1)$ to $x_2 = G(z_2)$ but suddenly jumps from $x_1$ to $x_2$. The behavior can be seen in Figure 8 of the BigGAN paper (https://arxiv.org/abs/1809.11096).\n\n3. We will include WGAN-GP in the baselines for the sake of completeness. However, as discussed in the previous paragraphs and in our paper, WGAN-GP and its 1-GP do not address the same problem as our 0-GP. \n\nAs discussed in our paper, 1-GP does not help improve generalization in GANs. [4] even showed that 1-GP does not help WGAN (and the original GAN as well) to converge to an equilibrium. The phenomenon can be seen in our MNIST experiment, where GAN-1-GP fails to produce any realistic samples after 10,000 iterations. It has been observed that WGAN-1-GP does not converge to an equilibrium: the generator continues to map the same noise to different modes as the training continues. In our synthetic experiment, WGAN-1-GP is less robust to changes in hyper-parameters than GAN-0-GP. Detailed results will be included in our revision. Please refer to [4] for a more in-depth discussion about the non-convergence of WGAN-GP. \n\nWhen $p_g$ is the same as $p_r$, the gradient of the optimal discriminator in GAN and the optimal critic in WGAN must be 0. Any non-zero-centered GP will not help GANs to converge to the optimal equilibrium. Our 0-GP helps to improve both the generalization and convergence of GANs. Our 0-GP can be applied to WGAN as well. \n\nSimilar to the original GAN, WGAN and WGAN-GP can overfit to the dataset: the distance output by the critic can be larger than the Wasserstein distance between the two distributions. However, overfitting in WGAN and WGAN-GP is not as severe as in GAN. This is partly because the gradient in WGAN and WGAN-GP does not explode, so mode collapse is much harder to observe. \n\n4. We will include more related works in our paper. The vast body of work on GANs makes it difficult to find all related works. We only focus on some key papers on the topic.\n\nA discussion of VEEGAN and Lucas et al. will be added to our next revision. However, we want to emphasize that our work is about improving the generalization of GANs. Reducing mode collapse is related to but is not exactly the same as generalization. As in the 8 Gaussian dataset, a GAN without mode collapse is one that can generate all 8 modes. A GAN with good generalization should be able to generate unseen datapoints on the circle and to perform smooth interpolation between modes. \n\n5. We will add more details about the experiments to the appendix. The code for all experiments will be released after the review process. 
For the ImageNet experiment, we used the code from [4], which is available on GitHub. We note that [4] is a state-of-the-art method which is able to help GANs scale to massive datasets, and it is used in the BigGAN paper. \n\n6. Thank you for your suggestion about the paper layout. Adding a table that summarizes the referred gradient penalties is a good idea. \n\n", "Thank you for your review. We will revise our paper according to your suggestion. We would like to quickly address your question about the experiments here. For the MNIST and ImageNet experiments, the whole datasets were used. For the ImageNet experiment, we used the code from [4]. Details about all experiments will be added to the appendix. We thank you for pointing out the typo in Figure 3. \n\nWe will also add an in-depth discussion about our method and other related works to our next revision, as suggested by other reviewers. ", "Thank you for your constructive comments. We would like to address your concerns as follows:\n1. Generalization has been defined in [1, 2, 3]. They were cited in our paper. Because of the space limit, we could not include their definition in the first version of our paper. We will add the definition from [1] to the updated version. The definition in [1] is directly related to our discussion: if the Lipschitz constant is 0, then the network has the maximum generalization capability and no discriminative power. As stated in our paper, our gradient penalty makes the network generalizable while remaining discriminative. \n\n2. We agree that the straight line is not a good option for real data like images. However, it's the cheapest way to implement our method. We are working on an improved version of the GP, which we briefly describe below. We plan to include the result in the next revision of our paper.\n\nFor all interpolated points to be in the same set as the two endpoints, the set must be convex. $\mathrm{supp}(p_g) \cup \mathrm{supp}(p_r)$ is generally not convex, so linear interpolation in the data space cannot guarantee that every interpolated point is in the support. A solution to this problem is to force the set of latent codes $z$ to be convex and perform the interpolation in the latent space. This requires an additional encoder E to encode a datapoint $x$ to a latent code $z_x$. The process of sampling a datapoint for regularization is as follows:\n(i) Sample a noise vector $z \sim p_z$, generate a fake datapoint $y = G(z)$\n(ii) Sample a real datapoint $x \sim p_r$, get the latent code $z_x = E(x)$\n(iii) Generate the interpolated latent code: $\tilde{z} = \alpha z_x + (1 - \alpha) z$\n(iv) Generate the interpolated datapoint: $\tilde{x} = G(\tilde{z})$\n(v) Apply the gradient penalty on $\tilde{x}$\n$\tilde{x}$ is more likely to lie on the data manifold than the weighted sum of a real and a fake sample. Regularizing the gradient w.r.t. $\tilde{x}$ will allow better generalization and discrimination.\n\n3. We are not sure that your suggestion is correct. As discussed in our paper, gradient explosion tends to happen near the decision boundary, while the gradient near real/fake datapoints tends to vanish. We doubt that increasing the sampling rate near real/fake datapoints will lead to a better result. \n\n\n[1] Generalization and Equilibrium in Generative Adversarial Nets (GANs). Sanjeev Arora, Rong Ge, Yingyu Liang, Tengyu Ma, Yi Zhang.\n[2] Do GANs actually learn the distribution? An empirical study. Sanjeev Arora, Yi Zhang.\n[3] On the Discrimination-Generalization Tradeoff in GANs. 
Pengchuan Zhang, Qiang Liu, Dengyong Zhou, Tao Xu, Xiaodong He.\n[4] Which Training Methods for GANs do actually Converge? Lars Mescheder, Andreas Geiger, Sebastian Nowozin.\n", "The primary innovation of this paper seems focused on increasing the generalization of GANs, while also maintaining convergence and preventing mode collapse.\n\nThe authors first discuss common pitfalls concerning the generalization capability of discriminators, providing analytical underpinnings for their later experimental results. Specifically, they address the problem of gradient explosion in discriminators. \n\nThe authors then suggest that a zero-centered gradient penalty (0-GP) can be helpful in addressing this issue. 0-GPs are regularly used in GANs, but the authors point out that the purpose is usually to provide convergence, not to increase generalizability. Non-zero-centered penalties can give a convergence guarantee but, the authors assert, can allow overfitting. A 0-GP can give the same guarantees but without allowing overfitting to occur.\n\n\nThe authors then verify these assertions through experimentation on synthetic data, as well as MNIST and ImageNet. My only issue here is that very little information was given about the size of the training sets. Did they use all the samples? Some portion? It is not clear from reading. This would be a serious impediment to reproducibility.\n\nAll in all, however, the authors provide a convincing combination of analysis and experimentation. I believe this paper should be accepted into ICLR.\n\nNote: there is an error on page 9, in Figure 3. The paragraph explanation should list that the authors' 0-GP is Figure 3(e). They list (d) twice.\n\n", "The paper discusses the generalization capability of GANs, especially from the discriminator's perspective. The explanation is clear and the method is promising. The proposed gradient penalty method that penalizes the unseen samples is novel and reasonable from the explanation, although these methods have been proposed before in different forms. \n\nPros:\n1. Nice explanation of why the training of GANs is not stable and the modes often collapse.\n2. Experiments show that the new 0-gradient penalty method seems promising for improving the generalization capability of GANs and helps to resist mode collapse.\n\nCons:\n1. The paper does not have a clear definition of the generalization capability of the network.\n2. The straight line segment between real and fake images does not seem a good option, as the input images may live on low-dimensional manifolds. \n3. Why sample alpha in (7) uniformly? It seems the sampling rate should relate to its value. Intuitively, the closer to the real image the sampling point is, the larger the penalty should be.\n" ]
[ -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "S1xy_MQnCQ", "iclr_2019_ByxPYjC5KQ", "rkxFtYUtCQ", "rJex_oBqCX", "iclr_2019_ByxPYjC5KQ", "rJe8HnEFC7", "BJgwGQVxRQ", "rJg7SvH5hQ", "S1lCpY4dn7", "Bkgv8HHx3Q", "Bkgv8HHx3Q", "rJg7SvH5hQ", "S1lCpY4dn7", "iclr_2019_ByxPYjC5KQ", "iclr_2019_ByxPYjC5KQ" ]
iclr_2019_ByxZX20qFQ
Adaptive Input Representations for Neural Language Modeling
We introduce adaptive input representations for neural language modeling which extend the adaptive softmax of Grave et al. (2017) to input representations of variable capacity. There are several choices on how to factorize the input and output layers, and whether to model words, characters or sub-word units. We perform a systematic comparison of popular choices for a self-attentional architecture. Our experiments show that models equipped with adaptive embeddings are more than twice as fast to train as the popular character input CNN while having a lower number of parameters. On the WikiText-103 benchmark we achieve 18.7 perplexity, an improvement of 10.5 perplexity compared to the previously best published result, and on the Billion Word benchmark we achieve 23.02 perplexity.
accepted-poster-papers
There is a clear consensus among the reviews to accept this submission thus I am recommending acceptance. The paper makes a clear, if modest, contribution to language modeling that is likely to be valuable to many other researchers.
train
[ "SJgnkDkMCm", "ryloqL1G0Q", "Hke6aByfA7", "S1ePmKlW0Q", "S1gSrFe-R7", "SkxTKpJwa7", "H1xamJD0hQ", "H1lJ9-Pdn7", "HJxeCIut2Q" ]
[ "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The primary goal of the projections is to project all embeddings into the model dimension d so that we can have variable sized embeddings. Our goal was not to make the model model expressive. Compared to the rest of the model, these projections add very little overhead compared to the rest of the model. Doing without them is an interesting future direction though!", "We thank the reviewer for the comments! \n\nQ: “ADP and ADP-T runtimes were very close on WikiText-103 dataset but very different on Billion Word corpus (Table 3 and 4)”\nThe differences in training time are due to the size of the models: Weight tying saves a lot more parameters for the Billion Word model due to the larger vocab compared to the WikiText-103 models which have a smaller vocab. On WikiText-103, tying saves 15% of parameters (Table 3, ADP vs ADP-T, 291M vs 247M) and training time is reduced by about 13%. On Billion Word, tying saves 27% of parameters (Table 4) and training time is reduced by about 34%. The slight discrepancy may be due to multi-machine training for Billion Word compared to the single machine setup for WikiText-103.\n\nQ1: \"I am curious about what would you get if you use ADP on BPE vocab set?\"\nWe tried adaptive input embeddings with BPE but the results were worse than softmax. This is likely because 'rare' BPE units are in some sense not rare enough compared to a word vocabulary. In that case, the regularization effect of assigning less capacity to 'rare' BPE tokens through adaptive input embeddings is actually harmful.\n\nQ2: \"How much of the perplexity reduction of 8.7 actually come from ADP instead of the transformer and optimization?\"\nFor WikiText-103 (Table 3) we measured 24.92 on test with a full softmax model (a 5.2 PPL improvement over the previous SOTA). This corresponds to a Transformer model including our tuned optimization scheme. Adding tied adaptive input embeddings (ADP-T) to this configuration reduces this perplexity to 20.51, which is another reduction of 4.4 PPL.", "We thank the reviewer for the comments! \n\nQ: “comparing directly to Merity et al.'s approach”\nMerity et al. share the input and output embeddings via an adaptive softmax where all words have the same embedding size. We reimplemented their approach and found that it did not perform very well in our experiments (25.48 PPL; Appendix A, Table 6, last row). We found that sharing fixed size input and output embeddings for a flat softmax performs better (22.63 PPL; second to last row of Table 6). This is likely because we train all words at every time step, which is not the case for an adaptive softmax with fixed size embeddings.\n\nQ: “The discussion/explanation of the differing performance of tying or not tying each part of the embedding weights for the different datasets is confusing”\nWe updated the paper and hope that the discussion is clearer now. Thank you for the feedback!\n\nQ: “thoughts as to why full-softmax BPE is worse than adaptive softmax word level”\nFull-softmax BPE is worse because we measure perplexity on the word-level. This involves multiplying the probabilities of the individual BPE tokens. BPE token-level perplexity itself is actually significantly lower than word-level PPL (around 21.5 for GBW and around 18 for WikiText-103 for the models presented in the paper) but the two are not comparable.\n\n", "We updated the paper with the following changes:\n* Table 3 contains new (better) validation results for WikiText-103. 
Note that only the validation numbers are updated; the test results were not affected. As described in the paper, we form training examples by taking 512 contiguous words from the training data with no regard for sentence boundaries. Evaluation is the same except that we require blocks to contain complete sentences of up to 512 tokens. Previously reported validation numbers did not always contain complete sentences because samples were built the same way as during training. We have corrected this so that validation is conducted the same way as testing.\n* We also added new (and better) Billion Word results with a bigger model achieving 23.7 perplexity.\n* We added a comparison to Merity et al.'s fixed-size adaptive softmax to the Appendix (Table 6).\n* Clarified discussion around tying and not tying projections/word embeddings. \n\n", "We are planning to open source the code and pre-trained models in the future.", "Code and pre-trained models are available at http://anonymized.\n\nIt is not available; would you fix it? I am very interested in your paper.", "The authors extend an existing approach to adaptive softmax classifiers used for the output component of neural language models into the input component, once again allowing tying between the embedding and softmax. This fills a significant gap in the language modeling architecture space, and the perplexity results bear out the advantages of combining adaptively-sized representations with weight tying. While the advance is in some sense fairly incremental, the centrality of unsupervised language modeling to modern deep NLP (ELMo, BERT, etc.) implies that perplexity improvements as large as this one may have meaningful downstream effects on performance on other tasks. Some things I noticed:\n\n- One comparison that I believe is missing (I could be misreading the tables) is comparing directly to Merity et al.'s approach (adaptive softmax but fixed embedding/softmax dimension among the bands). Presumably you're faster, but is there a perplexity trade-off?\n\n- The discussion/explanation of the differing performance of tying or not tying each part of the embedding weights for the different datasets is confusing; I think it could benefit from tightening up the wording but mostly I just had to read it a couple of times. Perhaps all that's complicated is the distinction between embedding and projection weights; it would definitely be helpful to be as explicit about that as possible upfront.\n\n- The loss by frequency-bin plots are really fantastic. You could also try a scatterplot of log freq vs. average loss by individual word/BPE token.\n\n- Do you have thoughts as to why full-softmax BPE is worse than adaptive softmax word level? That goes against the current (industry) conventional wisdom in machine translation and large-scale language modeling that BPE is solidly better than word-level approaches because it tackles the softmax bottleneck while also sharing morphological information between words.\n", "This article presents experiments on medium- and large-scale language modeling when the ideas of adaptive softmax (Grave et al., 2017) are extended to input representations.\n\nThe article is well written and I find the contribution simple, but interesting. It is a reasonable and well-supported increment from the adaptive softmax of Grave et al. (2017).\n\nMy question is a bit philosophical: The only thing which I was concerned about when reading the paper is the projection of the embeddings back to the d-dimensional space. 
I understand that for two matrices A and B we have rank(AB) <= min(rank(A), rank(B)), and we are not making the small-sized embeddings richer when backprojecting to R^d, but have you thought about how it would be possible to avoid this step and keep the original variable-size embeddings?\n\nReferences\nJoulin, A., Cissé, M., Grangier, D. and Jégou, H., 2017, July. Efficient softmax approximation for GPUs. In International Conference on Machine Learning (pp. 1302-1310).", "This paper introduced a new architecture for input embeddings of neural language models: adaptive input representations (ADP). ADP allowed a model builder to define a set of bands of input words with different frequencies, where frequent words have a larger embedding size than the others. The embeddings of each band are then projected into the same size. This resulted in lowering the number of parameters. \n\nExtensive experiments with the Transformer LM on WikiText-103 and the Billion Word corpus showed that ADP achieved competitive perplexities. While tying weights with the output did not benefit the perplexity, it lowered the runtime significantly on the Billion Word corpus. Further analyses showed that ADP gained performance across all word frequency ranges.\n\nOverall, the paper was well-written and the experiments supported the claim. The paper was very clear on its contribution. The variable-size input of this paper was novel as far as I know. However, the method, particularly on the weight sharing, lacked a bit of important background on adaptive softmax. The weight sharing also needed further investigation and experimental data on sharing different parts.\n\nThe experiments compared several models with different input levels (characters, BPE, and words). The perplexities of the proposed approach were competitive with the character model, with an advantage on the training time. However, the runtimes were a bit strange. For example, ADP and ADP-T runtimes were very close on the WikiText-103 dataset but very different on the Billion Word corpus (Tables 3 and 4). The runtime of ADP also seemed to lose to BPE in terms of scaling. Perhaps the training time was an artifact of multi-GPU training. \n\nQuestions:\n1. I am curious about what you would get if you use ADP on the BPE vocab set?\n2. How much of the perplexity reduction of 8.7 actually comes from ADP instead of the transformer and optimization?" ]
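As a brief aside on the token-level vs. word-level perplexity point in the responses of this record: assuming the standard convention that a text of $N_w$ words is segmented into $N_t$ BPE tokens and that word probabilities factorize over their tokens, the two perplexities relate as

```latex
\mathrm{PPL}_{\mathrm{word}}
  = \exp\!\Big(-\tfrac{1}{N_w}\sum_{j=1}^{N_t}\log p(t_j)\Big)
  = \mathrm{PPL}_{\mathrm{token}}^{\,N_t / N_w},
```

which is why a token-level perplexity around 18 can correspond to a considerably higher word-level number, and why the two are not directly comparable.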
[ -1, -1, -1, -1, -1, -1, 7, 8, 8 ]
[ -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "H1lJ9-Pdn7", "HJxeCIut2Q", "H1xamJD0hQ", "iclr_2019_ByxZX20qFQ", "SkxTKpJwa7", "iclr_2019_ByxZX20qFQ", "iclr_2019_ByxZX20qFQ", "iclr_2019_ByxZX20qFQ", "iclr_2019_ByxZX20qFQ" ]
iclr_2019_ByxkijC5FQ
Neural Persistence: A Complexity Measure for Deep Neural Networks Using Algebraic Topology
While many approaches to make neural networks more fathomable have been proposed, they are restricted to interrogating the network with input data. Measures for characterizing and monitoring structural properties, however, have not been developed. In this work, we propose neural persistence, a complexity measure for neural network architectures based on topological data analysis on weighted stratified graphs. To demonstrate the usefulness of our approach, we show that neural persistence reflects best practices developed in the deep learning community such as dropout and batch normalization. Moreover, we derive a neural persistence-based stopping criterion that shortens the training process while achieving comparable accuracies as early stopping based on validation loss.
accepted-poster-papers
The paper presents a topological complexity measure of neural networks based on the persistent 0-homology of the weights in each layer. Some lower and upper bounds on the p-norm of the persistence diagram are derived, which lead to a normalized persistence metric. The main discovery of such a topological complexity measure is that it leads to a stability-based early stopping criterion without statistical cross-validation, as well as distinct characterizations of random initialization, batch normalization, and dropout. Experiments are conducted with simple networks on the MNIST, Fashion-MNIST, CIFAR-10, and IMDB datasets. The main concerns from the reviewers are that the experimental studies are still preliminary and the understanding of the interesting observed phenomena is still premature. The authors make comprehensive responses to the raised questions with new experiments, and some reviewers raise their ratings. The reviewers all agree that the paper presents a novel study of neural networks from an algebraic topology perspective, with interesting results that have not been seen before. The paper is thus suggested to be a borderline lean accept.
train
[ "Syl--iQzJE", "HygcYhVqnm", "ByxpGah6R7", "BJgk_yOORX", "B1gkIaPO0X", "rket7Pu_Rm", "HJxw_vuu0m", "B1xgWZA02m", "r1x6sUQ6nm", "S1eODZ5u5Q", "BJe77QJ_qX" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "public" ]
[ "Thank you for this very positive change! We aim to update the discussion concerning Fig. 11 in a revision of the paper.", "The authors, motivated by work in topological graph analysis, introduce a new broadly applicable complexity measure they call neural persistence--essentially a sum over norms of persistence diagrams (objects from the study of persistent homology). The also provide experiments testing their parameter, primarily on MNIST with some work on CIFAR-10.\n\nI'd like to preface my criticism with the following: this work is extremely compelling, and the results and experiments are sound. I'm very interested to see where this goes. Figure 2 is particularly compelling!\n\nThat said, I am extremely suspicious of proposals for measures of generalization which (1) do not make contact with the data distribution being studied, and (2) which are only tested on MNIST and CIFAR-10. Additionally, (3) it is not clear what a \"good\" neural persistence is, a priori, and (4) I'm not entirely sure I agree with the author's assessment of their numerical data.\n\nIn more detail below:\n\n1. At this point, there's a tremendous number of different suggested ways to measure \"generalization\" by applying different norms and bounds and measures from all of the far reaches of mathematics. A new proposed measure **really needs** to demonstrate a clear competitive measure against other candidates. The authors make a strong case that this measure is better than competitors from TGA, but I'm not yet convinced this measure is doing enough legwork. For example, is it possible that a network has high neural persistence, but still has terrible test or validation error? Why or why not? Are there obvious counterexamples? Are there reasons to think those obvious counterexamples aren't like trained neural networks? These are all crucial questions to ask and answer if you want this sort of measure to be taken seriously.\n\n2. Most of your numerical experiments were on MNIST, and MNIST is weird. It's getting to be a joke now in the community that your idea works on MNIST, but breaks once you try to push it to something harder. Even Cifar-10 has its quirks, and observations that are true of some networks absolutely do not generalize to others.\n\n3. While I'm convinced that neural persistence allows you to distinguish between networks trained in different ways, it isn't clear why I should expect a particular neural persistence to mean anything at all w.r.t. validation loss. Are there situations in which the neural persistence has stopped changing, but validation loss is still changing appreciably? Why or why not?\n\n4. I'm concerned that the early stopping procedure used as a benchmark wasn't tuned as carefully as neural persistence was. I also honestly cannot determine anything from Figure 4 except that your \"Fixed\" baseline is bad, and that persistence seems to do about the same as validation loss. It even seems that Training loss is a better early stopping criteria (better than both validation and persistence!) from this plot, because it seems to perform just as well, and systematically stop earlier. Am I reading this plot right (particularly for 1.0 fraction MNIST)?\n\n\nThis work currently seems like a strong candidate for the workshop track. 
I would have difficulty raising my score much above a 6 without much more numerical data and analysis of when the measure fails.\n\nEdit: The authors have made a significant effort to address my concerns, and I'm updating my score to 7 from 5 in response.", "I thank the authors for their extensive work to address my concerns. I've updated my score to a 7 in response.\n\nOne small concern is that I'm not sure the claims regarding Fig. 11 are entirely justifiable given the fairly large error bars. This is a comparatively minor point, but softening the language / more empirical data to reduce the error bars would be great.", "We would like to thank the reviewers for their valuable insights and remarks, which we address individually below. We significantly extended the paper and the supplementary materials, focusing particularly (as suggested by reviewers 1 and 3) on providing a thorough analysis and evaluation of our early stopping criterion. Moreover, as recommended by R1, we now discuss additional data sets. Due to the requested changes, we updated Section 4.2 and show the ‘Fashion-MNIST’ data set (in the main paper), while describing results for other data sets in the appendix.\n\nTo summarize the changes:\n\n- In Section 4.2, we conducted a detailed analysis of our early stopping criterion for different sets of hyperparameters and different data sets (three image classification data sets: MNIST, CIFAR-10, Fashion-MNIST, and one text classification dataset: IMDB). Our stopping criterion generalizes well to other situations and is competitive with validation loss-based stopping criteria.\n\n- We extended the theoretical section (Section A.4) to include details of normalized neural persistence computation on convolutional layers. Moreover, we describe preliminary experiments about early stopping based on convolutional layers.\n\n- We describe properties and limitations of the measure (Section A.5): initialization of networks with high neural persistence does not, as expected, correlate with higher accuracy, for example.\n\n- We describe the behaviour of neural persistence for deep architectures as well as its\nrelationship with the learned data distribution (Section A.6 in the appendix).\n\n- We extended the discussion of early stopping in scenarios with scarce data to show that our measure behaves as expected, i.e. it stops earlier when overfitting can occur, and it stops later when longer training is beneficial (Section A.7).\n\nIndividual answers to your review:\n\nWe’d like to address the four points you raised as follows:\n\n> At this point, there's a tremendous number of different suggested ways to measure \"generalization\" [...]. A new proposed measure **really needs** to demonstrate a clear competitive advantage over other candidates. The authors make a strong case that this measure is better than competitors from TGA, but I'm not yet convinced this measure is doing enough legwork. For example, is it possible that a network has high neural persistence, but still has terrible test or validation error? Why or why not? Are there obvious counterexamples? [...]\n\nFirst, Section A.6 shows that neural persistence makes contact with the data distribution. We also extended the evaluation: Section A.5 analyses situations where high neural persistence and terrible validation error co-occur. In brief, these situations can only occur after artificially initializing the network with high neural persistence. 
Section A.7 shows the legwork: compared to validation loss, neural persistence stops earlier when overfitting can occur and it stops later when longer training is beneficial.\n\n> Most of your numerical experiments were on MNIST, and MNIST is weird. [...]\n\nWe acknowledge your concerns, so we added another image data set (Fashion-MNIST) and a text classification data set (IMDB Large Movie Review Dataset). Figure 4 and Section A.3 show a quantitative summary over all data sets and networks, including CNNs. We observed that our early stopping criterion is competitive in all scenarios.\n\n> While I'm convinced that neural persistence allows you to distinguish between networks trained in different ways, it isn't clear why I should expect a particular neural persistence to mean anything at all w.r.t. validation loss. Are there situations in which the neural persistence has stopped changing, but validation loss is still changing appreciably? Why or why not?\n\nA “good” neural persistence is not defined a priori, and this is why we couple the stabilization of our measure with the training procedure. Figure A.12 (right-hand side) shows that for noisy labels, neural persistence stabilizes early while validation loss is still decreasing.\n\n> I'm concerned that the early stopping procedure used as a benchmark wasn't tuned as carefully as neural persistence was. I also honestly cannot determine anything from Figure 4 except that your \"Fixed\" baseline is bad, and that persistence seems to do about the same as validation loss. [...] Am I reading this plot right (particularly for 1.0 fraction MNIST)?\n\nFor a qualitative evaluation of all parameter choices of the stopping criteria, please see Figure 4, where neural persistence shows competitive performance for all scenarios. Section A.7 extends the findings on freeing validation data to a second data set and to a second real-world scenario. This includes the following point: training loss stops earlier than neural persistence, with only slightly lower performance on the test set. However, training loss fluctuates when varying the batch size and has no theoretical guarantees. (You were reading the plot right.)", "We would like to thank the reviewers for their valuable insights and remarks, which we address individually below. We significantly extended the paper and the supplementary materials, focusing particularly (as suggested by reviewers 1 and 3) on providing a thorough analysis and evaluation of our early stopping criterion. Moreover, as recommended by R1, we now discuss additional data sets. Due to the requested changes, we updated Section 4.2 and show the ‘Fashion-MNIST’ data set (in the main paper), while describing results for other data sets in the appendix.\n\nTo summarize the changes:\n\n- In Section 4.2, we conducted a detailed analysis of our early stopping criterion for different sets of hyperparameters and different data sets (three image classification data sets: MNIST, CIFAR-10, Fashion-MNIST, and one text classification dataset: IMDB). Our stopping criterion generalizes well to other situations and is competitive with validation loss-based stopping criteria.\n\n- We extended the theoretical section (in the supplementary materials, Section A.4) to include details of normalized neural persistence computation on convolutional layers. 
Moreover, we describe preliminary experiments about early stopping based on convolutional layers.\n\n- We describe properties and limitations of the measure (in the supplementary materials, Section A.5): initialization of networks with high neural persistence does not, as expected, correlate with higher accuracy, for example.\n\n- We describe the behaviour of neural persistence for deep architectures as well as its\nrelationship with the learned data distribution (Section A.6 in the appendix).\n\nWe extended the discussion of early stopping in scenarios with scarce data to show that our measure behaves as expected, i.e. it stops earlier when overfitting can occur, and it stops later when longer training is beneficial (Section A.7).\n\nIndividual answers to your review:\n\nThanks for the comments and suggestions! In light of your review, we have restructured the paper to highlight certain details more prominently.\n\n> [...] This raises other related questions: what if you just take all the weights of all edges? [...] I am worried that the p-norms of these edge sets might have the same effect; they converge as the training converges.\n\nWe clarify this relationship in Section 3.1. Our measure is linked to the $p$-norm of the top $n$ edges (whose linear transformation is in fact used for our lower bound in Theorem 2) and is therefore also somewhat related to the $p$-norm of all the weights. We originally discussed the $p$-norm of all weights in Section 4.2; we have since moved this item to the discussion to make it more prominent.\n\nThe $p$-norm of all weights is not a valid early stopping criterion, as it was never triggered earlier in our experiments (it only works for perceptrons); our neural persistence measure is thus an earlier marker of training convergence.\n\n> The experiment is not quite convincing. For example, what if we stop the training as soon as the improvement of validation accuracy slows down (converges with a much looser threshold)? [...] \n\nAs a response to the reviews, we performed a broader evaluation of the early stopping criterion. Concerning the looser threshold, we discuss in the paper (Section 4.2) that we refrained from tuning this parameter as it is highly sensitive to the scale of the monitored measure (making it harder to compare fairly). Currently, our evaluation follows the parameters of the Keras callback, which sets `min_delta` to zero by default.\n\n> Some other ideas/experiments might be worth exploring: taking the persistence over the whole network rather than layer-by-layer, what happens with networks with batch-normalization or dropout?\n\nWe initially considered integrating persistence of the whole network but left it for future work: preliminary results showed some promise but also indicated the need for a more complicated setup: we need to account for higher-order topological features and ensure that features are not being \"masked\" due to different scales. When using information from the whole network in a straightforward extension of our method, we observe that this is sufficient for describing shallow networks only. A new experiment on neural persistence for deeper architectures (Section A.5) shows that variability is not a simple function of depth, making it clear that the contributions of each layer need to be carefully considered if we want to have one filtration over the whole network. 
It is possible that Dowker complexes (https://arxiv.org/pdf/1608.05432.pdf) may be useful here because they are capable of capturing directions, but due to time constraints we were unable to pursue this idea during the rebuttal phase.\n\nAs for batch normalization and dropout, we observed in Fig. 3 of the paper that while there is almost no difference between regular training and batch normalization, dropout generally increases the measured neural persistence.", "We would like to thank the reviewers for their valuable insights and remarks, which we address individually below. We significantly extended the paper and the supplementary materials, focusing particularly (as suggested by R1/R3) on providing a thorough analysis/evaluation of our early stopping criterion. As recommended by R1, we discuss additional data sets. Given the requested changes, we updated Section 4.2 and show the ‘Fashion-MNIST’ data set in the paper, while describing other results in the appendix.\n\nSummary of changes:\n\n- In Section 4.2, we added a detailed analysis of our early stopping criterion for different parameters and data sets (MNIST, CIFAR-10, Fashion-MNIST, IMDB). Our criterion generalizes well and is competitive with validation loss-based stopping criteria.\n\n- We added a theoretical section (Section A.4) on conv layers, and describe preliminary experiments about early stopping for CNNs.\n\n- We describe properties and limitations (Section A.5): initialization of networks with high neural persistence does not, as expected, correlate with higher accuracy, for example.\n\n- We describe the behaviour of neural persistence for deep architectures plus its relationship with the data distribution (Section A.6).\n\n- We extended the discussion of early stopping in data-scarce scenarios: we stop earlier when overfitting can occur, and we stop later when longer training is beneficial (Section A.7).\n\nIndividual answers to your review:\n\nThanks for your comments and questions! We prepared a new set of experiments to address them.\n\n> [...] how this could be generalized, e.g., to convolution layers [...]\n\nWe updated the appendix with a Section on conv layers (A.4). Two observations arise: (1) there's a computational issue; 'unrolling' each filter into a weight matrix requires more time than for FCNs; we sketch a new approximate algorithm. (2) our filtration focuses on the edge neighbourhood of vertices (neurons), which is relatively redundant in CNNs, so our current method does not capture the relevant topology of a CNN. We hypothesize that we should include activations and plan to investigate this in future work.\n\nDespite these hurdles, we added a new experiment on CNNs: we unrolled each conv filter into a graph, computing our measure per filter, and summed over all filters of a layer (corresponding to our setting in FCNs). We exploit the redundancy of filter values, which leads to a simplification of our filtration and an approximation of NP (see Algorithm 3).\n\nIn our early stopping experiments, we see that our measure performs better at early stopping on FCNNs as compared to CNNs, which empirically confirms our theoretical scepticism towards directly applying our edge-focused filtration to CNNs.\n\n> I do think that the results on MNIST are convincing; however, already on CIFAR-10 [...] seems to be very sensitive to the choice of g [...]. 
So, this raises the obvious question of how this behaves for larger networks with more layers and larger datasets.\n\nWe have updated our previous setup to be more inclusive. We perform an extensive evaluation for early stopping on different data sets. For CIFAR-10, we can show that both measures (val. loss and neural pers.) are sensitive to the parameters. Figure A.5 shows that there are good scenarios for each of the measures. Out of all the data sets, our epoch/accuracy differences are worst here; we are comparable, if not better than, validation loss in terms of parameter sensitivity, though. Plus, the new experiment permits us to link performance back to the training itself (Figure A.6). There appears to be a relation between the mediocre training performance of an FCN on CIFAR-10 and the mediocre early stopping behaviour. In the future, we'd like to properly extend our method to CNNs; this has challenges that need to be addressed outside the scope of this paper.\n\n> Specifically, I would be interested in having fully-connected networks with more layers [...]\n\nWe have performed additional experiments for deeper neural networks: we observe an interesting relationship between network depth and the variability of our measure (Section A.6).\n\n> What is the subscript $d$ in $\mathcal{D}_d$ intended to denote?\n\nThis indicates the dimension of the corresponding persistence diagram.\n\n> In Thm. 1 - why should $\phi_k$ be unique? [...]\n\nWe require only one function that returns the current weight of an edge. We have since removed the adjective for clarity.\n\n> End of Sec. 4. - \"it is beneficial to free validation data ...\" - What does that mean?\n\nIn deep learning applications, where sample size is critical, it can make a difference to free X% of the samples normally used for validation and use them for training instead. Accuracy and generalization capabilities were shown to be highly dependent on the amount of training data. We have since rewritten that section (end of Section 4.2) and included a new set of experiments (Section A.7 in the appendix).", "The paper proposes the notion of \"neural persistence\", i.e., a topological measure to assign scores to fully-connected layers in a neural network. Essentially, a simplicial complex is constructed by considering neurons as 0-simplices and connections as 1-simplices. Using the (normalized) connection weights then facilitates defining a filtration. Persistent homology (for 0-dim. homology groups) then provides a concise summary of the evolution of the 0-dim. features over the filtration in the form of a barcode. The p-norm of the persistence diagram (containing points (1,w_i)) is then used to define the \"neural persistence\" NP(G_k) of a layer G_k; this measure is averaged over all layers to obtain one final neural persistence score. Thm. 1 establishes lower and upper bounds on NP(G_k); experiments show that neural persistence, measured for small networks on MNIST, aligns well with previous observations that batch-norm and dropout are beneficial for generalization and training. Further, neural persistence can be used as an early stopping criterion without having to rely on validation data.\n\nOverall, I think this is an interesting and well-written paper with a good overview of related work in terms of using TDA approaches in machine learning. The theoretical aspects of the work (i.e., the bounds) are fairly obvious. 
The bounds \nare required, though, for proper normalization. Using 0-dim. persistent homology is also appropriate in this context, as I tend to agree with the authors that this aspect is the most interesting one (and also the only computationally feasible one if this needs to be done during training). \n\nThe only major concern at this point is the experimental evaluation on small fully-connected networks. \nWhile reading the paper, I was wondering how this could be generalized, e.g., to convolution layers, as the strategy seems to be applicable in this context as well. I do think that the results on MNIST are convincing; however, already on CIFAR-10 the early stopping criterion seems to be very sensitive to the choice of g (from what I understood). So, this raises the obvious question of how this behaves for larger networks with more layers and larger datasets. If the contribution boils down to a confirmation that dropout and batch-norm are beneficial, this would substantially weaken the paper. Specifically, I would be interested in having fully-connected networks with more layers (possibly fewer neurons per layer). Maybe the authors can comment on that or perform experiments along this direction.\n\nMinor comments:\n\n- What is the subscript $d$ in $\mathcal{D}_d$ intended to denote?\n- In Thm. 1 - why should $\phi_k$ be unique? This is not the only choice?\n- End of Sec. 4. - \"it is beneficial to free validation data ...\" - What does that mean?", "The paper proposes to analyze the complexity of a neural network using its zero-th persistent homology. Each layer is considered a bipartite graph with edge weights. As edges are added in monotonically decreasing order, each time a connected component is merged with others, this is recorded as a new topological feature. The persistence of each topological feature is measured as the weight difference between the new edge and the maximal weight (properly normalized). Experiments show that by monitoring the p-norm of these persistence values one can stop the training a few epochs earlier than the validation-error-based early stopping strategy, with only slightly worse test accuracy.\n\nThe proposed idea is interesting and novel. However, it needs a lot of improvement for the following reasons.\n\n1) The proposed idea can be explored much deeper. Taking a closer look, these zero-th persistence values are really the weights of the maximum spanning tree (with some linear transformation). So the proposed complexity measure is really the p-norm of the MST. This raises other related questions: what if you just take all the weights of all edges? What if you take the optimal matching of the bipartite graph? How about the top K edges? I am worried that the p-norms of these edge sets might have the same effect; they converge as the training converges. These different measurements should at least be experimentally compared in order to show that the proposed idea is crucial. \n\nNote also that most theoretical proofs are straightforward based on the MST observation.\n\n2) The experiment is not quite convincing. For example, what if we stop the training as soon as the improvement of validation accuracy slows down (converges with a much looser threshold)? Wouldn’t this have the same effect (stop slightly earlier with only slightly worse testing accuracy)? 
Also shouldn’t the aforementioned various alternative norms be compared as well?\n\n3) Some other ideas/experiments might be worth exploring: taking the persistence over the whole network rather than layer-by-layer, what happens with networks with batch-normalization or dropout?\n\n\n\n", "Thanks for pointing this out! At the time we wrote this paper, the article that you mentioned was not published in a peer-reviewed venue. Is there a more recent version that we missed?", "To whom it may concern,\n\nIt appears that this paper is missing a relevant citation for recent work in measures of capacity for neural networks using algebraic topology: \"On Characterizing the Capacity of Neural Networks using Algebraic Topology\" ( https://arxiv.org/abs/1802.04443 ).\n\n" ]
[ -1, 7, -1, -1, -1, -1, -1, 6, 4, -1, -1 ]
[ -1, 4, -1, -1, -1, -1, -1, 5, 4, -1, -1 ]
[ "ByxpGah6R7", "iclr_2019_ByxkijC5FQ", "BJgk_yOORX", "HygcYhVqnm", "r1x6sUQ6nm", "B1xgWZA02m", "BJe77QJ_qX", "iclr_2019_ByxkijC5FQ", "iclr_2019_ByxkijC5FQ", "BJe77QJ_qX", "iclr_2019_ByxkijC5FQ" ]
iclr_2019_Byxpfh0cFm
Efficient Augmentation via Data Subsampling
Data augmentation is commonly used to encode invariances in learning methods. However, this process is often performed in an inefficient manner, as artificial examples are created by applying a number of transformations to all points in the training set. The resulting explosion of the dataset size can be an issue in terms of storage and training costs, as well as in selecting and tuning the optimal set of transformations to apply. In this work, we demonstrate that it is possible to significantly reduce the number of data points included in data augmentation while realizing the same accuracy and invariance benefits of augmenting the entire dataset. We propose a novel set of subsampling policies, based on model influence and loss, that can achieve a 90% reduction in augmentation set size while maintaining the accuracy gains of standard data augmentation.
accepted-poster-papers
The paper proposes several subsampling policies that achieve a clear reduction in the size of the augmented data while maintaining the accuracy of standard data augmentation. The paper in general is clearly written and easy to follow, and provides sufficiently convincing experimental results to support the claim. After reading the authors' response and revision, the reviewers have reached a general consensus that the paper is above the acceptance bar.
test
[ "HylD3z4ch7", "SkgnhejSC7", "SJgIeO84CQ", "r1xc5LLE0m", "Hkldxs7qnX", "SyxMijTQhX" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary: The authors study the problem of identifying subsampling strategies for data augmentation, primarily for encoding invariances in learning methods. The problem seems relevant with applications to learning invariances as well as close connections with the covariate shift problem. \n\nContributions: The key contributions include the proposal of strategies based on model influence and loss as well as empirical benchmarking of the proposed methods on vision datasets. \n\nClarity: While the paper is written well and is easily accessible, the plots and the numbers in the tables were a bit small and thereby hard to read. I would suggest the authors to have bigger plots and tables in future revisions to ensure readability. \n\n>> The authors mention in Section 4.1 that \"support vector are points with non-zero loss\": In all generality, this statement seems to be incorrect. For example, even for linearly separable data, a linear SVM would have support vectors which are correctly classified. \n\n>> The experiment section seems to be missing a table on the statistics of the datasets used: This is important to understand the class distribution in the datasets used and if at all there was label imbalance in any of them. It looks like all the datasets used for experimentation had almost balanced class labels and in order to fully understand the scope of these sampling strategies, I would suggest the authors to also provide results on class imbalanced datasets where the distribution over labels is non-uniform. \n\n>> Incomprehensive comparison with benchmarks: \na) The comparison of their methods with VSV benchmark seems incomplete. While the authors used the obtained support vectors as the augmentation set and argued that it is of fixed size, a natural way to extend these to any support size is to instead use margin based sampling where the margins are obtained from the trained SVM since these are inherently margin maximizing classifiers. Low margin points are likely to be more influential than high margin points.\nb) In Section 5.3, a key takeaway is \"diversity and removing redundancy is key in learning invariances\". This leads to possibly other benchmarks to which the proposed policies could be compared, for example those based on Determinantal point processes (DPP) which are known for inducing diversity in subset selection. There is a large literature on sampling diverse subsets (based on submodular notions of diversity) which seems to be missing from comparisons. Another possible way to overcome this would be to use stratified sampling to promote equal representation amongst all classes. \nc) In Section 2, it is mentioned that general methods for dataset reduction are orthogonal to the class of methods considered in this paper. However, on looking at the data augmentation problem as that of using fewest samples possible to learn a new invariance, it can be reduced to a dataset reduction problem. One way of using these reduction methods is to use the selected set of datapoints as the augmentation set and compare their performance. This would provide another set of benchmarks to which proposed methods should be compared.\n\n>> Accuracy Metrics: While the authors look at the overall accuracy of the learnt classifiers, in order to understand the efficacy of the proposed sampling methods at learning invariances, it would be helpful to see the performance numbers separately on the original dataset as well as the transformed dataset using the various transformations. 
\n\n>> Experiments in other domains: The proposed schemes seem to be general enough to be applicable to domains other than computer vision. Since the focus of the paper is the proposal of general sampling strategies, it would be good to compare them to baselines on other domains possibly text datasets or audio datasets. ", "Thank you for your detailed review and feedback. We’ve updated the paper to address your feedback on the datasets, subset selection method, and margin-based approach. We summarize these edits and address remaining comments below.\n\n[Dataset statistics] We’ve added the dataset class statistics in the Appendix under “Experiment Details”. NORB is balanced and the other two datasets are slightly imbalanced. We have also made the plots/tables slightly larger to improve readability as per your suggestion.\n\n[Subset selection] We agree that diversity-inducing subset selection techniques have the potential to be useful in this setting, as mentioned in our discussion section. We have included two sets of experiments with simple subset selection techniques: (1) a stratified sampling approach using k-means clustering, and (2) an implementation of augmentation with DPPs. The DPP approach used both bottleneck features alone and bottleneck features combined with influence. While these methods improve upon random sampling, they generally don’t outperform the performance of the proposed greedy influence/loss based approach. When combined in conjunction with influence/loss, however, they can obtain competitive performance compared to our proposed method (though with additional costs). Please find full results on these experiments in Appendix H and I. Thank you for this suggestion.\n\n[Margin-based approach] Prior work on VSVs considered this method only for a fixed set of support vectors. A contribution of our work is to draw on this prior art but note that metrics such as influence and loss are both more generally applicable and also allow the sampling to be considerably more flexible. For completeness, we have also included the margin-based approach that you suggest, which is a more direct generalization of VSV. However, this method does not improve upon the loss/influence approach. Results are provided in Appendix F.\n\n[Accuracy metrics] In our experiments, we transform the data simply to highlight the impact of augmentation, which allows us to illustrate the effect of subsampling strategies more clearly.", "Thank you for your thoughtful review.\n\n[Efficiency of approach] First, we note that there are benefits of our approach beyond efficiency. Determining the correct set of augmentations to apply is often a manual and time-consuming process, and applying augmentations to a small set of points can help to make this approach more user-friendly and interpretable (as more sophisticated, data-point specific augmentations can be applied, and general augmentations can be more readily diagnosed).\n\nIn terms of efficiency alone, we also note that selecting augmentations is often not a one-shot process: it may involve continually re-training a model and evaluating held-out accuracy to determine the best set of transformations. 
Therefore the efficiency improvements that result from reducing the dataset size may be compounded over multiple iterations.\n\nWith regards to just a single application of data augmentation: For a known set of augmentations, the expected dataset reduction of our approach is (n_original + n_augmentations*sample_size) compared to (n_original + n_augmentations*n_original). As training time is linear to superlinear in the dataset size, this can provide a rough estimate of the time savings depending on the size of the sample.\n\nHowever, we understand that true efficiency savings can vary somewhat depending on the implementation of interest. For completeness, we have therefore performed an empirical study to estimate the practical efficiency of our approach in relation to the number of augmentations applied. These experiments, performed in TensorFlow, show a linear relationship between the number of training examples and the time per epoch. Full results are provided in Appendix G.\n\n[Two-stage approach] Although for CIFAR and NORB we freeze earlier layers, note that for MNIST, we fully retrain the model (as stated in the first paragraph in Sec 5). We have thus explored both settings. The two-stage approach can be viewed as an extension to classical feature extraction techniques (e.g., SIFT, HOG). An example common in natural language processing is word embeddings, which can be learned in a one-shot approach (e.g., neural language models) or used in a two-stage approach (e.g., Word2Vec with a classifier). A similar example can be seen in vision with Face Embeddings (Schroff et al., CVPR 2015). Also note that the model is trained once, and then can be used for continual improvements. For large datasets, it may be impractical to retrain a full deep network for every modification to the experiment.\n\n[Limited empirical studies, understanding of policies] We disagree that the empirical studies performed are limited in nature, or that we have made little effort to understand the policies. We have explored not only the proposed influence-based approach across several datasets, but have also explored the surrounding design space -- including several natural variants of the method (e.g., updating, re-weighting, loss). This set has now been expanded even further to consider diversity-inducing techniques. In our experiments, we have been careful to compare against natural baselines and related work (such as the VSV method and random sampling). In terms of developing an understanding of our approach, we provide an early analysis (Section 4.1) that explains why we expect the method to work, and then validate this intuition in our experiments (Section 5.1-5.2) and exploration of the resulting samples (Section 5.3 and Appendix E).
The authors propose to use influence or loss-based methods to select a small subset of points to use in augmenting data sets for training models where the loss is additive over data points, and investigate the performance of their schemes when logistic loss is used over CNN features. Specifically, they propose selecting which data points to augment by either choosing points where the training loss is high, or where the statistical influence score is high (as defined in Koh and Liang 2017). The cost of their method is that of fitting an initial model on the training set, then fitting the final model on the augmented data set.\n\nThey compare to reasonable baselines: no augmentation, augmentation by transforming only a uniformly randomly chosen portion of the training data, and full training data augmentation; and show that augmenting even 10% of the data with their schemes can give loss competitive with full data augmentation, and lower than the loss achievable with no augmentation or augmentation of a uniformly randomly chosen portion of the data of similar size. Experiments were done on MNIST, CIFAR, and NORB.\n\nThe paper is clearly written, the idea is intuitively attractive, and the experiments give convincing evidence that the method is practically useful. I believe it will be of interest to a large portion of the ICLR community, given the usefulness of data augmentation.", "This paper considers how to augment training data by applying class-preserving transformations to selected datapoints.\nIt proposes improving random datapoint selection by selection policies based on two metrics: the training loss \nassociated with each datapoint (\"Loss\"), and the influence score (from Koh and Liang, which approximates leave-one-out test loss). The authors consider two policies based on these metrics: apply transformations to training points in decreasing \norder of their score, or to training points sampled with probability proportional to score. They also consider two \nrefinements: downweighting observations that are selected for transformation, and updating scores every time \ntransformations associated with an observation are added. \n\nThe problem the authors tackle is important and their approach is natural and promising. On the downside, the theoretical \ncontribution is moderate, and the empirical studies quite limited. \n\nThe stated goals of the paper are quite modest: \"In this work, we demonstrate that it is possible to significantly reduce the \nnumber of data points included in data augmentation while realizing the same accuracy and invariance benefits of \naugmenting the entire dataset\". It is not too surprising that carefully choosing observations according to suitable policies \nis an improvement over random subsampling, especially when the test data has been \"poisoned\" to highlight this effect. \nThe authors have demonstrated that two intuitive policies do indeed work, and have quantified this on 3 datasets. \n\nHowever, they do not address the important question of whether doing so can improve training time/efficiency. In other words, the authors have not attempted to investigate the computational cost of trying to assign importance scores to each observation. Thus this paper does not really demonstrate the overall usefulness of the proposed methodology.\n\nThe experimental setup is also limited to (I think) favor the proposed methodology. Features are precomputed on images using a CNN, and the different methods are compared on a logistic regression layer acting on the frozen features.
The existence of such a pretrained model is necessary for the proposed methods, otherwise one cannot assign selection scores to different datapoints. However, this is not needed for random selection, where the transformed inputs can directly be input to the system. A not unreasonable baseline would be to train the entire CNN with the augmented 5%, 10%, 25% datasets, rather than just the last layer. Of course this now involves training the entire CNN on the augmented dataset, rather than just the last layer, but how relevant is the two-stage training approach that the authors propose?\n\nIn short, while I think the proposed methodology is promising, the authors missed a chance to include a more thorough analysis of the trade-offs of their method.\n\nI also think the paper makes only a minimal effort to understand the policies; the experiments could have helped shed some more light on this.\n\nMinor point:\nThe definition of \"influence\" is terse; e.g., I do not see the definition of H anywhere (the Hessian of the empirical loss)" ]
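Editor's note: the selection policies debated in the reviews and responses above reduce to a simple recipe -- fit an initial model, score every training point (by its training loss, or by the Koh & Liang influence estimate of leave-one-out test loss), then augment either the top-scoring points or points sampled with probability proportional to score. The sketch below illustrates only that recipe; it is not the authors' implementation, and `score_fn` and `aug_fn` are hypothetical placeholders for a per-example loss/influence estimator and a label-preserving transform.

```python
import numpy as np

def select_for_augmentation(X, y, score_fn, k, policy="topk", seed=0):
    """Pick k points to augment using the loss/influence policies discussed above."""
    rng = np.random.default_rng(seed)
    scores = np.asarray([score_fn(x, t) for x, t in zip(X, y)])  # loss or influence per point
    if policy == "topk":
        return np.argsort(-scores)[:k]        # deterministic: highest scores first
    p = scores / scores.sum()                  # stochastic: sample proportional to score
    return rng.choice(len(X), size=k, replace=False, p=p)

def augment_dataset(X, y, idx, aug_fn):
    """Append one transformed copy of each selected point; labels are preserved."""
    X_new = np.stack([aug_fn(X[i]) for i in idx])
    return np.concatenate([X, X_new]), np.concatenate([y, y[idx]])
```

The diversity-inducing variants discussed in the author response (stratified k-means sampling, DPP selection) would replace only the selection step above, leaving the augmentation step unchanged.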
[ 6, -1, -1, -1, 7, 6 ]
[ 4, -1, -1, -1, 4, 3 ]
[ "iclr_2019_Byxpfh0cFm", "HylD3z4ch7", "SyxMijTQhX", "Hkldxs7qnX", "iclr_2019_Byxpfh0cFm", "iclr_2019_Byxpfh0cFm" ]
iclr_2019_ByzcS3AcYX
Neural TTS Stylization with Adversarial and Collaborative Games
The modeling of style when synthesizing natural human speech from text has been the focus of significant attention. Some state-of-the-art approaches train an encoder-decoder network on paired text and audio samples (x_txt, x_aud) by encouraging its output to reconstruct x_aud. The synthesized audio waveform is expected to contain the verbal content of x_txt and the auditory style of x_aud. Unfortunately, modeling style in TTS is somewhat under-determined and training models with a reconstruction loss alone is insufficient to disentangle content and style from other factors of variation. In this work, we introduce an end-to-end TTS model that offers enhanced content-style disentanglement ability and controllability. We achieve this by combining a pairwise training procedure, an adversarial game, and a collaborative game into one training scheme. The adversarial game concentrates the true data distribution, and the collaborative game minimizes the distance between real samples and generated samples in both the original space and the latent space. As a result, the proposed model delivers a highly controllable generator and a disentangled representation. Benefiting from the separate modeling of style and content, our model can generate human-fidelity speech that satisfies the desired style conditions. Our model achieves state-of-the-art results across multiple tasks, including style transfer (content and style swapping), emotion modeling, and identity transfer (fitting a new speaker's voice).
accepted-poster-papers
The paper proposes using GANs for disentangling style information from speech content, and thereby improving style transfer in TTS. The review and responses for this paper have been especially thorough! The authors significantly improved the paper during the review process, as pointed out by the reviewers. Inclusion of additional baselines, evaluations and ablation analysis helped improve the overall quality of the paper and helped alleviate concerns raised by the reviewers. Therefore, it is recommended that the paper be accepted for publication.
test
[ "HyeiSBJQT7", "SJg099o2RX", "BJl11ws2AX", "B1lxfzs30Q", "r1gu2kHsAQ", "r1esueVMp7", "BygbTKVoR7", "ryl7IwesAX", "rkeebN1oR7", "ByejWaRc0Q", "SJeKU5Ic27", "H1gefjatCm", "HJgZD2nYRQ", "Hklxr33Y0m", "rJe7lY3KCQ", "BJebIsvXpm", "BJxRuqPQam", "Byg1SllT37" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper proposes to use GAN to disentangle style information from speech content. The presentation of the core idea is clear but IMO there are some key missing details and experiments.\n\n* The paper mentions '....the model could simply learn to copy the waveform information from xaud to the output and ignore s....' \n-- Did you verify this is indeed the case? 1) The style embedding in Skerry-Ryan et al.'18 serves as a single bottleneck layer, which could prevent information leaking. What dimension did you use, and did you try to use smaller size? 2) The GST layer in Wang et al.'18 is an even more aggressive bottleneck layer, which could (almost) eliminate style info entangled with content info. \n\n* The sampling process to get x_{aud}^{-} needs more careful justifications/ablations.\n-- Is random sampling enough? What if the model samples a x_{aud}^{-} that has the same speaking style as x_{aud}^{+}? (which could be a common case).\n\n* Did you consider the idea in Fader Netowrks (Lample et al.'17)', which corresponds to adding a simple adversarial loss on the style embedding? It occurs to be a much simpler alternative to the proposed method.\n\n* Table 1. \"Tacotron2\" is often referred to Shen et al.'18, not Skerry-Ryan et al.'18. Consider using something like \"Prosody-Tacotron\"?\n\n* The paramerters used for comparisons with other models are not clear. Some of them are important detail (see the first point above)\n\n* The author mentioned the distance between different clusters in the t-SNE plot. Note that the distance in t-SNE visualizations typically doesn't indicate anything.\n\n* 'TTS-GAN' is too general as the name for the proposed method.", "Thanks for willing to include the ablation studies & new discussion. I trust the authors and will raise my old rating by one level.", "We sincerely appreciate your constructive suggestions!\n\nre: Ablation study on the reference embedding size and the number of heads in multihead attention\nWe feel that this is a good suggestion! We agree that conducting more comprehensive analyses with different degrees of \"information bottleneck\" -- i.e., the size of reference embedding and the number of heads in the multihead attention -- will provide interesting insights on how our model behaves. We promise to include this ablation study in the Appendix.\n\nre: Fader Networks\nIt is interesting to hear that, unlike our case, adding an adversarial loss for style transfer helped achieve better results (we'd be curious to see the results; do you have a paper we can check? was your experience based on images only or also based on audio signals?) Regardless, we feel that this will be an easy-to-add baseline approach; we will include this in the final version. ", "I agree with the other reviewers that the paper is significantly improved compared to the last version. I appreciate the author's efforts!\n\nTwo of my main concerns remain, however:\n* Some of the comparisons depend on the setting of the key hyper-params in the prosody-tacotron and GST models. E.g. the authors mention that the proposed model synthesizes speech with a closer speaking style to reference than GST does. Did you observe the same trend when you use a bigger reference embedding size or more heads in the multiheaded attention in GST? (It doesn't need to consistently beat GST, etc. 
but a depiction of the performance trend, or trade-off, would be helpful to better understand the work)\n\n* I am surprised by the authors' results with a Fader-network-style adversarial loss for style transfer, which also contradicts my own experience. I don't think that idea is specific to images. As an obvious baseline, at least, I think the authors should put some relevant discussions around it in the paper.\n\nAgain, thanks for the authors' thorough comments and the willingness to change the paper contents!", "I want to thank the authors for the significant amount of work that went into the paper during this comment period. \n\nI've revised my assessment upwards on the basis of this effort.", "Overview: This paper describes an approach to style transfer in end-to-end speech synthesis by extending the reconstruction loss function and augmenting it with an adversarial component and a style-based loss component.\n\nSummary: This paper describes an interesting technical approach and the results show incremental improvement in matching a reference style in end-to-end speech synthesis. The three-component adversarial loss is novel to this task. While it has technical merit, the presentation of this paper makes it unready for publication. The technical descriptions are difficult to follow in places, it makes some incorrect statements about speech and speech synthesis, and its evaluation is lacking in a number of ways. After a substantial revision and additional evaluation, this will be a very good paper.\n\nThe title of the paper and moniker of this approach as “TTS-GAN” seems to disregard the fact that in the last few years there have been a number of approaches to speech synthesis using GANs. By using such a generic term, it implies that this is the “standard” way of using a GAN for TTS. Clearly it is not. Moreover, other than the use of the term, the authors do not claim that it is. \n\nWhile the related works regarding style modeling and transfer in end-to-end TTS models are well described, prior work on using GANs in TTS is not. (This may or may not be related to the previous point.) For example, but not limited to:\nYang Shan, Xie Lei, Chen Xiao, Lou Xiaoyan, Zhu Xuan, Huang Dongyan, and Li Haizhou, Statistical Parametric Speech Synthesis Using Generative Adversarial Networks Under a Multi-task Learning Framework, ASRU, 2017\nYuki Saito, Shinnosuke Takamichi, Hiroshi Saruwatari, Text-to-speech Synthesis using STFT Spectra Based on Low- /multi-resolution Generative Adversarial Networks, ICASSP 2018\nSaito Yuki, Takamichi Shinnosuke, and Saruwatari Hiroshi, Training Algorithm to Deceive Anti-spoofing Verification for DNN-based Speech Synthesis, ICASSP, 2017.\n\nSection 2 describes speech synthesis as a cross-domain mapping problem F : S -> T, where S is text and T is speech. (Why a text-to-speech mapping is formalized as S->T is an irrelevant mystery.) This is a reasonable formulation; however, this is not a bijective mapping. There are many valid realizations s \\subset T of a text utterance t \\in S. The true mapping F is one-to-many. Contrary to the statement in Section 2, there should not be a one-to-one correspondence between input conditions and the output audio waveform and this should not be assumed. This formalism can be posed as a simplification of the speech synthesis mapping problem. Overall Section 2 lays an incorrect and unnecessary formalism over the problem, and does very little in terms of “background” information regarding speech synthesis or GANs.
I would recommend distilling the latter half of the last paragraph. This content is important -- the goal of this paper is to disentangle the style component (s) from the “everything else” component (z) in x_{aud} by which the resultant model can be correctly conditioned on s and ignore z.\n\nSection 3.2 Style Loss: The parallel between artistic style in vision and speaking style in speech is misplaced. Artistic style can be captured by local information by representing color choices, brush technique, etc. Speaking style, and prosodic variation more broadly, is suprasegmental. That is, it spans multiple speech segments (typically defined as phonetic units, phonemes, etc.). It is specifically not captured in local variations in the time-frequency domain. The local statistics of a mel-spectrogram are too impoverished to capture the long-term variation spanning multiple syllables, words, and phrases that contributes to “speaking style”. (In addition to the poor motivation of using low-level filters to capture speaking style, the authors describe “prosody” as “representing the low-level characteristics of sound”. This is not correct.) These filter activations are more likely to capture voice quality and speaker identity characteristics than prosody and speaking style.\n\nSection 3.2: Reconstruction Loss: The training in this section is difficult to follow. Presumably, l is the explicit style label from the data, the emotion label for EMT-4 and (maybe) speaker id for VCTK. It is a rather confusing choice to refer to this as “latent” since this carries a number of implications from variational techniques and Bayesian inference. Similarly, it is not clear how these are trained. Specifically, both terms are minimized w.r.t. C but the second is minimized only w.r.t G. I would recommend that this section be rewritten to describe both the loss functions, target variables, and the dependent variables that are optimized during training.\n\nSection 3.3 How are the coefficients \\alpha and \\beta determined?\n\nSection 3.3 “We train TTS-GAN for at least 200k steps.” Why be vague about the training?\n\nSection 3.3. “During training R is fixed weights” Where do these weights come from? Is it similar to an ImageNet classifier but with a smaller network than VGG-19?\n\nSection 5: The presentation of results in Table 1 and Table 2 is quite odd. The text material references Table 1 in Section 5.1, then Table 2 in Section 5.2, then Table 1 in Section 5.3 and then Table 2 again in Section 5.3. It would be preferable to include the tabular material which is being discussed in the same order as the text.\n\nSection 5: Evaluation. It is surprising that there is no MOS or naturalness evaluation of this work. In general, increased flexibility of a style-enabled system results in decreased naturalness. While there are WER results to show that intelligibility (at least machine intelligibility) may not suffer, the lack of an MOS result to describe TTS quality is surprising.\n\nSection 5: The captions of Tables 1 and 2 should provide appropriate context for the contained data. There is not enough information to understand what is described here without reference to the associated text.\n\nSection 5.1: The content and style swapping is not evaluated. While samples are provided, it is not at all clear that the claims made by the authors are supported by the data. A listening study where subjects are asked to identify the intended emotion of the utterance would be a convincing way to demonstrate the effectiveness of this technique.
As it stands, I would recommend removing the section titled “Content and style swapping” as it is unempirical. If the authors are committed to it, it could be reasonably moved to the conclusions or discussion section as anecdotal evidence.\n\nSection 5.3: Why use a pre-trained WaveNet-based ASR model? What is its performance on the ground truth audio? This is a valuable baseline for the WER of the synthesized material.\n\nSection 5.3 Style Transfer: Without support that the subject ratings in this test follow a normal distribution, a t-test is not a valid test to use here. A non-parametric test like a Mann-Whitney U test would be more appropriate.\n\nSection 5.3 Style Transfer: “Each listened to all 15 permutations of content”. From the previous paragraph there should be 60 permutations.\n\nSection 5.3 Style Transfer: Was there any difference in the results from the 10 sentences from the test set, and the 5 drawn from the web?\n\nTypos:\nSection 1 Introduction: “x_{aud}^{+} is unpaired” -> “x_{aud}^{-} is unpaired”\nSection 2: “Here, We” -> “Here, we”\nSection 5.3 “Tachotron” -> “Tacotron”", "Naturalness:\nThanks for your constructive comments. It is correct that our evaluation is focused on the disentanglement of style and content, rather than directly assessing the naturalness of the TTS results, because disentangling content/style is the major focus of our work. In hindsight, however, we do agree with your point that measuring the naturalness could have provided additional insights into how our model performs compared to the baseline TTS systems. We promise to add MOS evaluation results in the final version of our paper.\n\n\nSwapping:\nWe also agree with the reviewer on this point. We will add human classification results on the style swapping experiment.\n\n\nStyle transfer:\nWe appreciate your clarification on evaluation metrics for our subjective study. Yes, we do agree with your comments, and will modify our metric based on a non-parametric test.", "Bijective mapping:\nThanks for the constructive suggestion. We do agree that the bijective constraint might be too strict; injective mapping could be more appropriate to illustrate our setting. We have incorporated your two suggestions into the new revision. (Note: Since this discussion was at the very last minute, which was past the rebuttal period, we could not upload the new version of the paper. But the change is already made and will be reflected in the final version.)\n\nStyle loss:\nThanks for the clarification. Yes, we do agree that prosody, in its entirety, cannot be captured using local statistics in the time-frequency domain. As we clarified above, our style loss is limited to capturing only certain elements of prosody. To reflect this, we have already removed our statement regarding style loss and prosody in the revision.", "MOS:\nThe comment was about assessment of naturalness of the resultant speech, not prescribing an MOS test specifically. In general, prosodic modifications lead to decreased quality. Assessing how large this degradation is would be valuable in assessing this work. For what it's worth, both the GST and prosody-tacotron papers would significantly benefit from this kind of evaluation and I find it surprising that they were omitted.
\n\n\"So other than some evaluation metrics used in regular TTS, we also performed a set of experiments that do not typically appear in TTS work.\"\nThe main evaluation metrics for \"regular TTS\" are subjective tests that look at naturalness and to a lesser extent (given the state of the art) intelligibility. The typical tests are MOS, MUSHRA, or ABX (AXB, AXY). The intelligibility dimension has been assessed by the WER evaluation. Naturalness (or quality) has not. It might be reasonable to claim that for this work TTS quality (measured in terms of naturalness) is not important, and is therefore not evaluated.\n\n\"The most important claim in our paper is the ability to disentangle content and style. We believe this [swapping] experiment actually is most important evaluation in validating our claim.\" \nI agree that this is the most important claim of the paper and the most important evaluation. This is why it is so surprising that there is no evaluation here. Rather 16 examples are offered. It would be reasonable to ask a human rater to assess the emotional content of the utterance as either neutral, happy, sad, angry. This seems to be the most direct assessment of the claim of the paper. Does the newly synthesized utterance contain the desired emotional information? The style transfer evaluation is much more effective at demonstrating this. The examples without evaluation are unconvincing. \n\nStyle transfer: \n\"To validate that the test follows a normal distribution would require a large amount of subjective studies. We followed the precedent in the most recent works (GST, and prosody-Tacotron).\"\n1) you do not need to use a t-test. There are non-parametric tests available (specifically the Mann-Whitney U-test) that do not assume that the observations follow a normal distribution. Most (all?) statistical packages support this test. 2) the test used in GST is not described. In prosody-tacotron a 95% confidence interval is described, but not a t-test. I hope that the confidence interval is generated non-parametrically in that work. Using a mean and standard deviation derived from observations that are not normally distribtued would have generated a biased estimate of the confidence intervals. 3) even if these papers did use an unsupported statistical test, the t-test is still not valid without confirmation that the analyzed ordinal subject responses follow a normal distribution (in most cases they do not). ", "Bijective mapping:\nEven conditioned on style tokens the mapping is not bijective. A given text, with the same condition (style), can be produced as \"angry\" with different acoustic realizations. The content, speaker and condition can all be transmitted and there are still valid variations of the realization. This could be theoretically true, if the condition is considered to be a specific prosodic realization P of speaker A speaking utterance X, and the target of generating speaker B speaking utterance X with realization P. However, 1) given the state of the art and understanding of prosody, it is very underdetermined, and not exactly useful. it is underdetermined because we do not have a way of disentagling prosodic realization from speaker identity. While we have some approaches to map from one speaker's pitch range to another, transformation of normal and affected speaking rhythm and voice qualities from one speaker to another are not well understood or all that thoroughly well studied. And 2) it's also not clear that this is the desired mapping. 
The goal is to retain the conditioning variable -- here a coarse description of affect. The realization of speaker A speaking utterance X with "angry" prosody in and of itself is not unique. Neither are the realizations of speaker B speaking utterance X with "angry" prosody. Even if there is a theoretical bijective mapping based on a highly specified condition, the practical mapping that is being learned here is many-to-many. The broader point is that the "Ideal" F is not even a function. The target is a set, not a point, f(x_txt, x_aud) = {t \\in trg_{txt, aud}} where trg_{txt, aud} is the set of all valid realizations of the text, txt, and conditioning information, aud, by the target speaker. \n\nStepping back, the concern with maximum likelihood that is being raised is that the learned F may not be injective, i.e. that the learned function may map multiple elements of the domain to the same realization and completely ignore x_{aud}. This is a fair concern. One issue with the term bijective is that it determines that F should also be surjective -- that every element in \\hat{x} should be mappable from some x_txt and x_aud. This aspect isn't addressed by the work.\n\nMaking this discussion more constructive -- 1) consider removing the term "Ideally" from section 2. The description here is much more practical than it is ideal. 2) consider replacing bijective with injective. I believe it's more consistent with the problem that is being solved.\n\nStyle loss:\nMy initial description of prosody was perhaps too pointed at addressing the (since deleted) statement in the previous draft that claimed that prosody was only the low-level characteristics. Prosody does include local time-frequency elements -- particularly as they capture voice quality. The previous point was that prosody (in its entirety) cannot be captured by these representations. Prosody includes (but is not limited to) pitch (intonation), intensity, speaking rate/rhythm, and the use of pauses (usually, but not only, to impact phrasing) as well as voice quality. The use of pitch and intensity is primarily relevant in a suprasegmental context. For example, in English(es), an absolute pitch observation carries very little information, but a rising or falling pitch contour (or one contextualized within the speaker's pitch range or register) can have significant information on the semantics, pragmatics, and paralinguistics of the utterance. I did not mean to suggest that there isn't important information in the time-spectrum. However, if you consider the literature on prosody as a whole you'll find that the relative value of local spectral content is much less relevant than suprasegmental content. (This includes the references mentioned in the comment above. There are corresponding papers for each of the tasks (sarcasm recognition, emotion recognition, prosody in speaker recognition) that show that suprasegmental representations of prosody are more valuable than short-time analyses.) ", "This paper proposes a method to synthesize speech from text input, with the style of an input voice provided with the text. Thus, we provide content - text - and style - voice. It leverages recent - phenomenal - progress in TTS with Deep Neural Networks as seen from exemplar works such as Tacotron (and derivatives), DeepVoice, which use seq2seq RNNs and Wavenet families of models.
The work is extremely relevant in that audio data is hard to generate (expensive) and content-style modeling could be useful in a number of practical areas in synthetic voice generation. It is also quite applicable in the related problem of voice conversion. The work also uses some quite complex - (and very interesting!) - proposals to abstract style and combine it with content using generative modeling. I am VERY excited by this effort in that it puts together a number of sophisticated pieces, in what I think is a very sensible way to implement a solution to this very difficult problem. However, I would like clarifications and explanations, especially with regard to the architecture. \n\nDescription of problem: The paper proposes a fairly elaborate setup to inject voice style (speech) into text. At train time it takes in text samples $x_{txt}$, paired voice samples (utterances that have $x_{txt}$ as content) $s+$ and unpaired voice samples $s-$, and produces two voice samples $x+$ (for paired <txt, utterance>) and $x-$ (for unpaired txt/utterance). The idea is that at test time, we pass in a text sample $x_{txt}$ and an UNPAIRED voice sample $x_{aud}$ and the setup produces voice in the style of $x_{aud}$ but whose content is $x_{txt}$, in other words it generates synthetic speech saying $x_{txt}$. The paper goes on to show performance metrics based on an autoencoder loss, WER and t-SNE embeddings for various attributes. \n\nContext: The setup seems to be built upon the earlier work by Taigman et al (2016) which has the extremely interesting conception of using a {\\it ternary} discriminator loss to carry out domain adaptation between images. This previous work was prior to the seminal CycleGAN work for image translation, which many speech works have since used. Interestingly, the Taigman work also hints at a 'common' latent representation a la UNIT using coupled VAE-GANs with cycle consistency (also extremely pertinent), but done differently. In addition to the GAN framework by Taigman et al, since this work is built upon Tacotron and the GST (Global Style Tokens) work that followed it, the generative setup is a sophisticated recurrent attention-based seq2seq model.\n\nFormulation:\nA conditional formulation is used wherein the content c (encoding generated by text) is passed along with other inputs in the generator and discriminator. The formulation in Taigman assumes that there is an invariant representation in both (image) domains with shared features. To this, style embeddings (audio) get added on and then passed into the generator to generate the speech. Both c and s seem to be encoder outputs in the formulation. The loss comprises components they call 'adversarial', 'collaborative' and 'style' losses. \n\nAdversarial losses\nThe ternary loss for D consists of \n\nDiscriminator output from 'paired' style embedding (i.e. text matching the content of the paired audio sample)\nDiscriminator output from 'unpaired' style embedding (i.e. text paired with a random sample of some style)\nDiscriminator output from target ground truth style. The paper uses x_+, so I would think that it uses the paired sample (i.e. from the source) style.\n\nGenerator loss (also analogous to Taigman et al) consists of generations from paired and unpaired audio, possibly a loose analogue to source and target domains, although in this case we can't as such think of '+' as the source domain, since the input is text.
\n\nCollaborative losses \nThis has two components, one for style (Gatys et al 2016) and a reconstruction component. The reconstruction component again has two terms, one to reconstruct the paired audio output 'x+=x_audio+' - so that the input content is reproduced - and the other to encourage reconstruction of the latent code. \n\nDatasets and Results:\nThey use two datasets: one, an internal 'EMT-4' dataset with 20k+ English speakers, and the other, the VCTK corpus. Comparisons are made with a few good baselines in Tacotron2, GST and DeepVoice2. \n\nOne comparison technique to test disentanglement ability is to compare autoencoder reconstructions with the idea that a setup that has learnt to disentangle would produce higher reconstruction error because it has learnt to separate style and content. \n\nt-SNE embeddings are presented to show visualizations of various emotion styles (neutral, angry, sad and happy), and separation of male and female voices. A WER metric is also presented whereby generations are passed into a classifier (an ASR system trained on Wavenet). All the metrics above seem to compare excellently (better than?) with the others. \n\nQuestions and clarifications:\n\n(Minor) There's a typo on page 2, line 2. x_{aud}^+ should be x_{aud}^-.\n\nClarification on formulation: Making the analogy (is that even the right way of looking at this?) that the 'source' domain is '+', and the target domain is '-', in equation (5), the last term of the ternary discriminator has the source domain (x_{aud}^+) in it, while the Taigman et al paper uses the target term. Does this matter? I would think 'no', because we have a large number of terms here and each individual term in and of itself might not be relevant, nor is the current work a direct translation of the Taigman et al work. Nevertheless, I would like clarification, if possible, on the discrepancy and why we use the '+' samples. \n\nClarification on reconstruction loss: I think the way equation (8) is presented is misleading. Apparently, we are sampling from the latent space of style and content embeddings for paired data. The notation seems to be quite consistent with that of the VAE, where we have a reconstruction and a recognition model, and in effect the equation (8) is sampling from the latent space in a stochastic way. However, as far as I can see, the latent space here produces deterministic embeddings, in that c = f(x_{txt}) and s = g(x_{aud}^+), with the distribution itself being a delta function. Also, the notation q used in this equation most definitely indicates a variational distribution, which I would think is misleading (unless I have misinterpreted what the style tokens mean). At any rate, it would help to show how the style token is computed and why it is not deterministic. \n\nClarification on latent reconstruction loss: In equation (9), how is the latent representation 'l' computed? While I can intuitively see that the latent space 'l' (or z, in more common notation) would be the 'same' between real audio samples and the '+', '-' fake samples, it seems to me that they would be related to s (as the paper says, 'C' and 'Enc_s' share all conv layers) and the text. But what, in physical terms, is it producing? Is it like the shared latent space in the UNIT work, or the invariant representation in Taigman? This could be made clearer with a block diagram for the architecture.
\n\n(Major) Clarification on network architecture\nThe work references Tacotron's GST work (Wang et al 2018) and the related Skerry-Ryan work as the stem architecture with separate networks for style embeddings and for content (text). While the architecture itself might be available in the stem work by Wang et al, I think we need some diagrams for the current work as well for a high-level picture. Although it is mentioned in words in section 3.3, I do not get a clear idea of what the encoder/decoder architectures look like. I was also surprised not to see attention plots, which are ubiquitous in this kind of work. Furthermore, in the notes to the 'inference' network 'C' it is stated that C and Enc_s share all conv layers. Again, a diagram might be helpful - this also applies for the discriminator. \n\nClarification on stability/mode collapse: Could the authors clarify how easily this setup trained under the adversarial objective? \n\nNote on latent representation: To put the above points in perspective, a small note on what this architecture does with regard to the meaning of the latent codes would be useful. The Taigman et al 2016 paper talks about the f-constancy condition (and 'invariance'). Likewise, in the UNIT paper by Ming-Yu Liu - which is basically a set of coupled VAEs + cycle consistency losses, there is the notion of a shared latent space. A little discussion on these aspects would make the paper much more insightful to the domain adaptation practitioner.\n\nReference: This reference - Adversarial feature matching for text generation - (https://arxiv.org/abs/1706.03850) contains a reconstruction stream (as perhaps many other papers) and might be useful for instruction. \n\nOther relevant works in speech and voice conversion: This work comes to mind, using the StarGAN setup, also containing a survey of relevant approaches in voice conversion. Although the current work is for TTS, I think it would be useful to include speech papers carrying out domain adaptation for other tasks.\n\nStarGAN-VC: Non-parallel many-to-many voice conversion with star generative adversarial networks. \nhttps://arxiv.org/abs/1806.02169 \n\nI would rate this paper as being acceptable if the authors clarify my concerns, in particular about the architecture. It is also hard to assess reproducibility in a complex architecture such as this. ", "Thank you for the clarifications. I feel that the material is now much more convincing after seeing the architectural presentation. It is illuminating to note that one can break up content and style to capture their essence as can be seen in figures 2, 3, 4 and 5 in the appendix. Fig 2 uses multiheaded attention to compute similarity between ref. embedding and randomly initialized tokens - this seems to be a new addition to the previous GST works (Skerry-Ryan et al 2018 and Wang et al 2018). \n\nOverall, this work exhibits a very high level of application - attention-based seq2seq modeling with the Tacotron setup, and manipulating content and style with instructive use of techniques from the formulation to the architectures used. \n\nI rule this as a clear accept.", "MOS\nWe do not think MOS is a must-have metric in our paper. Other relevant papers for stylization in TTS, e.g. prosody-Tacotron, also do not include a MOS evaluation. \nWe have performed a number of evaluations quantitatively and qualitatively and believe that these extensive evaluations are sufficient to validate our work.
The most important thing in our paper is how to disentangle style and content such that the encoder learns to produce effective style latent codes; that is our central claim. So other than some evaluation metrics used in regular TTS, we also performed a set of experiments that do not typically appear in TTS work. \n\nTable captions \nThanks for your suggestion, we will revise the captions.\n\nSwapping \nThe most important claim in our paper is the ability to disentangle content and style. We believe this experiment actually is most important evaluation in validating our claim. Similar experiments are typically performed in computer vision papers, e.g. ‘Disentangling factors of variation in deep representations using adversarial training, NIPS 2016’ (Fig.3). \n\nASR model\nThe ASR model is just a tool to evaluate different methods; here we just compare the relative performance. But your suggestions are good, and we will add the ground truth WER in our paper.\n\nStyle transfer \nTo validate that the test follows a normal distribution would require a large amount of subjective studies. We followed the precedent in the most recent works (GST, and prosody-Tacotron).\n\nPermutations\nYes, you are right. Thanks for carefully reading the paper; we will change this in our paper.\n\nResults \nThe results turned out to be almost equivalent. \n\nTypos\nThanks, we will fix the typos in our paper.\n", "Title\nWe appreciate this point, and removed the TTS-GAN moniker as it is quite generic.\n\nBijective mapping\nWe agree that regular speech synthesis is not a bijective mapping problem, because it may result in multiple meaningful results. We also mentioned this in our paper (Sec. 1 ln 6-7). However, we want to clarify our claim: by saying ‘bijective’, we refer to style modeling in TTS (a conditional generation), i.e. given a textual string and a reference audio sample, the synthesized audio should one-to-one correspond to the given conditions (content from text and style from reference audio). If it is not a bijective mapping, e.g. a one-to-many mapping, then one textual string could map to different styles, which neglects our style condition (reference audio). We have also elaborated on our claim, which can be seen in Sec. 2 (last paragraph).\n\nStyle loss\nWith all due respect, we disagree with the reviewer that prosody cannot be captured in local variations in the time-frequency domain. In fact, certain prosodic characteristics, such as emotion, are captured by local statistics in the time-frequency domain. For example, Cheang and Pell (2008) have shown that a temporary reduction in the average fundamental frequency significantly correlates with sarcasm expression. \nMore broadly, numerous past studies on prosody have been based on spectral characteristics, e.g. Wang (2015), Soleymani et al. (2018), Barry (2018).\nThat being said, we do agree with the reviewer that prosodic variation is often suprasegmental. Therefore, our approach to capturing speaking style can only model those prosodic variations that are characterized by local statistics. We have made this point clear in our paper in Section 3.2.\nCheang, Henry S., and Marc D. Pell. "The sound of sarcasm." Speech Communication 50.5 (2008): 366-381.\nKun-Ching Wang. “Time-Frequency Feature Representation Using Multi-Resolution Texture Analysis and Acoustic Activity Detector for Real-Life Speech Emotion Recognition” (2015).\nSobhan Soleymani, Ali Dabouei, et al.
“Prosodic-Enhanced Siamese Convolutional Neural Networks for Cross-Device Text-Independent Speaker Verification” (2018).\nShaun Barry, Youngmoo Kim. “Style Transfer for Musical Audio Using Multiple Time-Frequency Representations” (2018).\n\n\nReconstruction loss\nFirst, we have changed ‘I’ to ‘z_c’ to represent the latent code. \nIf z_c is categorical, then C could be an N-way classifier. So you are right, z_c is the emotion label for EMT-4, and identities for VCTK.\n‘Latent’ is commonly used in encoder-decoder networks and generative work; we do not feel it is a confusing word.\nThe training details are present in the last paragraph of this section. In Eq9, the first term is minimized over C and the second term is minimized over both C and G. The hyperparameters were empirically determined.\nDifferent datasets need different numbers of training steps (for EMT-4 we trained for 200k steps, while for VCTK, we trained our model for 280k steps).\nThe detailed description of the weights and network architecture of R can be found in our paper (last paragraph in the ‘Style Loss’ section and lines 4-5 on page 5).\n\nPresentation of the tables\nDue to the page limit, we prefer to present our paper in a more compact way. However, we could move elements to the appendices if necessary.", "Sorry, the current phrasing is misleading; we will revise the description in the paper.\n\n\n1. To clarify, we were trying to communicate that when training on purely paired data the network can easily memorize all the information from the paired audio sample, i.e. both style and content components.\nFor example, given (txt1, aud1), the network memorizes that, as long as it is given txt1, the result should be aud1. In this case, the style embedding tends to be neglected by the decoder, and the style encoder cannot be optimized easily. During the test stage, when given (txt1, aud2), the network still produces an audio sample very similar to aud1, and the ‘style’ is not learned well. Our experiments on style transfer validate this claim. When comparing with GST, our synthesized audio is closer to the reference style.\n\n2. Through empirical experiments we found that random sampling is enough for training.\n\n3. Thanks for your suggestion. When we started this work, that idea was our first basic attempt. But it turns out that simply adding an adversarial loss on the latent space did not produce good results. The most severe problem is that it is not robust to reference audio samples of various lengths. When the reference audio is longer than the input, the synthesized samples tend to have long duplicate tails, or sometimes noise. It severely impairs the audio quality. \nWe suspect that, to satisfy the correct classification, the style embedding is squeezed into the same scale, which is not robust to variable-length sequential signals. The Fader Network was used for processing images, which have a fixed dimension; this method does not seem to work well for audio. Therefore, in our current model, we promote the disentanglement by pair-wise training, which means we do not need to add an adversarial loss directly on the latent space, but on the generated samples. Our results show that this leads to more robust outcomes for sequential signals of different lengths. We will clarify this in the paper.\n\n4. Thanks. It is a good suggestion to replace Tacotron2 with Prosody-Tacotron. We will modify this in our paper.\n\n5.
The hyperparameters for our model can be seen in our implementation details and Appendix.\nThe parameters used for other methods are the same as in their original work.\n\n6. In this experiment, we want to evaluate how well our model can learn the latent space -- in other words, whether the style embeddings produced by our model effectively represent the desired styles. By showing the t-SNE visualization, we can see that the latent space learned by our model can be well separated into clusters according to the testing data distribution. The same experiment was also done in GST (Wang et al).\n\n7. We agree that ‘TTS-GAN’ is quite generic. We are happy to change the name of the paper. ", "Thank you for the comments. We have fixed the typos in our revision.", "1. Typo in p2 l2.\nThanks, we fixed it.\n\n2. Clarification on formulation:\nThank you for pointing out the discrepancy. We provide a detailed explanation below. In short, there is a subtle yet important distinction: We use '+' samples to regularize within-domain mapping (between (c, x_aud^+) and \\tilde{x}^+), while Taigman et al. (2016) use '-' to promote cross-domain mapping (between (c, x_aud^-) and \\tilde{x}^-).\n\nTaigman et al.'s work uses a pretrained function f(.) to extract latent embeddings from both the source and the target domains, i.e., z_s = f(s), z_t = f(t). They then use a decoder to map these to the target distribution, producing s2t and t2t. The s2t drives cross-domain mapping, while the t2t regularizes within-domain mapping. They use a single function f(.) to compute the embeddings from both the source (real human face) and the target (emoji human face) because the two domains share certain structures and properties, e.g., a face has two eyes with eyebrows on top. This makes t2t -- within-domain mapping -- relatively easy compared to ours (see below on why); so they include the target term in the loss (Eqn 3 in [Taigman et al., 2016]) to further promote cross-domain mapping.\n\nIn our work, making the analogy, the source domain is '(content, style+)' and the target is '(content, style-)'. Both domains consist of two input modalities (text and sound) with very different characteristics. So we use two functions to represent each domain: Enc_c and Enc_s. Unfortunately, this makes it difficult to even ensure that within-domain mapping is successful. So, to strengthen within-domain mapping, we modify the last term of the ternary discriminator to have x_aud^+ instead of the target x_aud^-. \n\n3. Clarification on reconstruction loss:\nYes, both the content c = f(x_txt) and the style s = g(x_aud^+) embeddings are deterministic. The only stochasticity comes from the data distribution. We revised the notation in the paper; please take a look. \n\n4. Clarification on latent reconstruction loss: \nWe have revised our paper with network architecture details, including a block diagram of the Inference Network 'C' that computes the latent representation 'l'; see Figure 3. The inference network is simply the style encoder (Enc_s) with a new classifier on top (one FC layer followed by softmax); all the weights are shared between C and Enc_s except for the new classifier layer.\n\nWe agree that 'z' is a more commonly used notation to represent latent codes. We have changed the notation in the paper; thanks for the suggestion! \n\n5. Clarification on network architecture\nWe have revised our paper with block diagrams of our network architecture as well as parameter settings used in our implementation (Figures 3 to 5).
We have also included an attention plot (Figure 6), showing the robustness of our approach to the length of the reference audio.\n\n6. Clarification on stability/mode collapse:\nIn TTS stylization, when mode collapse happens the synthesized voice samples will exhibit the same acoustic style although different reference audio samples are provided. While it is difficult to entirely prevent mode collapse from ever happening (as is common in GAN training), we have a number of measures (i.e., different loss terms in our adversarial & collaborative game) to alleviate the issue and to improve stability during training. Our qualitative results show more diverse synthesized samples than Tacotron-GST when different reference audio samples are given, suggesting our work clearly improves upon the state-of-the-art. Our learning curve (https://researchdemopage.wixsite.com/tts-gan/image) also suggests that training with our loss formulation is relatively stable, i.e., the three loss values seem to converge to a stable regime.\n\n7. Note on latent representation:\nPerhaps the most important message we want to deliver is: We are improving upon content vs. style disentanglement in acoustic signals by means of adversarial & collaborative learning. Extracting ``acoustic styles'' such as prosody has been an extremely difficult task. The state-of-the-art GST achieves this with an attention mechanism. But, as we argue in our paper, their loss construction makes it difficult to ``wipe out'' content information from acoustic signals; this is also shown in their qualitative results where prosody style transfer fails when the length of the reference audio clip is different from what is appropriate for the content to be synthesized. Our novel loss construction enables careful conditioning of our model so that the two latent representations, content 'c' and style 's' embeddings, become more precise than the previous method could obtain. In particular, our paired and unpaired input formulation, and the adversarial & collaborative game, make our model better condition the latent space so that the content information is effectively ignored in style embedding vectors. \n\n8. Reference:\nWe have incorporated those references in our revision. ", "This paper proposes to use a generative adversarial network to model speaking style in end-to-end TTS. The paper shows the effectiveness of the proposed method compared with Tacotron2 and other variants of end-to-end TTS with extensive experimental verification. The proposed method of using adversarial and collaborative games is also quite unique. The experimental part of the paper is well written, but the formulation part is difficult to follow. Also, the method seems to be very complicated, and I'm concerned about the reproducibility of the method given only the description in Section 3.\n\nComments\n- Page 2, line 2: x _{aud} ^{+} -> x _{aud} ^{-} (?)\n- Section 2: $T$ is used for audio and the number of words.\n" ]
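Editor's note: on the statistical point argued at length in this thread -- ordinal listener ratings are rarely normally distributed, so a t-test is not an appropriate comparison -- the Mann-Whitney U test the reviewer recommends is a one-liner in SciPy. The rating arrays below are made-up placeholders, not data from the paper.

```python
from scipy.stats import mannwhitneyu

ratings_proposed = [4, 5, 3, 4, 4, 5, 3, 4]  # hypothetical listener scores, system A
ratings_baseline = [3, 3, 4, 2, 3, 4, 3, 3]  # hypothetical listener scores, system B

# Non-parametric test: no assumption that the ordinal ratings are normal
stat, p_value = mannwhitneyu(ratings_proposed, ratings_baseline,
                             alternative="two-sided")
print(f"U = {stat}, p = {p_value:.4f}")
```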
[ 6, -1, -1, -1, -1, 6, -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, 6 ]
[ 5, -1, -1, -1, -1, 5, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2019_ByzcS3AcYX", "BJl11ws2AX", "B1lxfzs30Q", "rJe7lY3KCQ", "BygbTKVoR7", "iclr_2019_ByzcS3AcYX", "rkeebN1oR7", "ByejWaRc0Q", "HJgZD2nYRQ", "Hklxr33Y0m", "iclr_2019_ByzcS3AcYX", "BJxRuqPQam", "Hklxr33Y0m", "r1esueVMp7", "HyeiSBJQT7", "Byg1SllT37", "SJeKU5Ic27", "iclr_2019_ByzcS3AcYX" ]
iclr_2019_H1MW72AcK7
Optimal Control Via Neural Networks: A Convex Approach
Control of complex systems involves both system identification and controller design. Deep neural networks have proven to be successful in many identification tasks; however, from a model-based control perspective, these networks are difficult to work with because they are typically nonlinear and nonconvex. Therefore, many systems are still identified and controlled based on simple linear models despite their poor representation capability. In this paper we bridge the gap between model accuracy and control tractability faced by neural networks, by explicitly constructing networks that are convex with respect to their inputs. We show that these input convex networks can be trained to obtain accurate models of complex physical systems. In particular, we design input convex recurrent neural networks to capture the temporal behavior of dynamical systems. Optimal controllers can then be obtained by solving a convex model predictive control problem. Experimental results demonstrate the potential of the proposed input convex neural network based approach in a variety of control applications. In particular, we show that on the MuJoCo locomotion tasks, we achieve over 10% higher performance using 5 times less time compared with a state-of-the-art model-based reinforcement learning method; and in the building HVAC control example, our method achieved up to 20% energy reduction compared with classic linear models.
accepted-poster-papers
The paper makes progress on a problem that is still largely unexplored, presents promising results, and builds bridges with prior work on optimal control. It designs input convex recurrent neural networks to capture temporal behavior of dynamical systems; this then allows optimal controllers to be computed by solving a convex model predictive control problem. There were initial critiques regarding some of the claims. These have now been clarified. Also, there is in the end a compromise between the (necessary) approximation of the true dynamics by the input-convex model and the ability to compute an optimal result. Overall, all reviewers and the AC are in agreement to see this paper accepted. There was extensive and productive interaction between the reviewers and authors. It makes contributions that will be of interest to many, and builds interesting bridges with known control methods.
val
[ "Hke3H7li1N", "B1ewimxjkV", "BkgQZvtByN", "S1ljLTYjnm", "SJeInZNZR7", "r1gu-tJJRQ", "r1lZ9YkkC7", "r1lKWukkCQ", "B1e8JrJ1Am", "S1eqHIJyCm", "HklIprkJRm", "H1eRNOBqn7", "rJxXxUZ92X", "ryxdDdm3F7" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "public" ]
[ "We thank the reviewer again for carefully reading our manuscript and providing so many valuable feedbacks. We address reviewer’s concern as follows:\n\n1) One limitation that is now present in the revised version of the paper that was not present in the original submission is that these dynamics models do *not* subsume linear dynamics models. This is because the dynamics are being approximated with convex and non-decreasing functions over the state space, while linear models *are* able to model decreasing functions over the state space while retaining convexity of the overall control problem. I would like an updated version of this paper to highlight this limitation of the method as I expect it to hurt some applications (although it is fine in other contexts).\n\nWe thank the reviewer for this insightful comment and we agree that the proposed input convex neural networks do not subsume linear dynamics models completely. Specifically, the proposed ICNN/ICRNN could only capture the dynamics convex and non-decreasing over the state space. But since we are not restricting the control space (system state at any time can be viewed as a function of the initial system state and all previous control inputs if one unrolls the system dynamics equation entirely), and we have explicitly included multiple previous states in the state transition dynamics $s_t = g(s_{t-n_w:t-1}, u_{t-n_w:t})$, so the non-decreasing constraint should not hurt the representation capacity by much.\n\nIn the revised manuscript, we add the following discussion in page 5 under Eq. (5) to emphasize the differences between input convex neural networks and linear models. “Note that as a general formulation, we do not include the duplication tricks on state variables, so the dynamics fitted by Eq. (5b) and (5c) are non-decreasing over state space, which are not equivalent to those dynamics represented by linear systems. However, since we are not restricting the control space (dynamics can be both increasing or decreasing on control variables), and we have explicitly included multiple previous states in the system transition dynamics, so the non-decreasing constraint over state space should not restrict the representation capacity by much. In Section 3, we theoretically prove the representability of proposed networks.”\n\n2) I think it should be made clearer that the motivation of the ‘input duplication trick’ over the control space is to restrict the networks to be non-decreasing over the state space while not restricting the control space. I think the duplicated inputs unnecessarily complicates the presentation at parts e.g. the end of Section 2.1.\n\nWe thank the reviewer for pointing out this misleading presentation at the end of Section 2.1 about the duplication. In the revised manuscript, we write out explicitly the convex and non-decreasing properties over the expanded control variable \\hat{u} rather than u. ", "3) I have a minor concern/question on the locomotion experiment: If I understand it correctly, there are two differences to Nagabandi et al.: the dynamics are input-convex, and the inference procedure uses gradient descent over the action space of the control problem that Nagabandi does with just random search. It's not clear which one of these is improving the accuracy, as taking a few gradient steps over the non-convex MPC problem in Nagabandi et al. is also reasonable and would likely also improve the performance, even if the true optimum is not reached. 
Did you try comparing to this as a baseline?\n\nWe agree with the reviewer’s insights that both convexity and gradient descent steps could bring benefits in the locomotion tasks. Similar to the energy consumption case, we also tried directly taking gradient steps using a normal neural network to represent the dynamics. This improves the reward over the random search method, but the reward is still lower than with our proposed ICNN method. We are still working on more experiments to gain a deeper understanding of the contributions from these two factors, and the general tradeoff between model accuracy and solution tractability in model predictive control (MPC) would be an important direction for our future work. Moreover, since most of the computational effort of Nagabandi et al.’s method is spent on the random search step, to make a fair comparison in the paper we only show the comparison with their original method on the task rewards and computation time.\n\n4) On the energy consumption experiment, is the RC model a linear dynamics model that only looks at single-step states and actions as g(x_t, u_t)? If so, comparing an ICNN that uses a previous trajectory as g(x_{t-n_w:t}, u_{t-n_w:t}) to this seems somewhat unfair as another reasonable baseline would be a linear model that also uses the previous trajectory. It could be interesting to include some portions in the paper about the ways you've seen the non-convex MPC with the RNN dynamics model fail that the ICNN model overcomes. For example, does the lack of smoothness cause the control problem to get stuck in a bad local minimum?\n\nThe RC model is a linear dynamics model using single-step states, yet it has been used as a standard method in building energy management. In order to demonstrate where the non-convex MPC with the RNN dynamics model fails and where the ICNN model succeeds, we actually included a comparison of the control performance of ICRNN with a normal RNN in Figure 4. We think the most interesting result is shown in Fig. 4(c). By using ICRNN, the final control actions (in red) are stable, while control signals found by the normal RNN (in green) have many oscillations, which suggests it is stuck in some local minimum, and such drastic control variations are not desirable for physical system control.\n\n5) As a minor comment, please add parenthetical citations where appropriate to the paper.\n\nWe thank the reviewer for this helpful suggestion and we have modified the citation format in the revised paper.
", "Thanks for the thorough response and revised version of the paper.\nThe updates are commendable and I apologize for the delays from my end\nas I needed the time to thoroughly look over the new manuscript.\nI have updated my score from a 1 to a 6.\nThe revised paper no longer has the significant errors with convexity\nthat I found in my original review and I think that the models,\nexperimental tasks, and analysis provide a useful contribution\nto the community.\n\nOne limitation that is now present in the revised version of the\npaper that was not present in the original submission is that\nthese dynamics models do *not* subsume linear dynamics models.\nThis is because the dynamics are being approximated with\nconvex and non-decreasing functions over the state space,\nwhile linear models *are* able to model decreasing functions\nover the state space while retaining convexity of the overall\ncontrol problem. I would like an updated version of this paper\nto highlight this limitation of the method as I expect it to\nhurt some applications (although it is fine in other contexts.)\n\nOn the presentation of the work, I think it should be made clearer\nthat the motivation of the \"input duplication trick\" over the\ncontrol space is to restrict the networks to be non-decreasing\nover the state space while not restricting the control space.\n\nI think the duplicated inputs unnecessarily complicates the\npresentation at parts, such as the borderline-misleading\nstatement at the end of Section 2.1 that says:\n\n Note that such construction guarantees that the\n network is convex and non-decreasing with respect\n to the expanded inputs \\hat u = [u -u]\n\nThis part is almost misleading because the model is\n*not* non-decreasing with respect to the controls u,\n\nI have a minor concern/question on the locomotion experiment:\nIf I understand it correctly, there are two differences to\nNagabandi et al.:\n\n1) The dynamics are input-convex, and\n2) The inference procedure uses gradient descent over\n the action space of the control problem that\n Nagabandi does with just random search\n\nIt's not clear which one of these is improving the accuracy,\nas taking a few gradient steps over the non-convex MPC problem\nin Nagabandi et al. 
is also reasonable and would likely also\nimprove the performance, even if the true optimum is not reached.\nDid you try comparing to this as a baseline?\n\nOn the energy consumption experiment, is the RC model a\nlinear dynamics model that only looks at single-step\nstates and actions as g(x_t, u_t)?\nIf so, comparing an ICNN that uses a previous trajectory\nas g(x_{t-n_w:t}, u_{t-n_w:t}) to this seems somewhat\nunfair as another reasonable baseline would be a linear\nmodel that also uses the previous trajectory.\n\nIt could be interesting to include some portions in the paper\nabout the ways you've seen the non-convex MPC with the RNN\ndynamics model fail that the ICNN model overcomes.\nFor example, does the lack of smoothness cause the control problem to\nget stuck in a bad local minimum?\n\nI still see section 3 as being very out-of-place within the\nbroader context of this paper and I have not reviewed this\nportion of the paper.\n\nAs a minor comment, please add parenthetical citations where\nappropriate to the paper.", "This is a well-motivated paper that considers bridging the gap\nin discrete-time continuous-state/action optimal control\nby approximating the system dynamics with a convex model class.\nThe convex model class has more representational power than\nlinear model classes while likely being more tractable and\nstable than non-convex model classes.\nThey show empirical results in Mujoco continuous-control\nenvironments and in an HVAC example.\n\nI think this setup is a promising direction but I have\nsignificant concerns with some of the details and claims\nin this work:\n\n1. Proposition 2 is wrong and the proposed input-convex recurrent\n neural network architecture is not input-convex.\n To fix this, the D1 parameters should also be non-negative.\n To show why the proposition is wrong, consider the convexity of y2\n with respect to x1, using g to denote the activation function:\n\n z1 = g(U x1 + ...)\n y2 = g(D1 z1 + ...)\n\n Thus making\n\n y2 = g(D1 g(U x1 + ...) + ...)\n\n y2 is *not* necessarily convex with respect to x1 because D1 takes\n an unrestricted weighted sum of the convex functions g(U x1 + ...)\n\n With the ICRNN architecture as described in the paper not being\n input-convex, I do not know how to interpret the empirical findings\n in Section 4.2 that use this architecture.\n\n2. I think a stronger and more formal argument should be used to show\n that Equation (5) is a convex optimization problem as claimed.\n It has arbitrary convex functions on the equality constraints that\n are composed with each other and then used in the objective.\n Even with parts of the objective being convex and non-decreasing\n as the text mentions, it's not clear that this is sufficient when\n combined with the composed functions in the constraints.\n\n3. I have similar concerns with the convexity of Equation (6).\n Consider the convexity of x3 with respect to u1, where g is\n now an input-convex neural network (that is not recurrent):\n\n x3 = g(g(x1, u1), u2)\n\n This composes two convex functions that do *not* have non-decreasing\n properties and therefore introduces an equality constraint that\n is not necessarily even convex, almost certainly making the domain\n of this problem non-convex. I think a similar argument can be\n used to show why Equation (5) is not convex.\n\nIn addition to these significant concerns, I have a few other\nminor comments.\n\n1. Figure 1 hides too much information.
It would be useful to know,\n for example, that the ICNN portion at the bottom right\n is solving a control optimization problem with an ICNN as\n part of the constraints.\n\n2. The theoretical results in Section 3 seem slightly out-of-place within\n the broader context of this paper but are perhaps of standalone interest.\n Due to my concerns above I did not go into the details in this portion.\n\n3. I think more information should be added to the last paragraph of\n Section 1 as it's claimed that the representational power of\n ICNNs and \"a nice mathematical property\" help improve the\n computational time of the method, but it's not clear why\n this is and this connection is not made anywhere else in the paper.\n\n4. What method are you using to solve the control problems in\n Eq (5) and (6)?\n\n5. The empirical setup and tasks seem identical to [Nagabandi et al.].\n Figure 3 directly compares to the K=100 case of their method.\n Why does Fig 6 of [Nagabandi et al.] have significantly higher rewards\n for their method, even in the K=5 case?\n\n6. In Figure 5, f_NN seems surprisingly bad in the red region of the\n data on the left side. Is this because the model is not using\n many parameters? What are the sizes of the networks used?", "The detailed reviews and responses are commendable. Thanks to all.\n\nReviewers: can you comment on whether the revised paper and author responses have addressed your concerns?\nIn particular, for reviewer 1, this would be important. Note that the revised version can also be viewed in a way that lets one easily see the differences.\n\n-- area chair\n", "We are grateful to the reviewer for thoroughly reading our paper and providing these encouraging\nwords. Below, we respond to the comments in detail.\n\n1) For Lemma 1 and Theorem 1, I wonder whether similar results can be established for non-convex functions. Intuitively, it seems that as long as we assume Lipschitz continuity, we can always approximate a function by a maximum of many affine functions, whether it is convex or not. Is this right, or is something missing?\n\nThis is an interesting and subtle question. If we restrict ourselves to a “maximum” of affine functions, then we cannot construct functions that are not convex. This follows from the fact that a pointwise max of convex functions (which include affine functions) is convex. As the reviewer points out, if we allow other types of operations, we can construct other types of functions. For example, if we change the pointwise max to the pointwise min, then we can approximate all Lipschitz concave functions. If we allow both max and min, we get all Lipschitz functions, but this just recovers the result that neural networks can approximate most function types. We anticipate that different applications may require different function types to be approximated, and this is an active research direction for us.\n\n2) In the main paper, all experiments were aimed to show that ICNN and ICRNN have good accuracy, but not that they are easier to optimize due to convexity. In the abstract, it is mentioned \"... using 5X less time\", but I can only see this in the appendix. A suggestion is to at least describe some results on the comparison of training time in the main paper.\n\nWe thank the reviewer for pointing out this piece of missing information on running time in the main text.
In the revised manuscript, we have added discussions on computation time in Section 4.1 to show that our controller design achieves both computational efficiency and performance improvement.\n\n3) In Appendix A, it seems the NN is not trained very well as shown in the left figure. Is this because the number of parameters of the NN is restricted to be the same as in ICNN? Does training on both spend the same resources, i.e., number of epochs? Such descriptions are necessary here.\n\nIn the toy example on classifying points in a 2D grid, we used a 2-layer neural network for both the conventional neural network and ICNN, with 200 neurons in each layer. We simulate the case when training data is small (100 training samples). We observe that the results given by conventional neural networks are quite unstable across different random seeds and are prone to overfitting. On the contrary, by adding constraints on model weights to train the ICNN, the fitting result is better using this small training set, while the learned landscape is also beneficial to the optimization problem. \nIn the revised manuscript, we added more details on the model and training setup, the learning task, and the optimization task to address the confusion.\n\n4) In Table 2 in the appendix, why does the running time of ICNN increase by an order of magnitude for large H in the Ant case?\n\nWe apologize for the typo in the case of Ant for computation time and the confusion it caused. We wanted to report everything in minutes but forgot to convert the time for the Ant case from seconds to minutes. In the revised version we have unified the running time under minutes. \n\nWe also thank the reviewer for carefully proofreading the paper and spotting the typos.\n", "We would like to thank all the reviewers for their constructive comments. We have responded to each reviewer’s comments individually, and in summary, we have made the following clarifications or changes:\n-Clarify the inputs and constraints on the input convex neural network weights \n-Update texts and equations accordingly to avoid notation confusion\n-Explain and add details for simulation results.\n", "We thank the reviewer for the encouraging words, and we also expect our work to serve as an efficient framework for incorporating deep learning into real-world control problems. The reviewer’s comments on the robustness of the proposed convex MPC are also quite valuable. We will try to explore the learning errors and control robustness in detail in future work. \n\nHere are some responses to the reviewer’s comments:\n\n-The broken figure reference on page 18\nWe thank the reviewer for pointing this out. In the revised version, we have added the fitting result comparison (Fig. 7 in Appendix D.4) for ICNN and a normal neural network, which shows that ICNN is able to learn the MuJoCo dynamics efficiently.\n\n-The comparison with the end-to-end RL approach\n\nWe thank the reviewer for this helpful suggestion. A conventional end-to-end RL approach directly learns the mapping from observations to actions without learning a system dynamics model. Such algorithms could achieve better performance, but at the expense of much higher sample complexity. The model-free approach we compare with is the rllab implementation of trust region policy optimization (TRPO) [JL], which has obtained state-of-the-art results on MuJoCo tasks. We added Fig. 9 in Appendix D of the revised paper to compare our results with the TRPO method and the random shooting method [Nagabandi et al].
TRPO suffers from very high sample complexity and often requires millions of samples to achieve good performance. But here we only provided very few rollouts (since for physical system control, sample collection might be limited by real-time operations, or it is difficult to explore the whole design space because suboptimal actions would lead to disastrous results); therefore, the performance of ICNN is much better than that of TRPO. Similar to the model-based, model-free (MBMF) approach mentioned in [Nagabandi et al], the learned controller via ICNN could also provide good rollout samples and serve as a good initialization point for a model-free RL method. \n\n\nReferences\n[JL] Schulman, John, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. \"Trust region policy optimization.\" In International Conference on Machine Learning, pp. 1889-1897. 2015.\n", "We are grateful to the reviewer for carefully reading our paper and providing many helpful suggestions and comments that have significantly improved the revised version. We also appreciate the opportunity to clarify our presentation of theorems, figures and experiment setups, as well as some unclear writing in the manuscript. We agree with the reviewer that the original manuscript contained several parts that were not clear and some typos, and this resulted in some confusion about the technical results. Overall, we note that the results of the paper remain unchanged: deep (recurrent) neural networks can be made input convex and effectively used in control of complex systems. Based on the comments made by the reviewers, we have made the figures more illustrative, and the formulations and the theorems more rigorous. Our implementation of the algorithms is consistent with the updated manuscript, so we stress that these changes are made to clarify the writing of the paper and all of the simulation and numerical results remain unchanged. Below we provide a point-by-point account of the comments. \n\n-\tConcerns on the correctness of Proposition 2\n\nWe thank the reviewer for bringing up this important question and agree this was a point of confusion in our original manuscript. In Proposition 2 of the original submission, we stated that we only need to keep V and W non-negative and this will result in a network that is convex from input to output. This is true for a single step, but as the reviewer correctly points out, negative weights cannot go through composition and maintain convexity. Actually, Proposition 1 and Proposition 2 in our original submission give the sufficient condition for a network to be input-convex for a single step; when used for control purposes, these network structures (both ICNN and ICRNN) should be modified to their equivalent variants: restricting all weight matrices to be non-negative (element-wise) and augmenting the input to include its negation. Such network structure variants and the “duplication trick” have been mentioned in Section 3.1, Sketch of proof for Theorem 1, in our original manuscript: “We first construct a neural network with ReLU activation functions and both positive and negative weights, then we show that the weights between different layers of the network can be restricted to be nonnegative by a simple duplication trick. Specifically, since the weights in the input layer and passthrough layers can be negative, we simply add a negation of each input variable (e.g. both x and −x are given as inputs) to the network”.
We apologize for not making this point clear and for the notational confusion in our previous manuscript. To clarify, for both the MuJoCo locomotion tasks and the building control experiments, we used the modified input-convex network structures with all weights non-negative and input negation duplicates instead of the conventional input-convex structure for a single step (but these two structures can be equivalently transformed). \nIn the revised paper, we explicitly explain the sufficient conditions for ICNN/ICRNN variants that can be used for control purposes. We also update Propositions 1 and 2 to clear up the confusion about convexity in control settings. Also, we have updated Figure 2 accordingly to demonstrate the modified ICNN/ICRNN structure, input duplication, operations and activation functions used for our control settings. For all the empirical experiments, we will release our code after the openreview process for result validation, which demonstrates that the proposed control framework via input-convex networks obtains both good identification accuracy and better control performance compared with regular neural networks or linear models. \n\n\n", "- More information should be added to the last paragraph of Section 1 as it's claimed that the representational power of ICNNs and \"a nice mathematical property\" help improve the computational time of the method.\n\nWe have added the following discussion to the end of Section 1: Our method enjoys computational efficiency in two respects. Firstly, as stated in Theorem 2, compared to model-based methods which often employ piecewise linear functions, we can train ICNN or ICRNN (with exponentially fewer variables/pieces) using off-the-shelf deep learning packages such as PyTorch and Tensorflow, while optimal control can be achieved by solving a convex optimization problem; secondly, compared to model-free (deep) reinforcement learning algorithms, which usually take an end-to-end setting and require lots of samples and long training time, our model learns and controls based on the system dynamics – this can be much more sample efficient. There is also an ongoing debate on model-free versus model-based reinforcement learning algorithms [BR], and we look forward to incorporating learning into control tasks with optimality guarantees.\n\n-\tWhat method are you using to solve the control problems in Eq (5) and (6)?\n\nIn Eq (5) and (6), since both the objectives and the constraints contain neural networks, we set up our networks with Tensorflow and solve the control problem using a projected gradient descent method with adaptive step size. The gradients can be calculated via existing modules in Tensorflow for backpropagation. In both cases the optimization problems can be solved fairly fast and we observe convergence after around 20 iterations. As shown in Figure 4 (c) of the original manuscript, the control actions outputted by solving (5) are stable and much better than the results achieved from a regular neural network + MPC (which has no optimality guarantee). In the revised paper, we have included more details on the solution algorithm of Eq. (5) in the last paragraph of Section 2.\n\n-\tWhy does Figure 6 of [Nagabandi et al.] have significantly higher rewards for their method, even in the K=5 case?\n\nWe thank the reviewer for carefully proofreading the figure, and we also thank Nagabandi et al. for open-sourcing their code. We re-ran their simulations using all the default parameters and observed that the rewards for their cases are all around 10x less than shown in their Figure 6.
We also refer to [KC], where the rewards on the swimmer case are significantly smaller than those in [Nagabandi et al]. We are not sure what is causing the difference in performance, although we believe there may be differences in hyperparameter settings and random starting points between our results and [Nagabandi et al]’s. We made sure the comparisons in Figure 3 of our paper use the same hyperparameters and training data, differing only in the control methods.\n\n-\tThe experimental results in Figure 5\n\nWe thank the reviewer for bringing up this interesting point. In the toy example on classifying points in a 2D grid, we used a 2-layer neural network for both the conventional neural network and ICNN, with 200 neurons in each layer. We simulate the case when training data is small (100 training samples). We observe that the results given by conventional neural networks are quite unstable across different random seeds and are prone to overfitting. On the contrary, by adding constraints on model weights to train the ICNN, the fitting result is better using this small training set, while the learned landscape is also beneficial to the optimization problem. \nIn the revised manuscript, we added more details on the model and training setup, the learning task, and the optimization task to address the confusion.\n\nReferences\n\n[MP] Alessandro Magnani and Stephen P Boyd. “Convex piecewise-linear fitting”. Optimization and Engineering, 10(1):1–17, 2009. \n\n[BR] Recht, Benjamin. \"A tour of reinforcement learning: The view from continuous control.\" arXiv preprint arXiv:1806.09460 (2018).\n\n[KC] Kurutach T, Clavera I, Duan Y, Tamar A, Abbeel P. “Model-Ensemble Trust-Region Policy Optimization”. arXiv preprint arXiv:1802.10592. 2018 Feb 28.\n", "-\tA stronger and more formal argument should be used to show that Equation (5) is a convex optimization problem as claimed.\n\nWe thank the reviewer for this helpful suggestion and agree that a more rigorous argument should be used to show that Equation (5) is a convex optimization problem. In the revised manuscript, we update Equation (5) to reflect the fact that we are using input-convex neural networks with all non-negative weights and the input negation trick. Equations (5d) and (5e) are added in the revised formulation, which denote the augmented input variables and the consistency condition between u and its negation v.\n\nThen, in order to show Equation (5) is a convex optimization problem, we need both the objective function and the constraints to be convex. Specifically,\n(i). The objective function J(\hat{x},y) (Equation (5a)) is convex and non-decreasing with respect to \hat{x} and y;\n(ii). The functions f and g are parameterized as ICRNNs with all weight matrices non-negative, which ensures f and g are convex and non-decreasing. Therefore, rolling out over time, the compositions remain convex with respect to the inputs.\n(iii). The consistency constraint (5e) that one variable is the negation of the other is linear, and therefore preserves the convexity of the optimization problem.\n\nWe have clarified this discussion in the revised manuscript.\n\n-\tConvexity of Equation (6)\n\nAs a similar case to the optimization problem in Equation (5), the system dynamics is governed by Equation (6b). By restricting all weight matrices in the ICNN to be non-negative and expanding the inputs, the MPC formulation for the MuJoCo case is convex with respect to the control action vectors at different times. As shown in Fig.
3, such convexity properties also guaranteed that our results on a series of control tasks outperformed the current neural-network-based dynamical models. \n\n\nResponse to reviewer’s other comments:\n-\tFigure 1 hides too much information\nWe agree and have revised Figure 1 to include more information about the problem setup related to the modeling objective, control objective and constraints. In the left plot of the revised Figure 1, we describe how an input convex neural network can be trained to learn the system dynamics. Then the right plot demonstrates the overall control framework, where we solve a convex predictive control problem to find the optimal actions. The optimization steps are also based on objectives and dynamics constraints represented by the trained networks.\n\n-\tThe theoretical results in Section 3 seem slightly out-of-place within the broader context of this paper\n\nWe thank the reviewer for this question. The key idea of this section is that by making the neural network convex from input to output, we are able to obtain both good predictive accuracy and tractable optimization problems. There are two main results presented in Section 3: Theorem 1 is about the representation capacity of ICNN (it can represent all convex functions) and Theorem 2 is on the representation efficiency of ICNN (it can be exponentially more efficient than conventional convex piecewise linear fitting [MP]). \n\nOur proposed control framework involves two stages: 1) using ICNN/ICRNN for system identification; 2) designing an optimal controller by solving a predictive control problem. For the system identification stage, obviously, one benefit of using input convex networks (instead of conventional neural networks) is their computational tractability and optimality guarantee for the subsequent optimization stage. However, besides tractability, reasonable representation capacity to model complex relationships is also desired of a system identification model. Theorems 1 and 2, in this respect, demonstrate such representability and efficiency of ICNN. In the revised manuscript, we have added the above discussion at the beginning of Section 3 to improve the coherence of the paper.\n", "This paper proposes to use input convex neural networks (ICNN) to capture a complex relationship between control inputs and system dynamics, and then use the trained ICNN to form a model predictive control (MPC) problem for control tasks.\nThe paper is well-written and bridges the gap between neural networks and MPC.\nThe main contribution of this paper is to use ICNN for learning system dynamics. ICNN is a neural network that only contains non-negative weights. Thanks to this constraint, ICNN is convex with respect to an input; therefore an MPC problem with an ICNN model and additional convex constraints on control inputs is a convex optimization problem.\nWhile it is not easy to solve such a convex problem, it has a global optimum, and a gradient descent algorithm will eventually reach such a point. It should also be noted that a convex problem has robustness with respect to the initial starting point and the ICNN model itself as well. The latter is pretty important, since training an ICNN (or NN) is a non-convex optimization, so the parameters in a trained ICNN (or NN) model can vary depending on the initial random weights and learning rates, etc.
Since a convex MPC has some robustness (or margin) to errors or deviations in the system dynamics, while a non-convex MPC does not, using ICNN can also stabilize the control inputs in MPC.\nOverall, I believe that using ICNN to form a convex MPC is a sample-efficient, non-intrusive way of constructing a controller with unknown dynamics. Below are some minor suggestions to improve this paper.\n\n-- Page 18, there is Fig.??. Please fix this.\n-- In experiments, could you compare the result with a conventional end-to-end RL approach? I know this is not a main point of this paper, but it can be more compelling.", "The paper proposes neural networks which are convex in their input data and applies them to control problems. These types of networks, constructed based on either MLP or RNN, are shown to have similar representation power as their non-convex versions, and are thus potentially able to better capture the dynamics behind complex systems compared with linear models. On the other hand, convexity in the inputs brings much convenience to the later optimization part, because there are no worries about global/local minima or escaping saddle points. In other words, convex but nonlinear provides not only enough search space, but also fast and tractable optimization. The compromise here is the size of memory, since 1) more weights and biases are needed to connect inputs and hidden layers in such nets and 2) we also need to store the negative parts of a portion of the weights. \n\nEven though the idea of convex networks is not new, this work is novel in extending input convex RNNs and applying them to dynamic control problems. As the main theoretical contribution, Theorem 2 shows that to have the same representation power, input convex nets use a polynomial number of activation functions, compared with an exponential number when using a set of affine functions. Experiments also show such effectiveness. The paper is clearly and nicely written. These are the reasons I suggest acceptance.\n\n\nQuestions and suggestions:\n\n1) For Lemma 1 and Theorem 1, I wonder whether similar results can be established for non-convex functions. Intuitively, it seems that as long as we assume Lipschitz continuity, we can always approximate a function by a maximum of many affine functions, whether it is convex or not. Is this right, or is something missing?\n\n2) In the main paper, all experiments were aimed to show that ICNN and ICRNN have good accuracy, but not that they are easier to optimize due to convexity. In the abstract, it is mentioned \"... using 5X less time\", but I can only see this in the appendix. A suggestion is to at least describe some results on the comparison of training time in the main paper.\n\n3) In Appendix A, it seems the NN is not trained very well as shown in the left figure. Is this because the number of parameters of the NN is restricted to be the same as in ICNN? Does training on both spend the same resources, i.e., number of epochs? Such descriptions are necessary here.\n\n4) In Table 2 in the appendix, why does the running time of ICNN increase by an order of magnitude for large H in the Ant case?\n\n\nTypos:\n\tPage 1 \"simple control algorithms HAS ...\"\n\tPage 7 paragraph \"Baselines\": \"Such (a) method\".\n\tIn the last line of Table 2, 979.73 should be bold instead of 5577.\n\tThere is a ?? in appendix D.4.\n\t\n", "This paper proposed a powerful tool for an important category of model-based control systems. For model-based control, there are usually two steps: 1. modeling the system as accurately as you can, and 2.
Optimize over the fitted system to find the best control strategy. It is known that convex optimization is always tractable, so if the true system is convex, or can be almost exactly regressed by a convex function, we can make use of this. However, for modeling complex systems, where neural networks have become more popular, there is no guarantee that step 1 outputs a convex system -- even if the system is convex, we do not know whether the model is convex unless we can prove that they are close enough, which is usually difficult. So papers such as https://arxiv.org/abs/1708.02596 use a non-convex model and have to search for the control strategy by gridding and testing the entire space almost exhaustively.\n\nThis paper proposes an NN structure which is based only on ReLU, but guarantees a convex model of the system. Compared with maximum piecewise linear modeling, it only introduces a fixed two-piece linear activation module, but dramatically decreases the number of parameters from exponential to polynomial, which makes it realizable.\n\nBut if it's true, now something interesting might come up. The authors show that even for seemingly hard scenarios including MuJoCo, attempting a convex model performs well, and its optimizer corresponds to a good control scheme in the true problem. It likely means that, despite the impossibility of showing the model is everywhere accurate, the model well sketches the landscape of the true loss around its minimizer (where it is probably locally convex) and along the trajectory of optimization iterates, which is where people are interested. I'm not sure if that's true. NNs always surprise people, but it's definitely worth rethinking and experimenting, based on the results on such complex tasks in this paper." ]
[ -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, 8, 7, -1 ]
[ -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, 4, 4, -1 ]
[ "BkgQZvtByN", "Hke3H7li1N", "S1eqHIJyCm", "iclr_2019_H1MW72AcK7", "iclr_2019_H1MW72AcK7", "rJxXxUZ92X", "iclr_2019_H1MW72AcK7", "H1eRNOBqn7", "S1ljLTYjnm", "HklIprkJRm", "B1e8JrJ1Am", "iclr_2019_H1MW72AcK7", "iclr_2019_H1MW72AcK7", "iclr_2019_H1MW72AcK7" ]
iclr_2019_H1MgjoR9tQ
CBOW Is Not All You Need: Combining CBOW with the Compositional Matrix Space Model
Continuous Bag of Words (CBOW) is a powerful text embedding method. Due to its strong capabilities to encode word content, CBOW embeddings perform well on a wide range of downstream tasks while being efficient to compute. However, CBOW is not capable of capturing the word order. The reason is that the computation of CBOW's word embeddings is commutative, i.e., embeddings of XYZ and ZYX are the same. In order to address this shortcoming, we propose a learning algorithm for the Continuous Matrix Space Model, which we call Continual Multiplication of Words (CMOW). Our algorithm is an adaptation of word2vec, so that it can be trained on large quantities of unlabeled text. We empirically show that CMOW better captures linguistic properties, but it is inferior to CBOW in memorizing word content. Motivated by these findings, we propose a hybrid model that combines the strengths of CBOW and CMOW. Our results show that the hybrid CBOW-CMOW-model retains CBOW's strong ability to memorize word content while at the same time substantially improving its ability to encode other linguistic information by 8%. As a result, the hybrid also performs better on 8 out of 11 supervised downstream tasks with an average improvement of 1.2%.
accepted-poster-papers
This paper presents CMOW—an unsupervised sentence representation learning method that treats sentences as the product of their word matrices. This method is not entirely novel, as the authors acknowledge, but it has not been successfully applied to downstream tasks before. This paper presents methods for successfully training it, and shows results on the SentEval benchmark suite for sentence representations and an associated set of analysis tasks. All three reviewers agree that the results are unimpressive: CMOW is no better than the faster CBOW baseline on most tasks, and the combination of the two is only marginally better than CBOW. However, CMOW does show some real advantages on the analysis tasks. No reviewer has any major correctness concerns that I can see. As I see it, this paper is borderline, but narrowly worth accepting: As a methods paper, it presents weak results, and it's not likely that many practitioners will leap to use the method. However, the method is so appealingly simple and well known that there is some value in seeing this as an analysis paper that thoroughly evaluates it. Because it is so simple, it will likely be of interest to researchers beyond just the NLP domain in which it is tested (as CBOW-style models have been), so ICLR seems like an appropriate venue. It seems like it's in the community's best interest to see a method like this be evaluated, and since this paper appears to offer a thorough and sound evaluation, I recommend acceptance.
train
[ "Hyld2_TykV", "H1e3m8gvRm", "Byg69SmWhQ", "H1eTBddx0X", "SJe4Q3-x07", "BkxNis-xC7", "rJl-95bx0X", "Bkgah1-i37", "rkxCqDVshm" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Dear Reviewer,\n\nDue to its brevity, your comment leaves a lot of room for interpretation, and we are not sure if we understood your concerns correctly. Nevertheless, we would like to address them in the following.\n\nTo our understanding, your main concerns with our paper now is that\n\n1. We study a problem that is too “specific”, i.e., not of interest for a broad audience at ICLR.\n2. Our contribution is too “specialized”, i.e., not of large value for the considered problem.\n3. The technique is too “specific”, i.e., not well studied and thus does not integrate well with common approaches.\n\nWe strongly disagree with you on all three of these points. \n\n1.\nSentence representation learning has been widely studied in recent years, mainly because it has many important applications. In our paper alone we consider more than 10 supervised downstream tasks and several unsupervised downstream tasks.\nThe importance of this topic for ICLR can also be seen from the fact that more than a dozen submissions have “Sentence embedding”, “sentence representation”, or “sentence encoding” in their title alone!\nObviously, being concerned with unsupervised representation learning for NLP, our paper is also very well in line with ICLR 2019’s CfP.\n\n2.\nOur contribution is to introduce order awareness to efficient, linear word embedding based models. Given that semantics of natural language are inherently order dependent, this has considerable consequences for the expressiveness of the resulting model. \nHence, our paper can not be considered a “specialized” contribution.\n\n3.\nCMOW is based on the Compositional Matrix Space Model, which is a rather old idea that indeed has not been studied much. However, due to its conceptual similarity to CBOW, some knowledge is transferable, such that it can be trained in a similar fashion as CBOW. \n\nOur hybrid CBOW-CMOW model extends and improves upon CBOW, which is arguably the most popular baseline for sentence representation. It consists of a simple concatenation of CBOW and CMOW at the embedding level. As such, CMOW can be integrated with existing CBOW approaches easily.\n\nTo get a word order aware text embedding model, the obvious choice would have been some kind of RNN. This would probably be considered less “specialized” or “specific”. But keep in mind that science demands not only pushing the limits of already well-established techniques, but also following underexplored research paths. Considering the reviewer guidelines, https://iclr.cc/Conferences/2019/Reviewer_Guidelines , ICLR seems to be a place that welcomes these efforts.\n\nAlthough we would appreciate a higher rating, we would like to thank you for advocating our paper!\n", "I understand the intent/value of doing controlled experiments. While I said the main weakness was weak experimental results, now I think a bigger issue is the impact of the work on the broader ICLR community. It is indeed a rather specialized contribution about a specific problem and technique, so while I like the paper I'm a bit hesitant to advocate it more strongly. Hence I'll keep my rating. ", "The main contribution of this paper in practice seems to be a way to initialize the Continuous Matrix Space Model so that training actually converges, followed by a slightly different contrastive loss function used to train these models. 
The paper explores the pure matrix model and a mixed matrix / vector model, showing that both together improve on simpler methods on many benchmark tasks.\n\nMy main concern is that the chained matrix multiplication involved in this method is not substantially simpler than an RNN or LSTM sentence encoding model, and there are no comparisons of training and inference cost between the models proposed in this paper and conceptually simpler RNNs and LSTMs. The FastSent paper, used here as a baseline, does compare against some deep models, but they choose far more complex baselines such as the NMT encoding, which is trained on a very different loss function. Indeed the models proposed here do not seem to outperform fasttext and fastsent despite having fairly similar computational costs.\n\nI think this paper could use a little more justification for when it's appropriate to use the method proposed here versus more straightforward baselines.", "After reading the author response I'm revising my scores upward.", "Dear reviewer,\n\nThank you for your helpful comments. From my understanding, your main critiques are:\n\n1. “No comparison in terms of encoding speed with RNNs”\n2. “Our model does not yield as good performance as fastText and fastSent”\n\nLet me respond to these points one by one.\n\n1.\nWe did not compare against RNNs in terms of encoding speed, because it is already clear from the corpus of related work that embeddings from RNNs are much slower to compute than CBOW embeddings (see for instance Hill et al. (2016), to which you also refer). Since we report that CMOW is approximately as fast as CBOW, we thought that this was clear, i.e., that our method is much more efficient than RNNs.\n\nHowever, it seems that this is not clear from our paper. Thus, we performed some measurements of our own and added the results to the paper (last paragraph of the Discussion section). We had already reported the encoding speeds of CBOW and CMOW at test time (61k and 71k sentences per second, respectively). We also added the results of an Elman RNN in order to show that CMOW is substantially faster: In our experiments, the Elman RNN encodes 12k sentences per second, which is 5 times slower than our CMOW encoder. This corresponds almost exactly to the results also observed by Hill et al. at test time.\n\nPlease note, CMOW and CBOW are based on matrix multiplication and addition, respectively, which are associative operations. As such, CMOW and CBOW have substantial parallelization capacities: For a sequence of length n, only log(n) sequential steps are required, and the rest can be computed in parallel. On the other hand, an RNN is not associative and cannot be parallelized in the same manner: It requires n sequential steps.\n\n2.\nFirst, I would like to point out that our Hybrid method DOES outperform the results from fastSent on all supervised datasets that they report results on. On TREC and MRPC, the differences are even as large as 14.1% and 8.3% improvement, respectively.\n\nHowever, the training settings are too different to be considered fair, which we point out in the paper repeatedly: FastSent is trained on a corpus that is three times smaller (1B tokens vs 3B tokens in our case).\nIt is important to understand that we do not consider fastSent as a baseline to directly compare to.
We report their results merely to show that our methods (which perform better than fastSent) produce embeddings that are reasonably useful.\nThe same holds for fastText: This was trained on a much larger corpus than our methods (600B tokens as opposed to 3B tokens) and its vocabulary has 2M words as opposed to 30k in our case. Hence, again, this comparison is by no means fair. \nWe perform a controlled study which allows us to identify exactly where the differences in performance come from. Hence, the only direct baseline is the CBOW model from our paper.\n\nHill et al. (2016) : Learning Distributed Representations of Sentences from Unlabelled Data, NAACL 2016", "Dear reviewer,\n\nThanks for your review! To my understanding, your main concerns with our paper are:\n\n1. “The improvements of the Hybrid model over CBOW are not large enough”\n2. “There are other, more powerful models (RNNs) that achieve much better results, so there is not enough justification.”\n\nLet me address these issues one at a time:\n\n1.\nWe conducted a study in the field of learning universal sentence embeddings. Obviously, an embedding that doesn’t have a notion of word order should not be considered \"universal\". The goal of our research is thus to push the limits of what simple word aggregation methods are capable of encoding. Finding some empirical evidence, Henao et al. (2018) hypothesize that the main difference between simple word embedding methods and RNNs may be their inability to capture word order.\n\nWe successfully propose a way to diminish that difference. Our hybrid CBOW-CMOW model is not only able to capture word order information, it scores 8% better on average on the linguistic probing tasks than CBOW. Even if we disregard the benefit from BShift, the improvement is still large (~4%). From the perspective of learning linguistically informed universal sentence embeddings, this is an important result, especially at a conference that is all about learning representations.\n\nIt is true that the results on linguistic probing tasks do not transfer to the same extent to the downstream tasks, achieving an average improvement of \"only\" 1.2%. We have added this in the revised version of the paper. \nWe evaluate our models on the SentEval benchmark. This framework is the de facto standard for evaluating sentence embeddings, and thus we should evaluate our models this way as well. Most tasks in SentEval depend heavily on word content memorization (Conneau et al., 2018). Thus, the selection of downstream tasks rather disfavors our model, since it improves in every aspect but Word Content memorization.\nRecently, doubt has been cast repeatedly on whether the selection of tasks in SentEval is sufficient to test the generality of sentence embeddings (“Anonymous ICLR Submission”, 2018), especially their compositionality (Dasgupta et al., 2018).\n\nIn summary, considering the strong results on linguistic probing tasks, and the nature of the SentEval framework, we believe that the results obtained by our hybrid CBOW-CMOW model are already strong evidence that our method produces more general, robust sentence embeddings.\n\n2.\n\nThe research community in sentence embedding learning has paid a lot of attention to baselines based on word embedding aggregation methods (such as the one presented in this paper) that are conceptually simple, e.g., Henao et al. (2018), Pagliardini et al. (2018), Rueckle et al. (2018), including important work presented at ICLR (Wieting et al. (2016), Arora et al.
(2017)).\nThe reasoning is two-fold: i) Aggregated word embeddings are computationally inexpensive compared to RNNs (see Hill et al. (2016), and the measurements in our work). ii) Pushing the limits of conceptually simple encoders helps to identify the benefit introduced by more sophisticated encoders, which has also been a recurring topic of interest (Adi et al. (2016), Conneau et al. (2018), Zhu et al. (2018), Anonymous (2018)).\nOur paper is clearly motivated by reason i), since our method is computationally as inexpensive as CBOW. It is also motivated by reason ii): The conceptual difference between CBOW and CMOW boils down to using matrix multiplication instead of addition, followed by simple adaptations to the training procedure. Yet, these changes substantially improve the model's ability to learn linguistic properties such as word order, which were formerly left up to more sophisticated RNNs.\n\nAdi et al. (2016) : Fine-grained Analysis of Sentence Embeddings Using Auxiliary Prediction Tasks, ICLR 2017\nAnonymous (2018) : No Training Required: Exploring Random Encoders for Sentence Classification. URL: https://openreview.net/forum?id=BkgPajAcY7 , ICLR 2019 Submission\nArora et al. (2017) : A Simple But Tough-to-Beat Baseline for Sentence Embeddings, ICLR 2017\nConneau et al. (2018): What you can cram into a single vector, ACL 2018\nHenao et al. (2018) : Baseline Needs More Love: On Simple Word-Embedding-Based Models and Associated Pooling Mechanisms, ACL 2018\nHill et al. (2016) : Learning Distributed Representations of Sentences from Unlabelled Data, NAACL 2016\nPagliardini et al. (2018) : Unsupervised Learning of Sentence Embeddings using Compositional n-Gram Features, NAACL 2018\nRueckle et al. (2018) : Concatenated Power Mean Word Embeddings as Universal Cross-Lingual Sentence Representations, arXiv:1803.01400\nWieting et al. (2016) : Towards Universal Paraphrastic Sentence Embeddings, ICLR 2016\nZhu et al. (2018) : Exploring Semantic Properties of Sentence Embeddings", "Dear reviewer,\n\nThank you for your comments! To my understanding, your main concerns with our paper are the following:\n\n1. “fastText embeddings are clearly better than our approach.”\n2. “The improvements of the Hybrid model over CBOW are small”.\n\nI would like to address these concerns in the following.\n\n1.\nWe aim at conducting a controlled study, where we have full control over the independent variables. This allows us to precisely measure the effect of our changes/extensions to the CBOW model. Therefore, our baseline is the CBOW we trained ourselves, not fastText or fastSent. We report the scores of fastText and fastSent merely to show that our models produce useful embeddings and are therefore worth studying in the first place. FastText and FastSent are NOT the baselines we compare against.\n\nLet me elaborate on why the scores we report for fastText are not comparable to our approach. The scores achieved by fastText are based on the implementation by Mikolov et al. (2018). Like our baseline, fastText is based on the CBOW objective, i.e., predicting the center word from the sum of its context word embeddings. However, their model is trained on a much larger corpus (CommonCrawl, 630B tokens, vs UMBC, 3B tokens), and with a much larger vocabulary (2M words vs. 30,000 in our case). \n\nFurthermore, the authors of fastText employ many tricks to enhance the quality of their models (word subsampling, subword information, phrase representations, n-gram representations, etc.).
For simplicity, I focus on the essential part of our models, i.e., the composition function, in order to conduct a fair and scientifically robust comparison of the performance of CBOW with my novel CMOW and finally the hybrid CBOW-CMOW model. This makes a direct comparison with fastText very difficult, if not entirely unfair.\n\n2.\nMy paper is concerned with learning universal sentence embeddings with simple word embedding methods. Averaging word embeddings already shows good performance on downstream tasks. However, one cannot really expect to obtain a \"universal\" sentence embedding from an encoder that is word-order agnostic like CBOW. In fact, finding some empirical evidence, Henao et al. (2018) recently hypothesized that word-order sensitivity may be the main difference between simple word aggregation methods and RNNs.\nWe successfully propose a method to diminish this difference.\nOur hybrid CBOW-CMOW model is not only able to capture word order information like RNNs. It also scores on average 8% better on the linguistic probing tasks than CBOW! Even if we disregard the benefit from BShift, the improvement is still large (~4%). From the perspective of learning linguistically informed universal sentence embeddings, this is an important result.\n\nIt is true that the results on linguistic probing tasks do not transfer to the same extent to the downstream tasks, achieving an average improvement of \"only\" 1.2%. We have added this in the revised version of the paper. \nWe evaluate our models on the SentEval benchmark. This framework is the de facto standard for evaluating sentence embeddings, and thus we should evaluate our models this way as well. Most tasks in SentEval depend heavily on word content memorization (Conneau et al., 2018). Thus, the selection of downstream tasks rather disfavors our model, since it improves in every aspect but Word Content memorization.\nRecently, doubt has been cast repeatedly about whether the selection of tasks in SentEval is sufficient to test the generality of sentence embeddings (Anonymous, 2018), especially their compositionality (Dasgupta et al., 2018).\n\nIn summary, considering the strong results on linguistic probing tasks, and the nature of the SentEval framework, we believe that the results obtained by our hybrid CBOW-CMOW model are already strong evidence that our method produces more general, robust sentence embeddings.\n\nAnonymous (2018): No Training Required: Exploring Random Encoders for Sentence Classification. URL: https://openreview.net/forum?id=BkgPajAcY7\nConneau et al. (2018): What you can cram into a single vector, ACL 2018\nDasgupta et al. (2018): Evaluating Compositionality in Sentence Embeddings, arXiv:1802.04302\nHenao et al. (2018): Baseline Needs More Love: On Simple Word-Embedding-Based Models and Associated Pooling Mechanisms, ACL 2018\nMikolov et al. (2018): Advances in Pre-Training Distributed Word Representations, LREC 2018\n
They are able to do this using their objective and an initialization strategy where the matrix embeddings are set to the identity matrix with some Gaussian noise added.\n\nThe results of this paper are its main weakness. I did enjoy reading the paper, and it is nice to see some results using matrices as embeddings and matrix multiplication as a compositional function. They include a nice analysis of how word order is captured by these CMOW embeddings while CBOW embeddings capture the word content, but it doesn't seem to make much of a difference on the downstream tasks where CBOW is better than CMOW and close to the performance of the hybrid combination of CBOW and CMOW.\n\nI think it's clear that their model is able to capture word information to some extent, but other models (RNNs etc.) can do this as well, that admittedly are more expensive, but also have better performance on downstream tasks. I think a stronger motivation for their method besides an analysis of some phenomena it captures and a slight improvement on some downstream tasks when combined with CBOW is needed though for acceptance. Could it be used in other settings besides these downstream transfer tasks?\n\nPROS:\n- introduced an efficient and stable approach for training CMSM models\n- Show that their model CMOW is able to capture word order information\n- Show that CMOW complements CBOW and a hybrid model leads to improved results on downstream tasks. \n\nCONS\n- The results on the hybrid model are only slightly better than CBOW. CMOW alone is mostly worse than CBOW.", "The paper presents new training schemes and experiments for a matrix-multiplicative variant of CBOW. This variant is called a CMSM (Yessenalina and Cardie, 2011; Asaadi and Rudolph, 2017) which swaps the bag of vectors for a product of square matrices for encoding context to incorporate word ordering. It seems this model has not been trained successfully before (at least with a simple approach) due to the vanishing gradient problem.\n\nThe paper's main contributions are an initialization scheme for context matrices (to I + [N(0,0.1)]) to counter the vanishing gradient problem and a modification of the CBOW objective so that the target word is drawn uniformly at random from the context window (rather than the center word). Both are shown to improve the quality of learned representations when evaluated as sentence embeddings. Concatenating CBOW and CMSM architectures is additionally helpful. \n\nI was not aware of the matrix-multiplicative variant of CBOW previously so it's possible that I don't have the expertise to judge the novelty of the approach. But the idea is certainly sensible and the proposed strategies seem to work. The main downside is that for all this work the improvements seem a little weak. The averaged fastText embeddings are clearly superior across the board, though as the authors say it's probably unfair to compare based on different training settings. But this doesn't hurt the simplicity and effectiveness of the proposed method when compared against CBOW baselines. " ]
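[Editor's note: to make the CBOW/CMOW contrast discussed in the reviews above concrete, here is a minimal NumPy sketch. It is not the authors' implementation; the vocabulary size, dimensions, and function names are my own illustrative assumptions. It only shows the two composition functions — mean of vectors vs. product of near-identity matrices (I + N(0, 0.1) initialization) — and their different sensitivity to word order.]

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, DIM = 1000, 20  # hypothetical vocabulary size and matrix side length

# CMOW parameters: one DIM x DIM matrix per word, initialized as I + N(0, 0.1)
W_cmow = np.stack([np.eye(DIM) + rng.normal(0.0, 0.1, (DIM, DIM))
                   for _ in range(VOCAB)])
# CBOW parameters: one vector per word (sized DIM*DIM so the hybrid concat matches)
W_cbow = rng.normal(0.0, 0.1, (VOCAB, DIM * DIM))

def encode(word_ids):
    """Return CBOW, CMOW, and hybrid (concatenated) sentence embeddings."""
    cbow = W_cbow[word_ids].mean(axis=0)      # order-insensitive averaging
    cmow = np.eye(DIM)
    for i in word_ids:                        # order-sensitive matrix product
        cmow = cmow @ W_cmow[i]
    cmow = cmow.ravel()
    return cbow, cmow, np.concatenate([cbow, cmow])

# Word order changes the CMOW embedding but not the CBOW embedding:
a = encode([3, 7, 42])
b = encode([42, 7, 3])
print(np.allclose(a[0], b[0]), np.allclose(a[1], b[1]))  # True False
```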
[ -1, -1, 6, -1, -1, -1, -1, 5, 6 ]
[ -1, -1, 3, -1, -1, -1, -1, 4, 4 ]
[ "H1e3m8gvRm", "rkxCqDVshm", "iclr_2019_H1MgjoR9tQ", "SJe4Q3-x07", "Byg69SmWhQ", "Bkgah1-i37", "rkxCqDVshm", "iclr_2019_H1MgjoR9tQ", "iclr_2019_H1MgjoR9tQ" ]
iclr_2019_H1eSS3CcKX
Stochastic Optimization of Sorting Networks via Continuous Relaxations
Sorting input objects is an important step in many machine learning pipelines. However, the sorting operator is non-differentiable with respect to its inputs, which prohibits end-to-end gradient-based optimization. In this work, we propose NeuralSort, a general-purpose continuous relaxation of the output of the sorting operator from permutation matrices to the set of unimodal row-stochastic matrices, where every row sums to one and has a distinct argmax. This relaxation permits straight-through optimization of any computational graph involving a sorting operation. Further, we use this relaxation to enable gradient-based stochastic optimization over the combinatorially large space of permutations by deriving a reparameterized gradient estimator for the Plackett-Luce family of distributions over permutations. We demonstrate the usefulness of our framework on three tasks that require learning semantic orderings of high-dimensional objects, including a fully differentiable, parameterized extension of the k-nearest neighbors algorithm.
accepted-poster-papers
This paper proposes a general-purpose continuous relaxation of the output of the sorting operator. This enables end-to-end training and more efficient stochastic optimization over the combinatorially large space of permutations. In the submitted version, two of the reviewers had difficulty understanding the writing. After the rebuttal and the revised version, one of the reviewers is satisfied. I personally went through the paper and found that it could be tricky to read certain parts of the paper. For example, I am personally very familiar with the Plackett-Luce model but the writing in Section 2.1 does not do a good job of explaining the model (particularly Eq. 1 is not very easy to read, same with Eq. 3 for the key identity used in the paper). I encourage the authors to improve the writing and make it a bit more intuitive to read. Overall, this is a good paper and I recommend accepting it.
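[Editor's note: since the meta-review flags the Plackett-Luce (PL) model as hard to read from the paper alone, here is a small self-contained illustration of the standard PL density — items are drawn without replacement with probability proportional to their positive scores, so a full permutation has probability prod_i s_{z_i} / sum_{j>=i} s_{z_j}. This is my own sketch of the well-known definition, not code from the paper.]

```python
import itertools

def pl_prob(z, s):
    """Probability of permutation z (a tuple of item indices) under
    the Plackett-Luce distribution with positive scores s."""
    p = 1.0
    for i in range(len(z)):
        p *= s[z[i]] / sum(s[j] for j in z[i:])
    return p

s = [9.0, 1.0, 5.0, 2.0]
probs = {z: pl_prob(z, s) for z in itertools.permutations(range(4))}
print(sum(probs.values()))        # ~1.0: PL is a proper distribution
print(max(probs, key=probs.get))  # (0, 2, 3, 1): descending-score ordering
# In the paper's 1-indexed notation this mode is [1, 3, 4, 2] = sort(s),
# which matches the sort(s) discussion in the rebuttals below.
```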
train
[ "BkgtftDpJN", "rkgvVs-q27", "SJgx2mx2p7", "SJefvGxnaX", "SyeSE1x2aX", "r1xmEny2TQ", "SyxXUGsc3Q", "Byeq6TEfn7" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "reviewed rebuttal; still support strong accept", "After responses: I now understand the paper, and I believe it is a good contribution. \n\n================================================\n\nAt a high level, the paper considers how to sort a number of items without explicitly necessarily learning their actual meanings or values. Permutations are discrete combinatorial objects, so the paper proposes a method to perform the optimization via a continuous relaxation. \n\nThis is an important problem to sort items, arising in a variety of applications, particularly when the direct sorting can be more efficient than the two step approach of computing the values and then sorting.\n\nI like both the theoretical parts and the experimental results. In the context of ICLR, the specific theoretical modules comprise some cute results (Theorem 4; use of past works in Lemma 2 and Proposition 5). possibly of independent interest. The connections to the (Gumbel distribution <--> Plackett Luce) results are also nicely used. This Gumbel<-->PL result is well known in the social choice community but perhaps not so much in the ML community, and it is always nice to see more connections drawn between techniques in different communities. The empirical evaluations show quite good results.\n\nHowever, I had a hard time parsing the paper. The paper is written in a manner that may be accessible to readers who are familiar with this (or similar) line of research, but for someone like me who is not, I found it quite hard to understand the arguments (or lack of them) made in the paper connecting various modules. Here are some examples:\n\n- Section 6.1 states \"Each sequence contains n images, and each image corresponds to an integer label. Our goal is to learn to predict the permutation that sorts these labels\". One interpretation of this statement suggests that each row of Fig 3a is a sequence, that each sequence contains n=4 images (e.g., 4 images corresponding to each digit in 2960), and the goal is to sort [2960] to [0269]. However, according to the response of authors to my earlier comment, the goal is to sort [2960,1270,9803] to [1270,2960,9803]. \n\n- I did not understand Section 2.2.\n\n- I would appreciate a more detailed background on the concrete goal before going into the techniques of section 3 and 4.\n\n- I am having a hard time in connecting the experiments in Section 6 with the theory described in earlier sections. And this is so even after my clarifying questions to the authors and their responses. For instance, the authors explained that the experiments in Section 6.1 have \\theta as vacuous and that the function f represents the cross-entropy loss between permutation z and the true permutation matrix. Then where is this true permutation matrix captured as an argument of f in (6)? Is the optimisation/gradients in (7) over s or over the CNN parameters?\n", "We thank the reviewers for their helpful comments! In light of these comments, we have revised the paper. Here is a summary of changes:\n- Sections 3, 4: Motivation and background for the content in these sections have been stated more explicitly. Figure 1 has been added to supplement Section 3.\n- The experimental setup in Section 6.1 and illustration in Figure 4 (which was Figure 3 in the previous version) have been revised to lend more clarity.\n- Appendix E has been added to connect the experiments more concretely with the theory. 
This appendix includes the precise objective functions for each experiment.\n- A few additional experiment analysis results (Figure 8, Table 4, Table 5) have been added.", "Thanks for reviewing our paper and the helpful feedback! We have addressed your questions and comments below. \n\nQ1. Experimental setup for Section 6.1 and Figure 3.\nA1. We can see the source of confusion now, sorry about that! We have edited the description in Section 6.1 to clarify this point and replaced what was previously Figure 3 with a more illustrative Figure 4 and a descriptive caption. The reviewer’s understanding of our last response is correct --- we have a sequence of n large-MNIST images (where each large-MNIST image is a 4-digit number) and the goal is to sort the input sequence. In Figure 4 for example, the task is to sort the input sequence of n=5 images given as [2960, 1270, 9803, 1810, 7346] to [1270, 1810, 2960, 7346, 9803].\n\nQ2. Section 2.2.\nA2. In Section 2.2, we intend to provide background on stochastic computation graphs (SCG). SCGs are a widely used tool for visualizing and contrasting different approaches to stochastic optimization, especially in the context of stochastic optimization with the backpropagation algorithm since the forward and backward passes can be visualized via the topological sorting of operators in the SCG (e.g., Figures 1, 3). Due to the lack of space, we could not include a detailed overview of stochastic computation graphs and pointed the readers to the canonical reference of Schulman et al., 2015. The key takeaway is stated in the last paragraph of Section 2.2 --- a sort operator is non-differentiable w.r.t. its inputs and including it in SCGs necessitates relaxations. For a more detailed exposition of SCGs, we have included an illustrative example in Figure 6 that grounds the terminology introduced in Section 2.2. \n\nQ3. Concrete goal in Sections 3 and 4.\nA3. At its core, this work seeks to include general-purpose deterministic nodes corresponding to sort operators (Section 3) and stochastic nodes corresponding to random variables defined over the symmetric group of permutations (Section 4) in computational pipelines (represented via a stochastic computation graph). Following up on the reviewer’s feedback, we have significantly expanded the motivating introductions for Sections 3 and 4 to clearly state the goal beforehand and how we intend to achieve it.\n\nQ4. Connecting theory with experiments. Where is this true permutation matrix captured as an argument of f in (6)? Is the optimisation/gradients in (7) over s or over the CNN parameters?\nA4. Following up on the reviewer’s feedback, we have made the following edits in the revised version:\n- Revised Figures 4, 5 (which were Figures 3 and 4 in the old version) to clearly indicate the scores “s” for each experiment.\n- Included a new Appendix E which formally states the loss functions optimized by the Sortnet approaches and explains the semantics of each term for all three experiments.\n\nRegarding the specific follow-up questions with respect to Equations 7 and 8 (which were previously Equations 6 and 7 in the first version of the paper):\n- For the experiments in Section 6.1, the function f would include an additional argument corresponding to the true permutation matrix.
We did not explicitly include the ground-truth permutation as an argument to the function f in Equation 7 to maintain generality, since such objectives also arise in unsupervised settings, e.g., latent variable modeling where there is no ground-truth label. See Appendix E.1 for the precise loss function.\n- The gradients in Equation 8 are w.r.t. the scores s that parameterize a distribution q. In the experiments, the scores s are given as the output of a CNN and the optimization is over the CNN parameters. Evaluating gradients w.r.t. the CNN parameters is straightforward via the chain rule/backpropagation.\n\nPlease let us know if there is any other detail that needs further clarification!", "Thanks for reviewing our paper and the helpful feedback! We have addressed your questions and comments below. \n\nQ1. Clarity in Sections 3, 4. Connection with experiments.\nA1. Following up on the reviewer’s feedback, we have made the following edits in the revised version:\n- Edited and expanded the introductory paragraphs for Section 3 and Section 4 to ensure a smooth transition.\n- Revised Figures 4, 5 (which were previously Figures 3 and 4 in the first version of the paper) to clearly indicate the scores “s” for each experiment.\n- Included a new Appendix E which formally states the loss functions optimized by the Sortnet approaches for all three experiments. \n\nFor the specific follow-up questions in the review, we first note that Equation (7) (which was previously Equation (6) in the first version of the paper) is the general style of expressions used in the relevant literature on stochastic optimization, see e.g., Section 3 in Jang et al., 2017. These expressions are succinct, but as the reviewer points out, they need additional clarification when extended to the experiments. We hope Appendix E will help clarify these formally. For completeness, we address the two questions specifically raised by the reviewer here:\n\nIn all our experiments, we are dealing with sequences of n objects x = [x1, x2, …, xn] and trying to sort these objects for an end goal. In Section 6.1, the goal is to output the sorted permutation for a sequence of n largeMNIST images; in 6.2, the goal is to output the median value in the sequence; in 6.3, the goal is to sort a sequence of training points as per their distances to a query point for kNN classification. We now explain the notation in the context of largeMNIST experiments in Section 6.1/6.2 which share the same experimental setup and dataset; the kNN experiments in Section 6.3 follow similarly.\n\n- s=[s1, s2, …, sn] corresponds to a vector of scores, one for each largeMNIST image in the input sequence. Each score si is the output of a CNN which takes as input an image xi. The CNNs across the different largeMNIST images x1, x2, ..., xn share parameters. Note that we directly specify the vector s (and skip x as well as the CNN parameters relating x to s) in Equation (7) for brevity. In Section 4, we derived gradients of the objective w.r.t. s, which can be backpropagated via the chain rule to update the CNN parameters in a straightforward manner.\n- q is the Plackett-Luce distribution over permutations z, parameterized by scores s. \n- f is any function (that optionally depends on additional parameters \\theta) that acts over a permutation matrix P_z. In the experiments in Section 6.1, the function f is the element-wise cross-entropy loss between the true permutation matrix that sorts x and P_z.
Again for the purpose of generality, we do not explicitly include the ground-truth permutation as an argument to the function f in Equation (7) since such objectives also arise in unsupervised settings, e.g., latent variable modeling where there is no ground-truth label. \n- The parameters \\theta for specifying f as a function of P_z are optional and task-specific. In particular, the cross-entropy loss function f for experiments in Section 6.1 does not need any additional parameters \\theta. For the experiments in Section 6.2, we cannot compute a loss directly with respect to the permutation matrix P_z since we need to regress a scalar value for the median. Instead, we feed the predicted median image in the input sequence (which can be obtained by sorting x as per P_z) to a neural network (with parameters \\theta) to obtain a real-valued, scalar prediction. We then compute f as the MSE regression loss between the true median value and the value predicted by the parameterized neural network.\n- Lastly, L denotes the expected value of the objective function f w.r.t. the distribution q.\n\nPlease refer to Figures 4, 5 for the computational pipeline and Appendix E for the precise loss functions for each experiment. Let us know if there is any other detail that needs clarification!\n\nQ2. Confusing use of the phrase \"Sorting Networks\" in the title of the paper.\nA2. Thanks for pointing it out! If permitted by the conference rules, we will consider substituting ‘operators’ for ‘networks’ in the title of the final version of the paper.\n\nQ3. Page 2 -- Section 2 PRELIMINARIES -- It seems that sort(s) must be [1,4,2,3].\nA3. We believe the sort(s) expression in the paper is correct. This is because the largest element (=9) is at index 1, the second largest element (=5) is at index 3, the third largest element (=2) is at index 4 and the smallest element (=1) is at index 2. Hence, sort(s)=[1,3,4,2]^T as indicated in the paper.", "Thanks for reviewing our paper and the helpful feedback! We have addressed your questions below. \n \nQ1. How much of the improvement is attributable to the lower dimension of the parameterization? (e.g. all Sinkhorn variants have N^2 params; this has N params) Is there any reduction in gradient variance due to using fewer Gumbel samples?\nA1. Precise quantification of the gains due to the lower dimension of the parameterization alone is hard since the relaxation itself is fundamentally different from the Sinkhorn variants. In an attempt to get a handle on these aspects (n^2 vs. n parameters and doubly stochastic vs. unimodal matrices), we analyzed the signal-to-noise ratio (SNR) for the Stochastic Sortnet and Gumbel-Sinkhorn approaches with the same number of Gumbel samples (=5). Here, we define SNR as the ratio of the absolute value of the expected gradient estimates and the standard deviation. For the experiments in Section 6.1, the SNR averaged across all the parameters is shown in Figure 8. We observe a much higher SNR for the proposed approach, in line with the overall gains we see on the underlying task.\n\nQ2. More details needed on the kNN loss (uniform vs inv distance wt? which one?); and the experiment overall: what k got used in the end?\nA2. We used a uniformly weighted kNN loss for both the Sortnet approaches, while noting that it is straightforward to extend our framework to use an inverse distance weighting. Appendix E.3 includes the formal expressions for the loss functions optimized in our framework.
Furthermore, we have included new results in Table 5 which show the raw performance of Deterministic and Stochastic Sortnet for all values of k considered. \n\nQ3. The temperature setting is basically a bias-variance tradeoff (see Fig 5). How non-discrete are the permutation-like matrices ultimately used in the experiments? \nA3. That’s a great suggestion! One way to quantify the non-discreteness could be based on the element-wise mean squared difference between the inferred unimodal row-stochastic matrix and its projection to a permutation matrix, for the test set of instances. We have included these results for the sorting experiment in Table 4.\n\nPlease let us know if there are any further questions!", "This work builds on a sum(top k) identity to derive a pathwise differentiable sampler of 'unimodal row stochastic' matrices. The Plackett-Luce family has a tractable density (an improvement over previous works) and is (as developed here) efficient to sample. \n\n[OpenReview did not save my draft, so I now attempt to recover it from memory.]\n\nQuestions:\n- How much of the improvement is attributable to the lower dimension of the parameterization? (e.g. all Sinkhorn variants have N^2 params; this has N params) Is there any reduction in gradient variance due to using fewer Gumbel samples?\n- More details needed on the kNN loss (uniform vs inv distance wt? which one?); and the experiment overall: what k got used in the end?\n- The temperature setting is basically a bias-variance tradeoff (see Fig 5). How non-discrete are the permutation-like matrices ultimately used in the experiments? While the gradients are unbiased for the relaxed sort operator, they are still biased if our final model is a true sort. Would be nice to quantify this difference, or at least mention it.\n\nQuality:\nGood quality; approach is well-founded and more efficient than extant solutions. Fairly detailed summaries of experiments in appendices (except kNN). Neat way to reduce the parameter count from N^2 to N.\n\nI have not thoroughly evaluated the proofs in the appendix.\n\nClarity:\nThe approach is presented well, existing techniques are compared in both prose and as baselines. Appendix provides code for maximal clarity. \n\nOriginality:\nFirst approach I've seen that reduces parameter count for permutation matrices like this. And with tractable density. Very neat and original approach.\n\nSignificance:\nMore scalable than existing approaches (e.g.: only need N Gumbel samples instead of N^2), yields better results.\n\nI look forward to seeing this integrated into future work, as envisioned (e.g. beam search)", "In many machine learning applications, sorting is an important step, such as in ranking. However, the sorting operator is not differentiable with respect to its inputs. The main idea of the paper is to introduce a continuous relaxation of the sorting operator in order to construct an end-to-end gradient-based optimization. This relaxation is introduced as \\hat{P}_{sort(s)} (see Equation 4). The paper also introduces a stochastic extension of its method \nusing Plackett-Luce distributions and Monte Carlo. Finally, the introduced deterministic and stochastic methods are evaluated experimentally in 3 different applications: 1. sorting handwritten numbers, 2. quantile regression, and 3. end-to-end differentiable k-nearest neighbors.\n\nThe introduction of the differentiable approximation of the sorting operator is interesting and seems novel.
However, the paper is not well-written and it is hard to follow, especially from Section 4 onward. It is not clear how the theoretical results in Sections 3 and 4 are used for the experiments in Section 6. For instance:\n** On page 4, what is \"s\" in the machine learning application?\n** On page 4, in Equation 6, what are theta, s, L and f exactly in our machine learning applications?\n\nRemark: \n** The phrase \"Sorting Networks\" in the title of the paper is confusing. This term typically refers to a network of comparators applied to a set of N wires (see e.g. [1])\n** Page 2 -- Section 2 PRELIMINARIES -- It seems that sort(s) must be [1,4,2,3].\n\n[1] Ajtai M, Komlós J, Szemerédi E. An O(n log n) sorting network. In Proceedings of the Fifteenth Annual ACM Symposium on Theory of Computing, 1983, pp. 1-9. ACM\n" ]
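[Editor's note: the relaxed sort operator and the Gumbel-based Plackett-Luce sampling discussed in this thread can be sketched as follows. This is my reconstruction of the operator the paper calls \hat{P}_{sort(s)} (Equation 4) rather than the authors' code, so details may differ; the worked example uses the same s = [9, 1, 5, 2] as the Q3/A3 exchange above and reproduces sort(s) = [1, 3, 4, 2].]

```python
import numpy as np

def relaxed_sort(s, tau=1.0):
    """Continuous relaxation of the sort operator: returns a unimodal
    row-stochastic matrix whose rows peak at the sorted positions."""
    s = np.asarray(s, dtype=float)
    n = len(s)
    A = np.abs(s[:, None] - s[None, :])        # pairwise |s_j - s_k|
    B = A.sum(axis=1)                          # A_s @ 1 (row sums)
    i = np.arange(1, n + 1)[:, None]           # 1-indexed row index
    logits = ((n + 1 - 2 * i) * s[None, :] - B[None, :]) / tau
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)    # row-wise softmax

s = [9.0, 1.0, 5.0, 2.0]
P = relaxed_sort(s, tau=0.1)
print(P.argmax(axis=1) + 1)   # [1 3 4 2]: row i peaks at the i-th largest element

# Stochastic version: perturbing log-scores with Gumbel noise and sorting yields
# Plackett-Luce samples; feeding the perturbed scores through relaxed_sort gives a
# reparameterized (pathwise differentiable) relaxation of that sampled permutation.
rng = np.random.default_rng(0)
g = rng.gumbel(size=len(s))
P_sample = relaxed_sort(np.log(s) + g, tau=0.1)
print(P_sample.argmax(axis=1) + 1)  # a sampled permutation (varies with the noise)
```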
[ -1, 7, -1, -1, -1, -1, 8, 6 ]
[ -1, 3, -1, -1, -1, -1, 4, 3 ]
[ "r1xmEny2TQ", "iclr_2019_H1eSS3CcKX", "iclr_2019_H1eSS3CcKX", "rkgvVs-q27", "Byeq6TEfn7", "SyxXUGsc3Q", "iclr_2019_H1eSS3CcKX", "iclr_2019_H1eSS3CcKX" ]
iclr_2019_H1ebTsActm
Adaptivity of deep ReLU network for learning in Besov and mixed smooth Besov spaces: optimal rate and curse of dimensionality
Deep learning has shown high performances in various types of tasks from visual recognition to natural language processing, which indicates superior flexibility and adaptivity of deep learning. To understand this phenomenon theoretically, we develop a new approximation and estimation error analysis of deep learning with the ReLU activation for functions in a Besov space and its variant with mixed smoothness. The Besov space is a considerably general function space including the Holder space and Sobolev space, and especially can capture spatial inhomogeneity of smoothness. Through the analysis in the Besov space, it is shown that deep learning can achieve the minimax optimal rate and outperform any non-adaptive (linear) estimator such as kernel ridge regression, which shows that deep learning has higher adaptivity to the spatial inhomogeneity of the target function than other estimators such as linear ones. In addition to this, it is shown that deep learning can avoid the curse of dimensionality if the target function is in a mixed smooth Besov space. We also show that the dependency of the convergence rate on the dimensionality is tight due to its minimax optimality. These results support high adaptivity of deep learning and its superior ability as a feature extractor.
accepted-poster-papers
The paper extends the results in Yarotsky (2017) from Sobolev spaces to Besov spaces, stating that once the target function lies in certain Besov spaces, there exist deep neural networks with ReLU activation that approximate the target at the minimax optimal rates. Such adaptive networks can be found by empirical risk minimization, although it is not yet known whether they can be found by SGD, etc. This gap is the key weakness of applying approximation theory to the study of constructive deep neural networks for certain approximation spaces, which lacks algorithmic guarantees. It is hoped that the gap will be filled in future studies. Despite the incompleteness of approximation theory, this paper is still good solid work. Based on the fact that the majority of reviewers suggest acceptance (6,8,6), with some concerns about clarity, the paper is proposed as a probable accept.
train
[ "SJl71edYT7", "Bylje1dKaQ", "SJxcoCDYpX", "ryeP_CPYTm", "rJg7UAwKp7", "SygSECvtaX", "rJgL04k-aQ", "BJeVd9HTnQ", "S1eqfHtq3m", "r1gX-pN9hQ" ]
[ "author", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your careful reading. We have uploaded a revised version.\nThe main difference from the original one is as follows:\n\n1. Some additional text explanations are added for the definition of m-Besov space.\n2. We added a few remarks for the approximation error bound in Proposition 1 and Theorem 1.\n3. We have fixed some grammatical errors and typos.\n\nSincerely yours,\nAuthors.", "Thank you for your instructive question.\nFirst, in our analysis, the dimensionality d is a fixed constant and is not allowed to increase to infinity as the sample size n goes up. Thus, the curse of sample size does not occur for fixed d. \n\nSecond, behind the order notation, there is a term depending on d. Actually, the log(n)^d term is originally comes from (log(n)/d)^d term (more precisely, it comes from D_{K,d} defined in Sec.3.2 where K will be O(log(n))). Thus, this term slowly increases (O(n^\\epsilon) for a small constant \\epsilon) under an assumption that d <= C log(n) for a sufficiently small C. On the other hand, for the convergence rate n^{-2s/(2s + d)} on the Besov space, d is not allowed to be log(n)-order. Actually, as long as d is log(n)-order, n^{-2s/(2s + d)} does not converges to 0. This contrasts the difference of the two settings, Besov and m-Besov settings.\n\nFinally, we also would like to remark that if d is O(log(n)), then the overall convergence rate will be changed. It will depend on the coefficient hidden in the order notation of d = O(log(n)). Showing the precise bound under this condition is out of paper's scope. Thus, we would like to leave that for the future work.\n\n", "We would appreciate your insightful comments.\n\n(1)\nQ: The most critical is this: piecewise polynomials are members of the Besov spaces of interest, and ReLU networks produce piecewise linear functions. How can piecewise linear approximations of piecewise polynomial functions lead to minimax optimal rates? \n\nA: Thank you for raising a concern about an important point. As you concern, a piecewise linear approximation does \"not\" achieve the minimax optimal rate. This is because we need large number of linear pieces to approximate smooth functions. Hence, as long as we use a shallow network, we can not achieve the minimax rate with ReLU activation. However, the situation is very different if we use a deep neural network. Actually, the number of pieces \"exponentially\" grows up as the depth increases, and this property enables us to approximate the higher order B-spline bases by ReLU-DNN with a log(n)-order size. This is the main reason why we can achieve the minimax rate (up to log(n)-term) by the ReLU-DNN.\n\n(2)\nQ: A second question is ... how should I interpret claims like \"any linear/non-linear approximator\nwith fixed N -bases does not achieve the approximation error ... in some parameter settings such as 0 < p < 2 < r \"?\nWavelets provide a fixed N-basis and achieve optimal rates for Besov spaces. \n\nA: \nIndeed, the Donoho-Johnstone's wavelet shrinkage estimator achieves the minimax optimal rate in terms of the estimation error. Here, we would like to emphasize that the approximation error analysis and estimation error analysis are separated (although they are closely connected): the approximation error is evaluated by the number of bases and the estimation error is evaluated by the sample size. 
As for the approximation error analysis, the shrinkage estimator must prepare a huge number of bases beforehand, which could be much larger than the number of non-zero parameters selected by the shrinkage estimator. In that sense, there is no contradiction about the approximation error analysis because the Kolmogorov width concerns only the total number of bases which should be prepared beforehand. On the other hand, a remarkable property of the shrinkage estimator is that it appropriately selects a small subset of the bases in an adaptive way (in this sense, the wavelet shrinkage is an adaptive method). Consequently, it achieves the minimax optimal estimation error rate although the whole number of parameters is still large compared with the selected non-zero components. This adaptivity highly relies on the non-linearity of the soft thresholding operator.\nOn the other hand, the deep neural network directly constructs the necessary number of bases for each function in the Besov space. This contrasts with the shrinkage estimator; that is, the shrinkage estimator prepares a large number of bases first and then selects a small subset of them (which leads to the minimax optimal estimation error), while deep learning directly generates the required bases.\nThis difference would be analogous to the relation between a sparse estimator and a low rank matrix estimator.\n\nThe difference between the linear estimator and the nonlinear estimator occurs when p < 2 = r (Proposition 3). Interestingly, this is the regime where the approximation errors are different between adaptive methods and non-adaptive ones (see Eq.(5)). In this setting (p < 2 = r), functions in the Besov space can have highly spatially inhomogeneous smoothness, which is hard to capture with a linear method.\n\n(3)\nQ: A minor note: some of the references are strange\n\nA:\nThank you for your informative suggestion. The citation [Gine & Nickl, 2015] for the minimax optimal rate on Besov spaces is a comprehensive textbook that was not intended to be the original paper but just a nice reference to overview the literature. The reference [Adams & Fournier, 2003] for the interpolation space is also referred to as a textbook giving an overview of the literature and several related topics in detail. But, as you pointed out, it is more appropriate to cite original papers. We have cited [Kerkyacharian & Picard, 1992; Donoho et al., 1996] for the minimax estimation rate in a Besov space, and cited [DeVore, 1998] for the interpolation space characterization of a Besov space. \n\n\nG. Kerkyacharian and D. Picard. Density estimation in Besov spaces. Statistics & Probability Letters, 13:15--24, 1992.\n\nD. L. Donoho, I. M. Johnstone, G. Kerkyacharian, and D. Picard. Density estimation by wavelet thresholding. The Annals of Statistics, 24(2):508--539, 1996.\n\nD. L. Donoho, I. M. Johnstone, G. Kerkyacharian, and D. Picard. Minimax estimation via wavelet shrinkage. The Annals of Statistics, 26(3):879--921, 1998.\n\nR. DeVore. Nonlinear approximation. Acta Numerica, 7:51--150, 1998.
As you pointed out, using another terminology such as a \"regularized empirical risk minimizer\" might be more specific instead of \"deep learning.\" However, the purpose of this paper is to show the superiority and limitation of deep neural network approaches by investigating their best achievable performance. Hence, we would prefer the terminology \"deep learning\" to indicate the regularized empirical risk minimization over the deep neural network model. We have added a footnote on page 2 which clarifies what kind of estimator is considered throughout the paper.\n\n(2)\nQ:\nVery dense; could benefit from considerably more exposition.\nThe paper is dense and somewhat inaccessible. Presentation could be improved by adding more exposition and comparisons with existing results.\n\nA:\nDue to the space limitations, we had to omit some detailed explanations, though we did our best to include a necessary amount of exposition and comparisons. But, as you pointed out, more explanation would help readability. Hence, we have added some text explanations on page 4 for the definition of the m-Besov space. We also added some explanations for the meaning of the approximation error rate and its relation to depth, width and sparsity after Proposition 1 and Theorem 1.\n\n(3)\nQ: The generalization bounds in Section 4 are given for an ideal estimator which is probably impossible to compute.\n\nA:\nWe believe that it is informative to investigate how well deep learning can potentially perform even in the ideal case (of course, without any cheating) because we cannot say anything about the limitation of deep learning approaches without this kind of investigation. Actually, we think this type of analysis is becoming popular in the statistics community. Moreover, recent intensive studies about convergence properties of SGD for deep learning imply that it is not entirely vacuous to assume we can achieve the global optimal solution with a good generalization guarantee. In addition, we can also involve the optimization error in our estimation error bound, but we have omitted that for better readability.", "(3)\nQ: Minor: - Defn of Holder class: you can make this hold for integral beta if you define m to be the smallest integer less than beta (e.g. beta=7, m=6). Imo, this is standard in most texts I have seen. \n\nA:\nThank you for your detailed comment. Yes, it is one of several popular definitions of the Holder class for integer beta. On the other hand, one could also define it just by max_{|\\alpha|<=beta}||D^\\alpha f||_\\infty where the last term is not involved (in other words, beta=m). Moreover, it could be defined by B_{\\infty,\\infty}^m. Unfortunately, for an integer \\beta, these spaces do \"not\" coincide with each other. To avoid this kind of confusion, we decided to define the Holder space only for non-integer beta.\n\n(4)\nQ: - The authors' claim that the approximation error does not depend on the dimensionality needs clarification, since N clearly depends on the dimension. ... what the authors meant was that the exponential dependence on d has now been eliminated. \n\nA:\nYes, the convergence rate is dependent on d. What we have meant by \"exponential dependence on d is avoided\" is that the dimensionality d does not enter directly into the polynomial order of n, that is, the exponent of the term n^{-2s/(2s + 1)}. Indeed, d also comes into the exponent of the log(n)-term as log(n)^d. However, comparing the polynomial order and poly-log order, poly-log order is milder.
Then, we said \"the curse of dimensionality is eased.\"\n\n(5)\nQ: Other - On page 4, what does the curly arrow notation mean? \n\nA:\nIt means a continuous embedding. Namely, if X \\hookrightarrow Y for two normed spaces X and Y, then X can be continuously embedded in Y (i.e., X is a subset of Y and there exists a constant C such that |x|_Y <= C |x|_X for x \\in X). We have added the definition in the revised version.\n\n(6)\nQ:- Given the technical nature of the paper, the authors have done a good job with the presentation. However, in some places the discussion is very equation driven. E.g., in the 2nd half of page 4, it might help to explain many of the quantities presented in plain words. \n\nA:\nWe have added some text explanations on page 4. Due to space limitations, we could not give full expositions. But, we also added some explanations for the meaning of the approximation error rate and its relation to the depth, width and sparsity after Proposition 1 and Theorem 1.\n", "Thank you for your suggestive comments. We have revised our paper according to your comments, though unfortunately some of them could not be addressed due to the lack of space.\n\n(1)\nQ: My main criticism is that the total rate of convergence (estimation error + approximation error) has not been presented in a transparent way. ... how the parameters W, L etc. should be chosen so as to balance them. \n \nA:\nWe have presented the approximation error bound in a concrete way to minimize misunderstandings, which would have made the presentation a bit opaque instead. The error bound O(N^{-s/d}) is rather typical notation in approximation theory. Since we treat the parameters s,p,q,d as constants, the approximation error in Proposition 1 can be written as R_r = O(N^{-s/d}) under the conditions L=O(log(N)) (depth), W = O(N) (width) and S = O(N log(N)) (sparsity) for an integer N. Roughly speaking, N corresponds to the number of parameters S, up to a log(N) factor. Thus the convergence rate is written as a function of the number of parameters under an appropriate choice of depth L and width W. We can see that the convergence rate of the error is completely controlled by the smoothness s and the dimensionality d against the number of parameters S. On the other hand, as for the m-Besov case, the approximation error is evaluated as O(N^{-s} \\log^{s(d-1)}(N)) for L=O(log(N)), W = O(N) and S = O(N log(N)) for an integer N. Here we again observe that the convergence rate is controlled by the smoothness s and the dimensionality d. We think these representations are more transparent. We have added sentences to explain these relations just after Proposition 1 and Theorem 1.\n\n(2)\nQ: While the mixed Besov spaces enable better bounds, the condition appears quite strong. In fact, the lower bound is better than for traditional Holder/Sobolev classes. Can you please comment on how the m-Besov space compares to Holder/Sobolev classes? Also, can you similarly define mixed Holder/Sobolev spaces where traditional linear smoothers might achieve minimax optimal results? \n\nA:\nYes, the condition for the mixed Besov space is much stronger than for the ordinary Besov space. Yes, we can define mixed smooth Holder/Sobolev spaces. They are defined just by setting p=q=infty or p=q=2. Hence, the mixed smooth Besov space is a much wider class than the mixed smooth Holder/Sobolev spaces.
Roughly speaking, the mixed smooth Besov space consists of functions having the form g(f_1(x_1),...,f_d(x_d)) where each f_i(x_i) is a function in a Besov space on [0,1] and g:R^d \\to R is a sufficiently smooth function. Then, we can see that the m-Besov space includes an additive model \\sum_{i=1}^d f_i(x_i) and a tensor model \\sum_r \\prod_{i=1}^d f_{r,i}(x_i) as special cases. \nWe can also define an intermediate function class between the ordinary Besov space and the m-Besov space by taking a tensor product of B_{p,q}^s([0,1]^{d_1}), ..., B_{p,q}^s([0,1]^{d_K}) where d_1 + d_2 + ... + d_K = d (if each d_i = 1, then it is reduced to the m-Besov space). We can also show a convergence rate which is between those of the m-Besov space and the Besov space, but we don't pursue this direction due to space limitations. ", "I am looking into the estimation error bound in Table 2 on Page 3.\n\nWe assume that \\beta = 3, u = 0.1, and the sample size is large. Let's say n~exp(d).\n\nThen we can reduce the bound to O(exp(-6d/7) * d^{0.88 d}).\n\nThe bound will blow up for large d.\n\nCould you please clarify your results?\n", "Summary:\n========\nThe paper presents rates of convergence for estimating nonparametric functions in Besov\nspaces using deep NNs with ReLU activations. The authors show that deep ReLU networks,\nunlike linear smoothers, can achieve minimax optimality. Moreover, they show that in a\nrestricted class of functions called mixed Besov spaces, there is significantly milder\ndependence on dimensionality. Even more interestingly, the ReLU network is able to\nadapt to the smoothness of the problem.\n\nWhile I am not too well versed on the background material, my educated guess is that the\nresults are interesting and relevant, and that the analysis is technically sound.\n\n\n\nDetailed Comments:\n==================\n\n\nMy main criticism is that the total rate of convergence (estimation error + approximation\nerror) has not been presented in a transparent way. The estimation error takes the form\nof many similar results in nonparametric statistics, but the approximation error is\ngiven in terms of the parameters of the network, which depends opaquely on the dimension\nand other smoothness parameters. It is not clear which of these terms dominates, and\nconsequently, how the parameters W, L etc. should be chosen so as to balance them.\n\n\nWhile the mixed Besov spaces enable better bounds, the condition appears quite strong.\nIn fact, the lower bound is better than for traditional Holder/Sobolev classes. Can you\nplease comment on how the m-Besov space compares to Holder/Sobolev classes? Also, can\nyou similarly define mixed Holder/Sobolev spaces where traditional linear smoothers\nmight achieve minimax optimal results?\n\n\nMinor:\n- Defn of Holder class: you can make this hold for integral beta if you define m to be\nthe smallest integer less than beta (e.g. beta=7, m=6). Imo, this is standard in most\ntexts I have seen.\n- The authors' claim that the approximation error does not depend on the dimensionality\n needs clarification, since N clearly depends on the dimension. If I understand\n correctly, the approximation error is in fact becoming smaller with d for m-Besov\n spaces (since N is increasing with d), and what the authors meant was that the\n exponential dependence on d has now been eliminated. Is this correct?\n\nOther\n- On page 4, what does the curly arrow notation mean?\n- Given the technical nature of the paper, the authors have done a good job with the\n presentation.
However, in some places the discussion is very equation driven. E.g.,\n in the 2nd half of page 4, it might help to explain many of the quantities presented in\n plain words.\n\n\n\nConfidence: I am reasonably familiar with the nonparametric regression literature, but\nnot very versed on the deep learning theory literature. I did not read the proofs in\ndetail.\n", "This paper makes two contributions:\n* First, the authors show that function approximation over Besov spaces for the family of deep ReLU networks of a given architecture provides better approximation rates than linear models with the same number of parameters.\n* Second, for this family and this function class they show minimax optimal sample complexity rates for generalization error incurred by optimizing the empirical squared error loss.\n\nClarity: Very dense; could benefit from considerably more exposition.\n\nOriginality: afaik original. Techniques seem to be inspired by a recent paper by Montanelli and Du (2017).\n\nSignificance: unclear.\n\nPros and cons: \nThis is a theory paper that focuses solely on approximation properties of deep networks. Since there is no discussion of any learning procedure involved, I would suggest that the use of the phrase \"deep learning\" throughout the paper be revised.\n\nThe paper is dense and somewhat inaccessible. Presentation could be improved by adding more exposition and comparisons with existing results.\n\nThe generalization bounds in Section 4 are given for an ideal estimator which is probably impossible to compute.", "This paper describes approximation and estimation error bounds for functions in Besov spaces using estimators corresponding to deep ReLU networks. The general idea of connecting network parameters such as depth, width, and sparsity to classical function spaces is interesting and could lead to novel insights into how and why these networks work and under what settings. The authors carefully define Besov spaces and related literature, and overall the paper is clearly written. \n\nDespite these strengths, I'm left with several questions about the results. The most critical is this: piecewise polynomials are members of the Besov spaces of interest, and ReLU networks produce piecewise linear functions. How can piecewise linear approximations of piecewise polynomial functions lead to minimax optimal rates? The authors' analysis is based on cardinal B-spline approximations, which generally makes sense, but it seems like you would need more terms in a superposition of B-splines of order 2 (piecewise linear) than higher orders to approximate a piecewise polynomial to within a given accuracy. The larger number of terms should lead to worse estimation errors, which is contrary to the main result of the paper. I don't see how to reconcile these ideas. \n\nA second question is about the context of some broad claims, such as that the rates achieved by neural networks cannot be attained by any linear or nonadaptive method. Regarding linear methods, I agree with the author, but I feel like this aspect is given undue emphasis. The key paper cited for rates for linear methods is the Donoho and Johnstone Wavelet Shrinkage paper, in which they clearly show that nonlinear, nonadaptive wavelet shrinkage estimators do indeed achieve minimax rates (within a log factor) for Besov spaces. Given this, how should I interpret claims like \"any linear/non-linear approximator\nwith fixed N -bases does not achieve the approximation error ...
in some parameter settings such as 0 < p < 2 < r \"?\nWavelets provide a fixed N-basis and achieve optimal rates for Besov spaces. Is the constraint on p and r a setting in which wavelet optimality breaks down? If not, then I don't think the claim is correct. If so, then it would be helpful to understand how relevant this regime for p and r is to practical settings (as opposed to being an edge case). \n\nThe work on mixed Besov spaces (e.g. tensor product space of 1-d Besov spaces) is a fine result but not surprising.\n\nA minor note: some of the references are strange, like citing a 2015 paper for minimax rates for Besov spaces that have been known for far longer or a 2003 paper that describes interpolation spaces that were beautifully described in DeVore '98. It would be appropriate to cite these earlier sources. " ]
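[Editor's note: the rebuttal above argues that composing ReLU layers multiplies the number of linear pieces, which is how piecewise-linear networks can approximate higher-order B-splines efficiently. A standard textbook-style illustration of this point (my own, not from the paper) is the "tent" map built from two ReLUs, whose L-fold composition has 2^L linear pieces from only O(L) layers:]

```python
import numpy as np

relu = lambda x: np.maximum(x, 0.0)
# tent map: h(x) = 2x on [0, 0.5] and 2(1 - x) on [0.5, 1], via two ReLUs
hat = lambda x: 2.0 * relu(x) - 4.0 * relu(x - 0.5)

def count_pieces(y, x):
    """Count linear pieces of a piecewise-linear function sampled on grid x."""
    slope = np.diff(y) / np.diff(x)
    return 1 + int(np.sum(~np.isclose(np.diff(slope), 0.0)))

x = np.linspace(0.0, 1.0, 2**10 + 1)  # dyadic grid: breakpoints land on grid points
y = x.copy()
for depth in range(1, 5):
    y = hat(y)                         # one more "layer" of composition
    print(depth, count_pieces(y, x))   # prints 2, 4, 8, 16 pieces
```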
[ -1, -1, -1, -1, -1, -1, -1, 8, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 2, 2, 2 ]
[ "iclr_2019_H1ebTsActm", "rJgL04k-aQ", "r1gX-pN9hQ", "S1eqfHtq3m", "BJeVd9HTnQ", "BJeVd9HTnQ", "iclr_2019_H1ebTsActm", "iclr_2019_H1ebTsActm", "iclr_2019_H1ebTsActm", "iclr_2019_H1ebTsActm" ]
iclr_2019_H1edIiA9KQ
Generating Multiple Objects at Spatially Distinct Locations
Recent improvements to Generative Adversarial Networks (GANs) have made it possible to generate realistic images in high resolution based on natural language descriptions such as image captions. Furthermore, conditional GANs allow us to control the image generation process through labels or even natural language descriptions. However, fine-grained control of the image layout, i.e. where in the image specific objects should be located, is still difficult to achieve. This is especially true for images that should contain multiple distinct objects at different spatial locations. We introduce a new approach which allows us to control the location of arbitrarily many objects within an image by adding an object pathway to both the generator and the discriminator. Our approach does not need a detailed semantic layout; only the bounding boxes and respective labels of the desired objects are needed. The object pathway focuses solely on the individual objects and is iteratively applied at the locations specified by the bounding boxes. The global pathway focuses on the image background and the general image layout. We perform experiments on the Multi-MNIST, CLEVR, and the more complex MS-COCO data set. Our experiments show that through the use of the object pathway we can control object locations within images and can model complex scenes with multiple objects at various locations. We further show that the object pathway focuses on the individual objects and learns features relevant for these, while the global pathway focuses on global image characteristics and the image background.
accepted-poster-papers
The submission proposes a model to generate images where one can control the fine-grained locations of objects. This is achieved by adding an "object pathway" to the GAN architecture. Experiments against a number of baselines are performed, including a number of reviewer-suggested metrics that were added post-rebuttal. The method needs bounding boxes of the objects to be placed (and labels). The proposed method is simple and likely novel and I like the evaluation done with YOLOv3 to get a sense of the object detection performance on the generated images. I find the results (qual & quant) and write-up compelling and I think that the method will be of practical relevance, especially in creative applications. Because of this, I recommend acceptance.
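[Editor's note: a minimal sketch of the core "object pathway" idea described in the abstract and meta-review — label-conditioned features are written into an otherwise empty feature canvas at each bounding box, while a global pathway would fill in the background before both are merged and decoded. All shapes, names, and the simple broadcast-placement are my assumptions, not the authors' code; note that overlapping boxes are simply summed here, matching the limitation discussed in the reviews below.]

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 16                 # spatial size of the feature canvas
C, NUM_LABELS = 8, 10
label_emb = rng.normal(size=(NUM_LABELS, C))  # hypothetical label embeddings

def object_pathway(boxes_and_labels):
    """boxes: (x0, y0, x1, y1) in [0, 1]; returns a C x H x W feature canvas."""
    canvas = np.zeros((C, H, W))
    for (x0, y0, x1, y1), label in boxes_and_labels:
        r0, r1 = int(y0 * H), max(int(y0 * H) + 1, int(y1 * H))
        c0, c1 = int(x0 * W), max(int(x0 * W) + 1, int(x1 * W))
        # Broadcast the label embedding over the box region; a real model would
        # instead run a small conv net on the embedding before placing it.
        canvas[:, r0:r1, c0:c1] += label_emb[label][:, None, None]
    return canvas

feats = object_pathway([((0.1, 0.1, 0.4, 0.5), 3), ((0.6, 0.2, 0.9, 0.8), 7)])
print(np.abs(feats).sum(axis=0).astype(bool).sum(), "of", H * W, "cells written")
```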
val
[ "SJgxVtZn3X", "HJlFhAnO3X", "HyeGwusKC7", "HJeYpRthTm", "SygkaFlo37", "SJg7xYB16Q", "B1l3puS1am", "B1ejRvrkam", "B1lM28SJTm", "S1laSvBJTm", "Byx7gDB1aQ" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author" ]
[ "This paper proposed a model to generate location-controllable images built upon GANs. The experiments are conducted on several datasets. Although this problem seems interesting, here are several concerns I have:\n\n1.Novelty: the overall framework is still conditional GAN framework. The multiple -generators-discriminators structure has been used in many other works (see the references). The global-local design is not new. Finally, compared with Reed et al. [2016], the novelty is limit. \n\n2.Motivation: I still can not tell why the proposed method is better than ones with scene layout. For me, the cost of collecting annotated data is almost the same. \n\n3. The experimental results are week. For such a task, it is difficult to find a good metric. Thus the qualitative comparison is important. I think the author should follow standard rule to do some design for user study instead of cherry pick some examples. Besides, it should include more baselines instead of StackGAN. \n\nReferences: \na. Xi et al. Pedestrian-Synthesis-GAN: Generating Pedestrian Data in Real Scene and Beyond\nb. Yixiao et al. FD-GAN: Pose-guided Feature Distilling GAN for Robust Person Re-identification\n\nRevision:\nThanks for the work of the authors' and all the reviewers. I spent sometime reading the rebuttal as well as the revised paper. It addressed most of my concern. I would like to change my rating from 5 to 6. ", "The authors are proposing a method for allowing the generation of multiple objects in generated images given simple supervision such as bounding boxes and their associated labels. They control the spatial location of generated objects by the mean of an object pathway added to the architecture of both Generator and Discriminator within a GAN framework. They show generated results on Multi-MNIST, CLEVR with discussions of their model's abilities and properties. they also provide quantitative results on MSCOCO (IS and FID) using StackGAN and AttGAN models with the object pathway modifications and show some improvements compared to the original models. However it must be noted (as commented by the authors) that these models are using image captions only and do not have explicit supervision of bounding box and object labels.\n\nThis paper proposes a simple approach to generating requested objects in GAN-based image generation task, The method is supervised and requests (in its current form) the Bounding Boxes and Labels of the objects to integrate into the image generation. This task of controlling the nature (identity) and size of objects to integrate in a generated image is an important one and is significant to the GAN-base image generation community. In terms of originality, the approach is a nice simple architecture that takes care of the spatial location problem head-on. It seems like an obvious step but this does not take away from the merits of the proposed method.\n\nThe generator Global path is given a noise component. From the text, it does not seem that the Object path is given a noise component. Do you generate always the same object given the same label and Bounding Box then? Why not integrate some noise in this pipeline too?\n\n\nMulti-MNIST:\nThe authors present results on Multi-MNIST 50K customed data to present the ability of the model to accurately put request images in the correct bounding box (BB) and do some ablation study. This is an interesting test as it shows that indeed the method proposed generates digits where it is expected to. 
Could you provide the ground truth labels for each/some image/s? For the failure cases it is often not clear what digit is what. For the Row E and F, 1s could be 7s and vice versa. Since it is a qualitative study, it would be nice to have the Ground Truth (GT) (which you provide to G at for generation). For the failure case of Row D (right) an interesting results would have been to have example of a digit bounding box from top to bottom with few pixel vertical shift to visualize when the model starts to mess-up the generation. This seems to point that your model (exposed to the location from BB for the object paths) is sensitive to what locations it has seen in training. How would you make the object path more robust to unseen location (overall you need to design an object of a given size, then locate it in your empty canvas prior to the CNN for generation)?\n\nCLEVR:\nThe images resolution make it hard to really see the shape of the images (here too, the GT would be great). The bounding boxes make the images even harder to parse. I know the colors change but \"We can confirm that the model can control both location and object's shape\". For the location, it is true, for the shape is hard to completely tell at this resolution without GT. \n\nMS-COCO:\n Just a comment in passing on the fact that resizing images from COCO to 256x256 will inherently distort quite a bit of images, the median size (for each dim) for COCO is 640x480, if I am not mistaken. Most, if not all images in COCO are not 1.0 size ratio.\nThe quantitative results on COCO seem to confirm that the proposed method is generating \"better\" images according to IS and FID. This is a good thing, however the technique is strongly supervised (Bounding Box and Object Labels, caption compared to solely captions for StackGAN and AttGAN) so this result should be expected and really put into perspective as your are not comparing models w/ the same supervision (which you mention in the Discussion).\n\nDiscussion: \nI appreciate that the authors addressed the limitations of their approach in this section. The overlapping BBs seems to be an interesting challenge. Did you try to normalize the embeddings in overlapping area? A simple sum does not seem to be a good solution. In Figure 7 w/ overlapping zebras, the generation seems completely lost. \n\nIn terms of clarity, the paper is well-written but would benefit *greatly* from using variables names when discussing 'layout embedding', 'generated local labels', etc. Variable names and equations, while not necessary, can go a long way to clearly express a model's internal blocks (most of the papers you referenced are using this approach). The paper employs none of this commonly used standard and suffers from it. I myself had to write down on the margin the different variables used at each step described in text to have an understanding of what was done (with help of Figure 1). You should reference Figure 1 in the Introduction, as you cover your approach there and the Figure is useful to grasp your contributions.\n\nAnother comment concerning clarity is, while it is fine to rely on previously published papers for description of our own work, you should not assume full knowledge from the reader and your paper should stand on its own without having the reader lookup for several papers to have an understanding of your training procedures. If one uses GAN training, it should be expected to cover/formulate quickly the min max game and the various losses you are trying to minimize. 
I am afraid that \"using common GAN procedures\" is not enough. When describing your experimental setup, pointing to another paper as \"hyperparameters remain the same as in the original training procedure\" should not be a substitution for covering it too, even if lightly in the Appendix. For instance: in the Appendix, it is mentioned that training was stopped at 20 epochs for Multi-MNIST, 40 for CLEVR... How did you decide on the epoch (early stopping, stopped before instabillity of GAN training, etc.) Did you use SGD? ADAM? Did you adjust the learning rate, which schedule? etc. for your GAN training. This information in the Appendix would make the paper overall stronger. \n\nLast comment: In terms of generation multiple objects. Have you had the chance to run an object detector on your generated image (you can build one on MSCOCO given the bounding box and label, finetune an ImageNet pretrained model). It would be interesting to see if the generated images are good enough for object detection.\n\nPost-Rebuttal: Given the work from the authors on improving the clarity of the paper as well as investigating the use of object detection metrics to compare their methods, I decided to move my rating upward to 7 ", "Dear reviewers,\n\nwe have uploaded a final revision of our paper.\nAs suggested by AnonReviewer1 we additionally evaluated the performance of our generative models by running an object detector (YOLOv3) on the generated images.\n\nWe selected the 30 most common labels from the MS-COCO data set (based on how often these labels occur in the captions of the test set) and generate images for each label-caption pair and model. On these images, we measured the recall, i.e. how often the YOLOv3 network detects the given object, and calculate the Intersection over Union (IoU) between detected objects and ground truth for the images.\n\nThe results confirm our previous observations:\n- Using the object pathway results in the YOLOv3 network detecting the given object more often, regardless of the used model (StackGAN or AttnGAN), indicating an increased image quality.\n- The StackGAN achieves a comparably high IoU (greater than 0.3 for all tested labels and greater than 0.5 for 86.7% of the tested labels).\n- The images from the AttnGAN seem to be of higher quality than images generated by the StackGAN, which leads to an even higher detection rate of the given objects by the YOLOv3 network; however, the average IoU is smaller than for the StackGAN (only 53.3% of the tested labels have an IoU greater than 0.5), probably due to the fact that the AttnGAN tends to place features of salient objects at many locations throughout the image (also observed in the other experiments).\n\nThe detailed results and exact methodology have been added to the paper as well.\n\nWe think that this additional evaluation further strengthens the results of our paper and highlights the advantages of using an object pathway in the GAN framework.\n\nWe thank all reviewers again for their valuable feedback and comments which greatly helped to improve the quality of our submission.", "Dear reviewers,\n\nthanks to your helpful feedback and replies we were able to improve our submission and we just uploaded the updated version of our paper.\n\nIn the main part of the paper we made the following improvements (we mention only the major points):\n - Related Work: we updated the related work section to highlight the differences between our work and the work by Reed et al. 
in more detail.\n - Approach: we made the section more self-contained, explained the GAN training procedure in more detail, described (on a high level) the general objective function that we optimize, and included formal descriptions for various parts of our model (also reflected in Fig. 1).\n - We updated Fig. 2 in a way that it now also shows the ground truth labels for all bounding boxes.\n\nIn the appendix we updated the following parts:\n - Implementation details: we describe the implementation details and all hyperparameters for all experiments in much more detail.\n - We added a figure (Fig. 6) detailing the failure cases in Fig 2. row D (right), where we move the bounding boxes iteratively from the top to the bottom of the image to study when the model breaks in generating recognizable digits.\n - We added two figures (Fig. 8 + Fig. 9) showing more variations in the location of the bounding boxes (including failure cases) on the MS-COCO data set, both for the StackGAN and the AttnGAN architecture.\n\nWe think that these changes further improve the overall quality of our submission.\nThanks again to all reviewers for their helpful comments.", "The paper proposes a simple but effective method for controlling the location of objects in image generation using generative adversarial networks. Experiments on MNIST and CLEVR are toy examples but illustrate that the model is indeed performing as expected. The experiments on COCO produce results that while containing obvious artefacts are producing output consistent with the input control signal (i.e., bounding boxes). It would however have been interesting to see more varied bounding box locations for the same caption.\n\nIn short, the paper makes an interesting addition to image generation works and likely to be incorporated into future image generation and inpainting methods.", "Dear reviewer,\nthank you very much for your feedback. In the following, we reply to your concerns one by one.\n\n----------\n\"1. Novelty: the overall framework is still conditional GAN framework. The multiple -generators-discriminators structure has been used in many other works (see the references).”\n----------\nIt is true that our overall framework falls into the class of conditional GANs, as do most other GAN frameworks that aim to gain more control over the image generation process.\n\nHowever, our model itself does not require multiple generators or multiple discriminators. In principle, the object pathway can be added to any GAN architecture (e.g. DCGAN) and is not reliant on multiple generators or discriminators, but instead augments existing generators/discriminators. Note, for example, that for our experiments on the Multi-MNIST and the CLEVR data sets we only use one generator and one discriminator. For our experiments on the MS-COCO data set we extend two common GAN architectures (StackGAN and AttnGAN) that are indeed composed of multiple generators and discriminators. However, as noted, this is not a requirement of our model but simply happened to be a characteristic of the baseline architectures we chose to use for the MS-COCO data set, based on their excellent performance. \nThank you pointing to the brand new references. There is interesting progress in the upcoming NIPS and we will make an effort to include them in our related work section where appropriate.\n\n----------\n“The global-local design is not new. Finally, compared with Reed et al. [2016], the novelty is limit.\"\n----------\nThe global-local framework is indeed used by Reed et al. 
(as we mention in our related work Reed et al.'s work is closely related to ours). In the following we summarize the key architectural differences between our work (focus on multiple objects per image) and Reed et al. (focus on one object per image).\nOn the generator side, Reed et al. spatially replicate the image caption at the location of the bounding box and then uses a CNN to encode this information. In contrast, we first use a dense layer, which gets as input the image caption embedding and the local bounding box label as a one-hot encoding, to obtain a new localized label which is replicated at the respective bounding box location. This step is repeated for each object in the image (the same dense layer is used to obtain each object label) and results in multiple labels which are replicated at their respective bounding box locations leading to our layout encoding which is used by the global pathway. Reed et al.’s local pathway generated image features of one centralized object. In contrast, we apply our object (local) pathway multiple times based on the previously generated labels and generate feature representations of each of the objects at the locations specified by their bounding boxes.\nOn the discriminator, Reed et al. global pathway is similar to ours. Their local pathway, however, first downsamples the full image, then concatenates it with the image caption, then crops the representation to the location of the bounding box, and then applies more convolutional layers to obtain the object features. In contrast, our object (local) pathway is applied iteratively and gets as input the image content directly (RGB values) at the location of each of the bounding boxes (not the whole image), concatenated with the respective label of that bounding box (one-hot encoding). As such, the output of our object (local) pathway are localized image features at the locations of the bounding boxes based on the image content and bounding box labels at exactly these locations. \nWe hope we could clarify some of the key differences between our architecture and the architecture by Reed et al. We will also make these differences clearer in the related work section of our updated submission.", "----------\n\"2.Motivation: I still can not tell why the proposed method is better than ones with scene layout. For me, the cost of collecting annotated data is almost the same.\"\n----------\nThank you for pointing to the lack of clarity here. In fact we think that using semantic scene layouts it not necessarily worse but just yields different properties. Our approach requires both the bounding boxes and the associated labels but arguably \"less\" information than an image scene layout (bounding box level annotation versus pixel level annotation). Nevertheless, we agree that the cost of collecting the required data may be similar.\nOn an intuitive level, if we want a human to generate an image we would usually only give them a general description (image caption) and possibly the location where we want the salient objects to be. 
This is essentially what we do with our model, as opposed to describing \"common sense\" knowledge in detail such as \"the sky should be in the top of the image\" and \"the grass should be at the bottom\", which is essentially what semantic scene layouts do.\nAnother advantage of using bounding boxes instead of semantic layout is that they make it easier for humans to manually change the layout of the scene (semantic scene manipulation) in potential downstream tasks since humans usually do this on a per-object basis and not on a per-pixel basis.\n\n----------\n“3. The experimental results are week. For such a task, it is difficult to find a good metric. Thus the qualitative comparison is important. I think the author should follow standard rule to do some design for user study instead of cherry pick some examples. Besides, it should include more baselines instead of StackGAN.”\n----------\nWe agree that a user study would be a valuable next step for all generative approaches within the community, but is often difficult to do because of time and resources constraints. To reduce the tendency of cherry picking, we made an effort to include both: well-working examples as well as failure cases (see e.g. Fig 2 rows D-F and last three examples of Fig. 6 and Fig. 7 respectively).\nAdditionally, we report the IS and FID values in order to provide results that are comparable with related models (see Table 1). As a direct baseline for qualitatively comparing the images on the MS-COCO data set we used both the StackGAN since it is a well-known architecture that performs well on many different data sets and the AttnGAN since it provides the current SOTA in image generation on the MS-COCO data set (based on IS score). We acknowledge that other interesting approaches are emerging e.g. from CVPR (e.g. [1-3]) and we will make an effort to provide further tests if the implementation and training are feasible in time (e.g. [1] and [2]). \n\nOverall, we would like to thank you again for your valuable feedback and concerns. We will work to implement the feedback we got and will post an updated version of our submission by the end of next week (latest on 16. November) and will let you know once the updated version is online.\n\n[1] Photographic Text-to-Image Synthesis with a Hierarchically-nested Adversarial Network, Zhang Zizhao et al, CVPR, 2018\n[2] Image Generation from Scene Graphs, Justin Johnson et al, CVPR, 2018\n[3] Inferring Semantic Layout for Hierarchical Text-to-Image Synthesis, Seunghoon Hong et al, CVPR, 2018", "Dear reviewer,\nthank you very much for your review.\n\nWe will update our submission with examples of images based on MS-COCO captions in which we vary the location of the various bounding boxes.\n\nWe will work to implement the feedback we got and will post an updated version of our submission by the end of next week (latest on 16. November) and will let you know once the updated version is online.", "“Another comment concerning clarity is, while it is fine to rely on previously published papers for description of our own work, you should not assume full knowledge from the reader and your paper should stand on its own without having the reader lookup for several papers to have an understanding of your training procedures. If one uses GAN training, it should be expected to cover/formulate quickly the min max game and the various losses you are trying to minimize. I am afraid that \"using common GAN procedures\" is not enough. 
When describing your experimental setup, pointing to another paper as \"hyperparameters remain the same as in the original training procedure\" should not be a substitution for covering it too, even if lightly in the Appendix. For instance: in the Appendix, it is mentioned that training was stopped at 20 epochs for Multi-MNIST, 40 for CLEVR... How did you decide on the epoch (early stopping, stopped before instabillity of GAN training, etc.) Did you use SGD? ADAM? Did you adjust the learning rate, which schedule? etc. for your GAN training. This information in the Appendix would make the paper overall stronger.“\n----------\nBecause of the space constrains we aimed at providing all these information via our Github repository. Hovewer, we will make an effort to also include the information about the general GAN training procedure into our approach section to make the paper more self-contained. Additionally, we will update our appendix to describe the exact training procedure and hyperparameters in more detail.\n\n----------\n“Last comment: In terms of generation multiple objects. Have you had the chance to run an object detector on your generated image (you can build one on MSCOCO given the bounding box and label, finetune an ImageNet pretrained model). It would be interesting to see if the generated images are good enough for object detection.”\n----------\nWe have not tried this yet. This is an interesting idea and would pose a good additional metric for our paper and the community. We will look into this and keep you updated, whether we can do this in time.\n\nThank you again for all your valuable comments and your feedback. We will work to implement the feedback we got and will post an updated version of our submission by the end of next week (latest on 16. November) and will let you know once the updated version is online.", "Dear reviewer,\nthank you for your response and detailed feedback. In the following, we reply to your comments sequentially.\n\n----------\n“The generator Global path is given a noise component. From the text, it does not seem that the Object path is given a noise component. Do you generate always the same object given the same label and Bounding Box then? Why not integrate some noise in this pipeline too?”\n----------\nAs you correctly observed, the object path is not given a noise component. However, there is no reason why integrating noise into the object pathway should not work and this could indeed lead to a higher sample diversity, though we did not test this.\nWe observe that not adding any noise to the object pathway does still result in “different” objects for the same label, even for the same caption. This is maybe easiest seen in the Multi-MNIST images (Fig. 2), where in some images the same digit occurs multiple times, but the style is still different for each digit. It can also, to some extent, be observed in the visualization of the object pathway on the MS-COCO data set (Fig. 5, row D). Our hypothesis is, that this is due to the upsampling in the later parts of the generator. Since we concatenate the features of the object and the global pathway (and the global pathway get as input a noise vector) we indirectly introduce the noise component at this point since the generator “merges” the feature representations of the global and the object pathways. 
Our intuition was to use the object pathway to generate basic, low-level features that are representative for a given object and caption, not to increase the variance of these features for increased object diversity.\n\n----------\nMulti-MNIST:\n“Could you provide the ground truth labels for each/some image/s? For the failure cases it is often not clear what digit is what. For the Row E and F, 1s could be 7s and vice versa. Since it is a qualitative study, it would be nice to have the Ground Truth (GT) (which you provide to G at for generation).”\n----------\nWe will update our images to also include the ground truth labels for each bounding box to make this clearer.\n\n----------\n“For the failure case of Row D (right) an interesting results would have been to have example of a digit bounding box from top to bottom with few pixel vertical shift to visualize when the model starts to mess-up the generation. This seems to point that your model (exposed to the location from BB for the object paths) is sensitive to what locations it has seen in training. How would you make the object path more robust to unseen location (overall you need to design an object of a given size, then locate it in your empty canvas prior to the CNN for generation)?”\n----------\nThank you, this is a very good suggestion and we will update our submission with examples where we shift the bounding boxes pixel-wise from top to bottom to see when our model starts to fail.\nIn order to make the object path more robust to unseen locations, one possibility might be to change the integration between object and global pathways, e.g. by having different object pathways operate at different resolution sizes. Since the issue with unseen locations seems to be the upsampling after the concatenation of object and global pathways the issue might be somewhat alleviated by incorporating object pathways that also work on higher parts of the generator, i.e. greater resolutions. However, we also observe that the problem with unseen locations seems to be at its worst when some locations are not observed “at all” during training (independent of the object class, i.e. if no object at all has been observed at a given location during training). As we can see in row D (left) of Fig. 2, as long as some objects are seen at a location the generator seems to be able to generalize to other objects at that location during test time. As such we speculate that the issue with unseen locations might not be crucial in real-world data, as long as we have enough data so that some kind of object is observed during training in all different locations of the images. As such, we suspect that the approach is quite robust as long as we have a somewhat balanced training set in the sense that object locations are not localized to specific areas within the image.\n\n----------\nCLEVR:\n“The images resolution make it hard to really see the shape of the images (here too, the GT would be great). The bounding boxes make the images even harder to parse. I know the colors change but \"We can confirm that the model can control both location and object's shape\". For the location, it is true, for the shape is hard to completely tell at this resolution without GT. 
“\n----------\nWe will also update the figures of the CLEVR data set to include the ground truth labels for each bounding box in the updated submission.", "MS-COCO:\n“Just a comment in passing on the fact that resizing images from COCO to 256x256 will inherently distort quite a bit of images, the median size (for each dim) for COCO is 640x480, if I am not mistaken. Most, if not all images in COCO are not 1.0 size ratio.”\n----------\nWe follow the implementation of StackGAN and AttnGAN in preprocessing the images, i.e. we rescale them to 268x268 pixels and then randomly crop a window of size 256x256 pixels. This will indeed distort the images, but we decided to follow the outlined procedure in order to keep our approach more comparable and to not introduce unforeseen effects during training. The implementation is technically not limited to capture any other ratio.\n\n----------\n“The quantitative results on COCO seem to confirm that the proposed method is generating \"better\" images according to IS and FID. This is a good thing, however the technique is strongly supervised (Bounding Box and Object Labels, caption compared to solely captions for StackGAN and AttGAN) so this result should be expected and really put into perspective as your are not comparing models w/ the same supervision (which you mention in the Discussion).”\n----------\nYes, this is correct. We mention this in the caption of Table 1 and the discussion, but will try to make it even clearer in the updated version.\n\n----------\nDiscussion:\n“The overlapping BBs seems to be an interesting challenge. Did you try to normalize the embeddings in overlapping area? A simple sum does not seem to be a good solution. In Figure 7 w/ overlapping zebras, the generation seems completely lost.”\n----------\nWe did not try any normalization methods for the embeddings in the overlapping area. We leave this open as future work. One feasible approach to normalize the embeddings would be to take the average of all embeddings at a given location, similarly to the Bag-of-Words approach in natural language processing. This would be an easy change in future work and might already improve the performance in the case of overlapping bounding boxes.\n\n----------\n“In terms of clarity, the paper is well-written but would benefit *greatly* from using variables names when discussing 'layout embedding', 'generated local labels', etc. Variable names and equations, while not necessary, can go a long way to clearly express a model's internal blocks (most of the papers you referenced are using this approach). The paper employs none of this commonly used standard and suffers from it. I myself had to write down on the margin the different variables used at each step described in text to have an understanding of what was done (with help of Figure 1). You should reference Figure 1 in the Introduction, as you cover your approach there and the Figure is useful to grasp your contributions.”\n----------\nThank you for adding this view. While writing the paper we considered both a formal and a conceptual description and concluded in favor of latter one. However, we will update our approach section and make our approach clearer by following your suggestions." ]
[ 6, 7, -1, -1, 8, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, -1, -1, 4, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2019_H1edIiA9KQ", "iclr_2019_H1edIiA9KQ", "iclr_2019_H1edIiA9KQ", "iclr_2019_H1edIiA9KQ", "iclr_2019_H1edIiA9KQ", "SJgxVtZn3X", "SJgxVtZn3X", "SygkaFlo37", "HJlFhAnO3X", "HJlFhAnO3X", "HJlFhAnO3X" ]
iclr_2019_H1emus0qF7
Near-Optimal Representation Learning for Hierarchical Reinforcement Learning
We study the problem of representation learning in goal-conditioned hierarchical reinforcement learning. In such hierarchical structures, a higher-level controller solves tasks by iteratively communicating goals which a lower-level policy is trained to reach. Accordingly, the choice of representation -- the mapping of observation space to goal space -- is crucial. To study this problem, we develop a notion of sub-optimality of a representation, defined in terms of expected reward of the optimal hierarchical policy using this representation. We derive expressions which bound the sub-optimality and show how these expressions can be translated to representation learning objectives which may be optimized in practice. Results on a number of difficult continuous-control tasks show that our approach to representation learning yields qualitatively better representations as well as quantitatively better hierarchical policies, compared to existing methods.
accepted-poster-papers
Strong paper on hierarchical RL with very strong reviews from people expert in this subarea that I know well.
train
[ "SJxwef4nkN", "rJelT18IJV", "rkgHckvwa7", "HJgqekK7RX", "H1lHRc8XCm", "HJe-f_zfAm", "BkgQo25K2X", "Hyel2G2eR7", "B1gHHboeAQ", "ryxRvbij6Q", "HJeNJMoia7", "HJgkpZoj6X", "H1xOtWssp7", "SyeaG7S5hQ", "SygyQWfZqm" ]
[ "author", "public", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author" ]
[ "Thanks for the close reading of the paper and the helpful feedback. Answers to your concerns are below. Let us know if you have additional questions!\n\n\"If I understood correctly, ... Is this correct?\"\nYes, your understanding is correct. To us, this was a relatively straightforward way to translate the theoretical results to a practical learning algorithm. Nevertheless, as you correctly note, this practical approximation to the theory does not carry with it any strong theoretical guarantees. The main difference between the theory and the practice is that the theory requires the objective to be optimized w.r.t. a supremum. In practice, this supremum is loosely translated to an expectation over the replay buffer. Despite the loose translation, we found the empirical results of our practical implementation to be strong. All experiments used the same hyperparameters, most of which carried over from the original HIRO formulation.\n\n\"Do you believe one can get some guarantees about using this algorithm in practice?\"\nThis question is one which is good for future work to explore. One can probably replace the supremum in the theory with an expectation over a specific policy, while at the same time weakening the sub-optimality result from being w.r.t. the optimal policy to being w.r.t. a slightly higher-reward policy (in the style of TRPO).\n\nHyperparameters:\nWe provide experimental details in Appendix C.2. Much of the original HIRO setup is maintained. The code we use is also available as open-source (link not included to maintain anonymity, although you may easily search for it online).\n\nComputation:\nEach training run was run on a single machine with ~16 CPUs and took about a day to complete.", "This is an important paper which makes several strong points and may get an oral presentation, so we can ask the authors to make it even better.\n\nActually, some aspects need to be elaborated further about the learning part.\n\nIf I understood correctly, the algorithm is starting with an empty replay buffer and a random \"goal representation\" network.\nThen the (initially random) hierarchical policy using this random goal representation is used to fill the replay buffer, from which samples are drawn to improve simultaneously the representation and the hierarchical policies. Is this correct?\n\nThe main point is then the following:\n- there must be some condition (a form of iid-ness?) on the content of the replay buffer so that the goal representation gets properly improved (and the hierarchical policy too),\n- there must be some condition on the hierarchical policy (some exploration property?) so that the replay buffer gets properly filled\n- there must be some condition on the goal representation so that the hierarchical policy does something valuable.\n\nOff-policy deep RL algorithms using a replay buffer like DDPG and TD3 are already known to be rather unstable and sensitive to hyper-parameter setting. The three conditions above make me feel that your algorithm might be even worse in that respect. You obtained good results, but can you elaborate on how hard it was to get them? Do you believe one can get some guarantees about using this algorithm in practice?\n\nBy the way, to get a better grasp on these questions, very few is specified about learning in your paper:\n- are you using TD3, as you were doing in the HIRO paper?\n- can you say a word about hyper-parameter search, final hyper-parameter setting? 
None is specified (replay buffer size, Huber function factor, learning rates...)\n- can you specify the computational effort (wall time, number of CPUs/GPUs...) it took to get your results?\n\nSide points:\n- It might be a good idea to give a name to your algorithm, to facilitate further references by the community\n- typo: last line of Algo 1, there is a \"=0\" which should probably be removed\n- HIRO is cited under Nachum et al. 2018a and 2018b, the paper appears twice in the references", "-- Summary --\n\nThe authors proposes a novel approach in learning a representation for HRL. They define the notion of sub-optimality of a representation (a mapping from observation space to goal space) for a goal-conditioned HRL that measure the loss in the value as a result of using a representation. Authors then state an intriguing connection between representation learning and bounding the sub-optimality which results in a gradient based algorithm. \n\n-- Clarity --\n\nThe paper is very well written and easy to follow.\n\n-- novelty --\nTo the best of my knowledge, this is the first paper that formalizes a framework for sub-optimality of goal-conditioned HRL and I think this is the main contribution of the paper, that might have lasting effect in the field. This works mainly builds on top of Nachum et al 2018 for Data Efficient HRL.\n\n-- Questions and Concerns --\n\n[1] Authors discussed the quality of the representation by comparing to some near optimal representation (x,y) in section 7. My main concern is how really “good” is the representation?, being able to recover a representation like x,y location is very impressive, but I think of a “good” representation as either a mapping that can be generalized or facilitate the learning (reducing the sample complexity). For example Figure 3 shows the performance after 10M steps, I would like to see a learning curve and comparison to previous works like Nachum et al 2018 and see if this approach resulted in a better (in term of sample complexity) algorithm than HIRO. Or an experiment that show the learned representation can be generalized to other tasks.\n\n[2] What are author's insight toward low level objective function? (equation 5 for example) I think there could be more discussion about equation 5 and why this is a good objective to optimize. For example third term is an entropy of the next state given actions, which will be zero in the case of deterministic environment, so the objective is incentivizing for actions that reduce the distance, and also has more deterministic outcome (it’s kind of like empowerment - klyubin 2015). I’m not sure about the prior term, would be great to hear authors thoughts on that. (a connection between MI is partially answering that question but I think would be helpful to add more discussion about this)\n\n[3] Distance function : In section C1 authors mention that they used L2 distance function for agent’s reward. That’s a natural choice, although would be great to study the effect of distance functions. But my main concern is that author's main claim of a good representation quality is the fact that they recovered similar to XY representation, but that might simply be an artifact of the distance function, and that (x,y) can imitate reward very well. 
I am curious to see what the learned representation would be with a different distance function.\n\n-- \nI think the paper is strong, and interesting however i’d like to hear authors thoughts on [2], and addressing [1] and [3] would make the paper much stronger.\n", "Helpful response! ", "Similar notions do exist in the state abstraction literature. The reviewer may already be aware of Abel, et al. \"Near Optimal Behavior via Approximate State Abstraction.\" In this work, the authors explore an MDP abstraction based on a many-to-one state abstraction mapping. A notion of sub-optimality of the state abstraction is defined, and it is similar in spirit to the notion presented in our work. One notable theoretical finding in Abel, et al. (slightly different from the reviewer's suggestion) is that states with similar Q*-values should be mapped to the same abstract state. Grouping states with similar sub-optimalities (in our sense of the word) may be possible, but harder to approach theoretically due to the dependence on \\Psi, which could potentially be very poor. \n\nIn the Abel, et al. paper there are several other theoretical insights. Unfortunately, the application of such theory to difficult control tasks is so far lacking. Many of the notions of a good state abstraction are difficult to convert to practical representation learning objectives. Moreover, the assumption of a finite, discrete, abstract state space may be very restrictive and further hamper learning. Future work in this area could be very impactful.", "I have increased my score.\n\nI like the presentation of the paper, I like empirical results. \n\n==================================\n\nThis comment has nothing much to do with the paper.\n\nOne thought though. I wonder, if the \"sub-optimality\" could (also) give rise to \"state-abstractions\" (state abstractions as in RL, where you reduce your MDP problem by converting your dense MDP to somewhat sparse MDP). The intuition is, that the states which have more or less same degree of sub-optimality can be mapped to same \"abstract state\". \n\nI am curious as to what authors have to say in this regard.", "The paper studies the problem of representation learning in the context of hierarchical reinforcement learning by building on the framework of HIRO (Nachum et al. (2018)). The papers propose a way to handle sub-optimality in the context of learning representations which basically refers to the overall sub-optimality of the entire hierarchical polity with respect to the task reward. And hence, the only practical different from the HIRO paper is that the proposed method considers representation learning for the goals, while HIRO was directly using the state space.\n\n\nI enjoyed reading the paper. The paper is very *well* written. \n\nExperimental results: The authors perform the series of experiments on various high dimensional mujoco env, and show that the representations learned using the proposed method outperforms other methods (like VAEs, E2C etc), and can recover the controllable aspect of the agent i.e the x, y co-ordinate. This is pretty impressive result.\n\nSome questions:\n\n[1] Even though the results are very interesting, I'm curious as to how hard authors try to fit the VAE baseline. Did authors try using beta VAEs (or variants like InfoVAE) ? Since the focus of the entire paper is about representation learning (as well as the focus of the conference), it is essential to make sure that baselines are strong. 
I would have suspected that it would have been possible to learn x,y co-ordinate in some cases while using improved version of VAE like beta VAE etc.\n\n[2] One of the major intuitions behind sub-optimality is to learn representations that can generalize, as well as can be used for continual learning (or some variant of it!). This aspect is totally missing from the current paper. It would be interesting to show that the representations learned using the proposed method can transfer well to other scenarios or can generalize in the presence of new goals, or can be sample efficient in case of continual learning. \n\nI think, including these results would make the paper very strong. (and I would be happy to increase my score!).\n\n", "\"If I understand it right, the transferred representation works better than directly learning on that env from scratch right ?\"\n\n-- Yes you are correct, when comparing to Fig 3. There are two main reasons for this: (1) The results in Fig 3 use representations learned *online* and from scratch concurrently while learning the HRL policy. This can cause difficulties in training compared to using a fixed, pre-trained representation. (2) One important component necessary to solve the \"Ant Push\" task is exploration along the x-axis, and this can be exacerbated when learning representations concurrently (if there is not much experience of exploration along the x-axis, the learning of representations in these regions will suffer). This is somewhat remedied by transferring pre-trained representations from the \"Ant Maze\" task, where the exploration problem is less pronounced and thus representations are learned well along most of the x-axis. \n\n\"Also, what are the results for HIRO ?\"\n\n-- We have now included results for HIRO (at least those which we have immediately available) in Appendix H. We note that the representation used in Nachum et al 2018 is a type of oracle - in fact, they define sub-goals as only the position-based (i.e. not velocity-based) components of the agent observation (compare this to the poor results of our “whole obs” baseline, which uses both position and velocity observations). In our own experiments, we found the HIRO method used in Nachum et al 2018 to perform similarly to the X-Y oracle in non-image tasks and similarly to the \"whole obs\" baseline in image tasks, and thus opted to show only a single “oracle” baseline and a single \"whole obs\" baseline in the main text.", "Just for clarification for better generalization, If I understand it right, the transferred representation works better than directly learning on that env from scratch right ?\n\nAlso, what are the results for HIRO ? where you are not doing representation learning for for the goals. I believe HIRO would more or less perform the same. It would be useful to add. ", "We thank the reviewer for the valuable feedback. We are glad that the reviewer found the paper well-written and exceptionally novel. Our responses to the reviewer’s main points are below, and we hope with these clarifications and additional results, the reviewer will find the paper much stronger. Please let us know if this addresses your points, we would be happy to discuss further! But if your suggestions have been addressed, we would appreciate it if you would revise your review.\n\n[1]\n“My main concern is how really “good” is the representation?”\n\nThe results in Figure 3 are our most significant. 
They show that the sub-optimality theory we develop holds up in practice: We are able to learn representations (in an online fashion, with no additional supervision) which allow for learning a well-performing hierarchical policy, and this ability is significantly better than competing methods.\n\n“I would like to see… an experiment that shows the learned representation can be generalized to other tasks.”\n\nWe have followed the reviewer’s suggestions and have included in the paper (see Appendix F) additional results showing the generalization capability of the learned representations. We take representations learned on one task (Ant Maze) and apply them to different tasks (a slight variant of Ant Maze and Ant Push). Our representations are successful at training a policy on these distinct environments, thus showing the generalization capabilities of the learned representations across similar environments. \n\n“I think of a “good” representation as a mapping that can ... facilitate the learning (reducing the sample complexity). I would like to see a learning curve and comparison to previous works like Nachum et al 2018”\n\nThe representation used in Nachum et al 2018 is a type of oracle - in fact, they define sub-goals as only the position-based (i.e. not velocity-based) components of the agent observation (compare this to the poor results of our “whole obs” baseline, which uses both position and velocity observations). In our own experiments, we found the HIRO method used in Nachum et al 2018 to perform similarly to the X-Y oracle in non-image tasks and similarly to the \"whole obs\" baseline in image tasks, and thus opted to show only a single “oracle” baseline and a single \"whole obs\" baseline.\n\nAs for facilitating learning: When one uses HRL, much of the ease-of-learning is induced automatically - e.g., the higher-level controller operates and receives rewards at a much lower temporal frequency, hence learning and exploration improves. On the other hand, the use of a lower-level goal-conditioned policy as an interface between the higher-level controller and the environment introduces a potential issue in how expressible the high-level controller can be. This is the issue we focused on in our paper. We view HIRO (Nachum et al 2018) as a work showing that with HRL, one can learn much better and faster than shallow policies; our work then shows that in general settings, where a goal representation may not be easy to handcraft (e.g. images), one can still learn provably near-optimal representations. And indeed, it is important to note that our presented results are better than the shallow baselines (and several of the hierarchical baselines) evaluated in the HIRO paper, showing that we can get good HRL performance on these difficult tasks while incorporating absolutely zero prior knowledge into the learning algorithm.\n\n[2] \n“What are author's insight toward low level objective function? (equation 5 for example)”\n\nThe form of Equation 5 (or Equation 9) is particularly chosen so that it corresponds to a KL (this is crucial for the sub-optimality proof in the Appendix). Specifically, Equation 5 defines the low-level objective as a negative KL between P(s’| s, a) and a distribution U(s’| s, g) proportional to \\rho(s’) * exp(-D(f(s’), g)). We have updated the paper to make this observation clear.\n\n“... 
third term is an entropy of the next state given actions, which will be zero in the case of deterministic environment, so the objective is incentivizing for actions that reduce the distance, and also has more deterministic outcome.”\n\nAs the KL insight above makes clear, the objective does not necessarily encourage deterministic outcomes. Instead, it encourages the lower-level policy to find actions whose next-state distribution is similar to U(s’| s, g) (which is stochastic in general).\n\n\n", "We thank the reviewer for the careful reading of the paper. We are glad that the reviewer found the paper interesting and exceptionally well-written. Our responses to the reviewer’s main feedback is below. We hope that with the additional results, the reviewer will find the paper stronger. Please let us know if this addresses your points, we would be happy to discuss further! But if your suggestions have been addressed, we would appreciate it if you would revise your review.\n\nFor the VAE baseline, we performed tuning on the standard deviation of the Gaussian prior (finding std=10 to give best qualitative and quantitative results). As the reviewer suggests, we have updated the paper (see Appendix E) to include results of VAE with varying beta (coefficient on KL prior). Despite this additional hyperparameter tuning, results of the VAE are still lackluster. The drawback of the VAE is that it is encouraged to reconstruct the entire observation, despite the fact that much of it is unimportant and possibly exhibits high variance (e.g. as in agent joint velocities). This means that outside of environments with homogeneous and high-information state observation features, a VAE approach to representation learning will suffer. And indeed, experimentally we find that the VAE can perform well on simple environments like point mass, which has only 6 observation dimensions, of which the velocities are often close to zero.\nRepresentations that generalize well are indeed very important. \n\nEmpirically, we have followed the reviewer’s suggestions and have included in the paper (see Appendix F) additional results showing the generalization capability of the learned representations. We take representations learned on one task (Ant Maze) and apply them to different tasks (a slight variant of Ant Maze and Ant Push). Our representations are successful at training a policy on these distinct environments, thus showing the generalization capabilities of the learned representations across similar environments. \n\nTheoretically, we believe that our bounds may be extended to robust control settings, where representations should be robust to slight perturbations in transition dynamics (the KL divergence associated with the perturbations can be incorporated to our sub-optimality results). A more in-depth theoretical treatment of generalization is a good open question for future work.\n", "We thank the reviewer for the careful reading of the paper. We are glad that the reviewer believes the paper to be a significant contribution to the field of hierarchical RL. Our responses to the reviewer’s comments are below. 
We are happy to discuss further if the reviewer has additional comments.\n\n“make sure to define upfront which representation you are referring to”\n-- We have updated the introduction to make this clear.\n\n“I think that you need to provide some more motivation as to why think the representation learning of $f$ should be equated with the problem of maximizing the return.“\n-- The issue we believed to be most pressing when using representations in HRL is that of expressibility. Although the optimal policy may no longer be expressible, one can still hope to approximately express the optimal policy. To make this notion more approachable, we defined sub-optimality in terms of state values, and this leads to the analysis presented in the paper. It is also more practical to define sub-optimality in this way for the simple reason that a simple mathematical formulation of this sub-optimality is easy to express, compared to notions like the reviewer’s suggested “ease of learning,” which is harder to quantify in the context of deep RL. We have updated some of the phrasing in Sec 3 to make this clearer.\n\n“A limitation of this work is also that the analysis for the temporally extended version of the low-level controller is restricted to open-loop policies.”\n-- We would like to clarify that the theoretical statements (Theorem 3 and Claim 4) apply to any set of candidate policies \\Pi, including possibly closed-loop policies. We have updated the paper to make this clear. \n\n“Relevant work (it's up to you to include or not):“\n-- Thank you for these references. We will add them to the paper.\n", "[3] \n“My main concern is that author's main claim of a good representation quality is the fact that they recovered similar to XY representation, but that might simply be an artifact of the distance function, and that (x,y) can imitate reward very well.” \n\n\nWe would like to emphasize that the representation learning objective is agnostic to task reward; i.e., unlike possibly other RL representation learning objectives, there is no reward prediction based on learned representations.\n\nThe only way in which task reward may affect learning is by the distribution of states/goals induced by a higher-level controller trained to maximize reward. Thus, to address the reviewer’s concerns, we have repeated the experiment in Figure 2 for our proposed representation learning objective (see Appendix G), but learned with a uniformly random higher-level controller. We find that the learned representations can still recover representations similar to x-y coordinates.", "The problem setting considered in this paper is that of the recent wave of \"goal-conditioned\" formulations for hierarchical control in Reinforcement Learning. In this problem, a low-level controller is incentivized to reach a goal state designated by a higher-level controller. This goal is represented in an abstract (embedding) multi-dimensional vector space. Establishing \"closeness to goal\" entails the existence of some distance metric (assumed to be given an fixed) and a function $f$ which can project states to their corresponding goal representation. The \"representation learning\" problem referred to by the authors pertains to this function. The paper is built around the question: how does the choice of $f$ affects the expressivity of the class of policies induced in the lower level controller, which in turn affects the optimality of the overall system. 
The authors answer this question by first providing a bound on the loss of optimality due to the potential mismatch between the distribution over next states under the choice of primitive actions produced by a locally optimal low-level controller. The structure of the argument mimics that of model compression methods based on bisimulation metrics. The model compression here is with respect to the actions (or behaviors) rather than states (as in aggregation/bismulation methods). In that sense, this paper is a valuable contribution to the more general problem of understanding the nature of the interaction between state abstraction and temporal abstraction and where the two may blend (as discussed by Dietterich and MAXQ or Konidaris for example). Using the proposed bounds as an objective, the authors then derive a gradient-based algorithm for learning a better $f$. While restricted to a specific kind of temporal abstraction model, this paper offers the first (to my knowledge) clear formulation of \"goal-conditioned\" (which I believe is an expression proposed by the authors) HRL fleshed out of architectural and algorithmic considerations. The template of analysis is also novel and may even be useful in the more general SMDP/options perspective. I recommend this paper for acceptance mostly based on this: I believe that these two aspects will be lasting contributions (much more than the specifics of the proposed algorithms). \n\n\n# Comments and Questions\n\nIt's certainly good to pitch the paper as a \"representation learning\" paper at a Representation Learning conference, but I would be careful in using this expression too broadly. The term \"representation\" can mean different things depending on what part of the system is considered. Representation learning of the policies, value functions etc. I don't have specific recommendations for how to phrase things differently, but please make sure to define upfront which represention you are referring to. \n\nRepresentation learning in the sense of let's say Baxter (1995) or Minsky (1961) is more about \"ease of learning\" (computation, number of samples etc) than \"accuracy\". In the same way, one could argue that options are more about learning more easily (faster) than for getting more reward (primitive options achieve the optimal). Rather than quantifying the loss of optimality, it would be interesting to also understand how much one gains in terms of convergence speed for a given $f$ versus another. I would like to see (it's up to you) this question being discussed in your paper. In other words, I think that you need to provide some more motivation as to why think the representation learning of $f$ should be equated with the problem of maximizing the return. One reason why I think that is stems from the model formulation in the first place: the low-level controller is a local one and maximizes its own pseudo-reward (vs one that knows about other goals and what the higher level controller may do). It's both a feature, and limitation of this model formulation; the \"full information\" counterpart also has its drawbacks.\n\nA limitation of this work is also that the analysis for the temporally extended version of the low-level controller is restricted to open-loop policies. The extension to closed-loop policies is important. There is also some arbitrariness in the choice of distance function which would be important to study. 
\n\nRelevant work (it's up to you to include or not): \n\n- Philip Thomas and Andrew Barto in \"Motor Primitive Discovery\" (2012) also talk about options-like abstraction in terms of compression of action. You may want to have a look. \n\n- Still and Precup (2011) in \"An information-theoretic approach to curiosity-driven reinforcement learning\" also talk about viewing actions as \"summary of the state\" (in their own words). In particular, they look at minimizing the mutual information between state-action pairs. \n\n- More generally, I think that the idea of finding \"lossless\" subgoal representations is also related to ideas of \"empowerment\" (the line of work of Polani).", "We would like to make readers aware of a few typos in Appendix B. Namely:\n\n-- Eq 42 overloads the use of \\pi. The equation should be changed to be an argmin over \\pi'\\in\\Pi. Accordingly, the use of P_\\pi in the KL should be changed to P_{\\pi'}. The other argument K(-|-,\\pi) of the KL remains as-is.\n\n-- The LHS of Eq 42 should have P_{\\Psi(s_t, \\varphi(s_t, \\pi))} in the KL (the current form does not include the 't' subscripts and is missing a closing parens)." ]
[ -1, -1, 8, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, 9, -1 ]
[ -1, -1, 3, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, 5, -1 ]
[ "rJelT18IJV", "iclr_2019_H1emus0qF7", "iclr_2019_H1emus0qF7", "H1xOtWssp7", "HJe-f_zfAm", "Hyel2G2eR7", "iclr_2019_H1emus0qF7", "B1gHHboeAQ", "HJeNJMoia7", "rkgHckvwa7", "BkgQo25K2X", "SyeaG7S5hQ", "ryxRvbij6Q", "iclr_2019_H1emus0qF7", "iclr_2019_H1emus0qF7" ]
iclr_2019_H1eqjiCctX
Understanding Composition of Word Embeddings via Tensor Decomposition
Word embedding is a powerful tool in natural language processing. In this paper we consider the problem of word embedding composition --- given vector representations of two words, compute a vector for the entire phrase. We give a generative model that can capture specific syntactic relations between words. Under our model, we prove that the correlations between three words (measured by their PMI) form a tensor that has an approximate low rank Tucker decomposition. The result of the Tucker decomposition gives the word embeddings as well as a core tensor, which can be used to produce better compositions of the word embeddings. We also complement our theoretical results with experiments that verify our assumptions, and demonstrate the effectiveness of the new composition method.
accepted-poster-papers
AR1 is concerned about the lack of downstream applications which show that higher-order interactions are useful and asks why not model higher-order interactions for all (a,b) pairs. AR2 notes that this submission is a further development of Arora et al. and is satisfied with the paper. AR3 is the most critical regarding the lack of explanations, e.g. why linear addition of two word embeddings is bad and why the corrective term proposed here is a good idea. The authors suggest that linear addition is insufficient when the final meaning differs from the individual meanings and show some quantitative results to back up their corrective term. On balance, all reviewers find the theoretical contributions sufficient, which warrants an accept. The authors are asked to honestly acknowledge all uncertain aspects of their work in the final draft, reflecting the legitimate concerns of the reviewers.
train
[ "SJgh2tslAQ", "Skeg0uolRm", "HyeFqDog0X", "Skg5uUslRX", "rkeMbIWjn7", "SJgejqZqhQ", "Skg4J4xt27" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We have uploaded a revision of the paper that incorporates suggestions of the reviewers and expands on experimental results. The largest changes are in Section 5 on the experimental verification, where we include the results of our experiments on verb-object phrases (previously we only showed results for adjective-noun phrases). ", "We thank the reviewer for reading and evaluating our submission.\n\nAdditive composition vs. tensor: as discussed in our introduction (and illustrated by the qualitative results in Tables 1 and 2), we believe that linear addition of two word embeddings may be an insufficient representation of the phrase when the combined meaning of the words differs from the individual meanings. Syntactically related word pairs such as adjective-noun and verb-object pairs can have this property. The tensor term can capture the specific meaning of the word pair taken as a whole, as evidenced by qualitative and quantitative evaluations.\n\nRAND-WALK and syntax: we will clarify this point more carefully: what we mean is that the RAND-WALK model itself does not treat syntactically related word-pairs different from other word pairs. From a purely model perspective, in the RAND-WALK model each word is generated independent of all others given the discourse vector, hence the model itself does not account for syntactic relationships between words. Certainly the word embeddings trained based on this model may capture syntactic information that is communicated through the co-occurrence statistics of the training corpus, which allows their embeddings to perform decently on syntactic analogy tasks. Our goal is to explicitly model syntactic dependencies in the context of a word embedding model, in the hopes that the learned embeddings might capture additional information that is missed in non-syntax-aware embedding models. \n\nWeighting the tensor term: we don't expect that our model or any other model will correspond perfectly with how humans use language in practice. When it comes to tasks such as predicting phrase similarity, we give our model a bit of extra flexibility to account for this discrepancy. We also note that previous works on embedding composition also explore various re-weighting schemes. While the meaning of the weighting parameter isn't a central question in our work, one can think of it as the degree to which specific knowledge of the syntactic relationship between the two words affects the phrase's overall meaning.\n\nVerifying assumptions in our model: we note that in section 5 of the paper, we verify the assumptions and concentration phenomena introduced in our model. ", "We are grateful to the reviewer for their time and effort in reading our paper and providing feedback.\n\nGenerative model assumptions: our model is an expansion of the original RAND-WALK model of Arora et. al., with the purpose of accounting for syntactic dependencies. The additional assumptions we include and the concentration phenomena we prove theoretically are verified empirically in section 5, so our results do hold up on real data. \n\nUse on downstream tasks: we believe that capturing syntactic relationships using a tensor can be useful for some downstream tasks, since our results in the paper suggest that it captures additional information above and beyond the standard additive composition. 
However, as the main goal of this paper is to introduce and analyze the model, we defer more application-focused analysis to future work.\n\nInteraction between arbitrary word pairs: our model introduces the tensor in order to capture syntactic relationships between pairs of words, such as adjective-noun and verb-object pairs. While it might be interesting to try to capture interactions between all pairs of words, that is not justified by our model and we didn't explore it. However, we also trained our model using verb-object pairs, and we have updated section 5 as well as the appendix to include these additional results. \n\nComparison to Arora, Liang, Ma ICLR 2017: we appreciate the suggestion to include a comparison with the SIF embedding method of Arora et al., as this method is also obtained from a variant of the original RAND-WALK model. We have updated Table 2 and the discussion in section 5 to include these additional results. As reported in their paper, the SIF embeddings yield a strong baseline for sentence embedding tasks, and we find the same to be true in the phrase similarity task for adjective-noun phrases (not so for verb-object phrases). However, we find that we can improve upon the SIF performance by adding the tensor component from our model. (We note that we have just used the tensors trained in our original model; it is possible that combining the SIF model and syntactic RAND-WALK more carefully could give even better results.)\n\nAdditional citations: we have updated the paper to include both additional citations.", "We thank the reviewer for their time and response to our paper. \n\nPhrase similarity results: the tensor component T(v_a,v_b,.) does yield improvement over all other weighted additive methods in 5 out of 6 cases, as shown in Table 3. We have also updated that table with additional results, which show that adding in the tensor component improves upon the strong baseline of the SIF embedding method. We also added Table 4, which repeats the phrase-similarity task for verb-object pairs, and shows that the tensor component leads to improvement in most cases. ", "The paper deals with a further development of the RAND-WALK model of Arora et al. There are stable idioms, adjective-noun pairs, etc., that are not covered by RAND-WALK, because sometimes words from seemingly different contexts can join to form a stable idiom. \n\nSo, the idea of the paper is to introduce a tensor T such that a stable idiom (a,b) is embedded into v_{ab}=v_a+v_b+T(v_a, v_b,.) and is emitted with some probability p_sym (proportional to exp(v_{ab} times context)). The latter model is similar to RAND-WALK, so it is not surprising that statistical functions there are similarly concentrated. Finally, there exists an expression, PMI3(u,v,w), that shows the correlation between 3 words, and that can be estimated from the data directly. It is proved that the Tucker decomposition of that tensor gives us all word embeddings together with the tensor T. Thus, from the latter we will obtain a tool for finding embeddings of idioms (i.e. v_a+v_b+T(v_a, v_b,.)).\n\nThe theoretical analysis seems correct (I have not checked all the statements thoroughly, but I would expect the formulations to be true). The only problem I see is that the phrase similarity part is not convincing. I cannot understand from that part whether adding T(v_a, v_b,.) to v_a+v_b gives any improvement or not.", "The authors consider the use of tensor approximations to more accurately capture syntactical aspects of compositionality for word embeddings. 
Given two words a and b, when your goal is to find a word whose meaning is roughly that of the phrase (a,b), a standard approach is to find the word whose embedding is close to the sum of the embeddings, a + b. The authors point out that others have observed that this form of compositionality does not leverage any information on the syntax of the pair (a,b), and they propose using a tensor contraction to model an additional multiplicative interaction between a and b: they find the word whose embedding is closest to a + b + T*a*b, where T is a tensor, and T*a*b denotes the vector obtained by contracting a and b with T. They test this idea specifically on the use-case where (a,b) is an (adjective, noun) pair, and show that their form of compositionality outperforms weighted versions of additive compositionality in terms of Spearman and Pearson correlation with human judgements. In their model, the word embeddings are learned separately; then the tensor T is learned by minimizing the error in predicting observed trigram statistics. The specific objective comes from a nontrivial tensorial extension of the original matricial RAND-WALK model for learning word embeddings.\n\nThe topic is a fit for ICLR, and some attendees will find the results interesting. As in the original RAND-WALK paper, the theory is interesting, but not the main attraction, as it relies on strong generative modeling assumptions that essentially bake in the desired results. The main appeal is the idea of using T to model syntactic interactions, and the algorithm for learning T. Given that the main attraction of the paper is the potential for more performant word embeddings, I do not believe the work will have wide appeal to ICLR attendees, because no evidence is provided that the features from the learned tensor, say [a, b, T*a*b], are more useful in downstream applications than [a,b] (one experiment in sentiment analysis is tried in the supplementary material with no compelling difference shown).\n\nPros:\n- theoretical justification is given for their assumption that the higher-order interactions can be modeled by a tensor\n- the tensor model does deliver some improvement over linear composition on noun-adjective pairs when measured against human judgement\n\nCons:\n- no applications are given which show that these higher-order interactions can be useful for downstream tasks.\n- the higher-order features T*a*b are useful only when a is a noun and b is an adjective: why not investigate using T to model higher-order interactions for all (a,b) pairs regardless of the syntactic relationships between a and b?\n- comparison should be made to the linear composition method in the Arora, Liang, Ma ICLR 2017 paper \n\nSome additional citations: \n- the above-mentioned ICLR paper provides a performant alternative to unweighted linear composition\n- the 2017 Gittens, Achlioptas, Drineas ACL paper provides theory on the linear composition of some word embeddings\n ", "\n\nThe authors suggest a method to create combined low-dimensional representations for pairs of words which have a specific syntactic relationship (e.g. adjective - noun). Building on the generative word embedding model provided by Arora et al. 
(2015), their solution uses the core tensor from the Tucker decomposition of a 3-way PMI tensor to generate an additive term, used in the composition of two word embedding vectors.\n\nAlthough the method the authors suggest is a plausible way to explicitly model the relationship between syntactic pairs and to create a combined embedding for them, their presentation does not make this obvious and it takes effort to reach the conclusion above. Unlike Arora's original work, the assumptions they make on their subject material are not supported enough, as in their lack of explanation of why linear addition of two word embeddings should be a bad idea for composing the embedding vectors of two syntactically related words, and why the corrective term produced by their method makes this a good idea. Though the title promises a contribution to an understanding of word embedding compositions in general, they barely expound on the broader implications of their idea in representing elements of language through vectors.\n\nTheir lack of willingness to ground their claims or decisions is even more apparent in two other cases. The authors claim that Arora's RAND-WALK model does not capture any syntactic information. This is not true. The results presented by Arora et al. indeed show that RAND-WALK captures syntactic information, albeit to a lesser extent than other popular methods for word embedding (Table 1, Arora et al. 2015). Another unjustified choice by the authors is their weighting of the tensor term (when it is being added to two base embedding vectors) in the phrase similarity experiment. The reason the authors provide for weighting the composition tensor is the fact that in the unweighted version their model produced worse performance than the additive composition. One would at least expect an after-the-fact interpretation of the weighted tensor term and what this implies with regard to their method and syntactic embedding compositions in general.\n\nArora's generative model for word embeddings, on which the current paper is largely based, not only makes the mathematical relationship among different popular word embedding methods explicit, but also, by making and verifying explicit assumptions about properties of the word embeddings created by the model, explains why low-dimensional embeddings provide superior performance in tasks that implicate semantic relationships as linear algebraic relations. The present work, however interesting with regard to its potential implications, strays away from providing such theoretical insights and contents itself with demonstrating limited improvements in empirical tasks." ]
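A minimal numerical sketch may help readers of this record see what the composition discussed in the reviews above computes. Everything below is illustrative: the dimension, the random stand-ins for the trained tensor and embeddings, and the weight `alpha` are assumptions, not values from the paper.

```python
import numpy as np

d = 100                       # assumed embedding dimension
T = np.random.randn(d, d, d)  # stand-in for the learned core tensor
v_a = np.random.randn(d)      # e.g. an adjective embedding
v_b = np.random.randn(d)      # e.g. a noun embedding

# T(v_a, v_b, .): contract the first two modes of the 3-way tensor.
t_ab = np.einsum('ijk,i,j->k', T, v_a, v_b)

# Composed phrase embedding; the reviews note the tensor term was
# down-weighted in the phrase-similarity experiments, hence alpha.
alpha = 0.1                   # hypothetical weight, tuned on held-out data
v_ab = v_a + v_b + alpha * t_ab
```

A nearest-neighbor lookup against the vocabulary embeddings (e.g. by cosine similarity to `v_ab`) would then score candidate words for the phrase.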
[ -1, -1, -1, -1, 7, 6, 6 ]
[ -1, -1, -1, -1, 2, 4, 3 ]
[ "iclr_2019_H1eqjiCctX", "Skg4J4xt27", "SJgejqZqhQ", "rkeMbIWjn7", "iclr_2019_H1eqjiCctX", "iclr_2019_H1eqjiCctX", "iclr_2019_H1eqjiCctX" ]
iclr_2019_H1ersoRqtm
Structured Neural Summarization
Summarization of long sequences into a concise statement is a core problem in natural language processing, requiring non-trivial understanding of the input. Based on the promising results of graph neural networks on highly structured data, we develop a framework to extend existing sequence encoders with a graph component that can reason about long-distance relationships in weakly structured data such as text. In an extensive evaluation, we show that the resulting hybrid sequence-graph models outperform both pure sequence models as well as pure graph models on a range of summarization tasks.
accepted-poster-papers
This paper examines ways of encoding structured input such as source code or parsed natural language into representations that are conducive to summarization. Specifically, the innovation is to use neither a sequence model alone nor a graph model alone, but both. Empirical evaluation is extensive, and it is exhaustively demonstrated that combining both models provides the best results. The major perceived issue of the paper is the lack of methodological novelty, which the authors acknowledge. In addition, there are other existing graph-based architectures that have not been compared to. However, given that the experimental results are informative and convincing, I think that the paper is a reasonable candidate to be accepted to the conference.
train
[ "S1e6m3190m", "Sygyxl0IhX", "HygQkgOY07", "HkexklOFRX", "rylTptBSR7", "BkxA0BqW0m", "rygnAxKlRQ", "BkehgP1qaQ", "B1g-t4TSTX", "SJlxsmaBpm", "ByxzV76S6X", "BJgtlQaBTm", "S1xZrF4J6m", "HygI4PN52m" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "In light of the extensive new experiments and their conclusions, I indeed think that this paper is now much stronger. I have changed my original score from 4 to 7.", "Note: I changed my original score from 4 to 7 based on the new experiments that answer many of the questions I had about the relative performance of each part of the model. The review below is the original one I wrote before the paper changes.\n\n# Positive aspects of this submission\n\n- The intuition and motivation behind the proposed model are well explained.\n\n- The empirical results on the MethodNaming and MethodDoc tasks are very promising.\n\n# Criticism\n\n- The novelty of the proposed model is limited since it is essentially adding an existing GGNN layer, introduced by Li et al. (2015), on top of an existing LSTM encoder. The most important novelty seems to be the custom graph representation for these sequence inputs to make them compatible with the GGNN, which should then deserve a more in-depth study (i.e. ablation study with different graph representations, etc).\n\n- Since you compare your model performance against Alon et al. on Java-small, it should be fair to report the numbers on Java-med and Java-large as well.\n\n- The \"GNN -> LSTM+POINTER\" experiment results are reported on the MethodDoc task, but not for MethodNaming. Reporting this number for MethodNaming is essential to show the claimed empirical superiority of the hybrid encoder compared to GNN only.\n\n- I have doubts about the usefulness of the proposed model for natural language summarization, for the following reasons:\n\n - The comparison of the proposed model for NLSummarization against See et al. is a bit unfair, since it uses additional information through the CoreNLP named entity recognizer and coreference models. With the experiments listed in Table 1, there is no way to know whether the increased performance is due to the hybrid encoder design or due the additional named entity and coreference information. Adding the entity and coreference data in a simpler way (i.e. at the token embedding level with a basic sequence encoder) in the ablation study would very useful to answer that question.\n\n - In NLSummarization, connecting sentence nodes using a NEXT edge can be analogous to using a hierarchical encoder, as used by Nallapati et al. (\"Abstractive Text Summarization using Sequence-to-sequence RNNs and Beyond\", 2016). Ignoring the other edges of the GNN graph, what are the theoretical and empirical advantages of your method compared to this sentence-level hierarchical encoder?\n\n - Adding the coverage decoder introduced by See et al. to your model would have been very useful to prove that the current performance gap is indeed due to the simplistic decoder and not something else.\n\n- How essential is the weighted averaging for graph-level document representation (Gilmer et al. 2017) compared to uniform averaging?\n\n- A few minor comments about writing:\n - In Table 1, please put the highest numbers in bold to improve readability\n - On page 7, the word \"summaries\" is missing in \"the model produces natural-looking with no noticeable negative impact\"\n - On page 9, \"cove content\" should be \"core content\"\n", "We have added the results of a number of additional experiments to more clearly evaluate the effect of our contribution. 
On natural language summarization, our new Table 2 shows that (a) additional semantic information seems not to help sequence-based models; (b) using only syntactic, but not semantic, information in the Sequence GNN setting is helpful, but larger gains are made when semantic information is included; and (c) these results become even starker when considering a fairer baseline (using the same codebase as ours), instead of results from another paper (with its own specialized hyperparameter tuning).\n\nWhile we plan to provide these results eventually, we identified another issue in our implementation of the coverage mechanism (due to which loss was not correctly normalized), and so this may take some more days. However, we believe that while these additional results will further improve the experimental evaluation, they are not crucial to document the value of our contribution.\n\nOverall, we would kindly request that you reconsider your rating given the additional experimental results, or provide further guidance on how to improve the paper.", "The authors have posted new experimental results. Do you think that these have addressed some of your concerns?", "We have updated the paper with additional experiments that the reviewers showed interest in (Table 2). Unfortunately, due to a problem with integrating the “coverage” idea of See et al. into our codebase and the long training times for these models, we were unable to update the paper with these results so far. However, we expect to provide these results over the next few days. We note that the time since the start of the rebuttal period was too short to do hyperparameter optimization on the CNN/DailyMail summarization dataset, and we instead used the same hyperparameters we used before (without coverage).\n\nConcretely, we provide the following additional experimental results:\n\n•\tWe have added the GNN->LSTM+pointer model for the Method Naming task as requested by Reviewer 3. The results show that removing the biLSTM encoder worsens the results compared to our biLSTM+GNN->LSTM+pointer network.\n\n•\tWe ran biLSTM encoder-based baselines for the CNN/DM task using the OpenNMT implementation, to better compare with our extension of that codebase. Despite using the same setup as See et al., our experiments yield slightly worse results. This is most likely due to the fact that we have not performed separate hyperparameter optimization for each task but instead use identical hyperparameters for _all_ our tasks and datasets. Our biLSTM+GNN results are most fairly compared to these baselines.\n\n•\tAs discussed in our last post, we have performed experiments 1-3 on CNN/DM to analyze the influence of the extra information provided by the CoreNLP parser. The results can be summarized as follows:\n\n o\t[Experiment 1] We ran a biLSTM encoder with access to the CoreNLP parse information. Concretely, we extended the token embedding with an embedding of the per-token information provided by the parser, and additionally added tags marking references using fresh “<REF1>”, “<REF2>”, … tokens. Our results indicate that this only minimally improves results compared to the standard biLSTM encoder operating on words, and hence that exposing the structure explicitly by using a GNN encoder provides more advantages.\n\n o\t[Experiment 2] Removing all linguistic structure, i.e. 
Stanford CoreNLP edges, but retaining the extra sentence nodes and edges, yields a small improvement over the baseline biLSTM-based encoder, increasing the ROUGE-2 score by one point and yielding minor differences in the other metrics.\n\n o\t[Experiment 3] When adding edges that connect tokens with identical stemmed string representations, the performance increases a bit but does not reach levels comparable to using the full coreference resolution.\n\n\nWe have clarified the above points in the text. In conclusion, pending the coverage experiment, the above experiments demonstrate that:\n\na)\tNeither biLSTM nor GNN encoders alone achieve the best performance in summarization tasks, and the biLSTM+GNN combination improves on all baselines in all cases.\n\nb)\tAdditional linguistic structure is helpful for natural language summarization but cannot be captured adequately using only a standard biLSTM encoder.\n\nc)\tEncoding sentence structure and connecting long-distance tokens with the same stems only slightly helps performance, while using more advanced resolution of references yields bigger gains.\n\nFinally, we would like to emphasize again the broad applicability of our summarization method to both natural language and source code. While the natural language summarization task is clearly the most interesting for the reviewers, our summarization model is also able to compete with (and beat) specialized approaches on the two source code tasks on three datasets.\n", "The overall concept in Marcheggiani and Titov's work is similar, but we generalise it in four ways:\n (1) We consider a wider range of sequence encoders.\n (2) We show that the resulting GNN structure is useful for sequence decoding, with attention over the generated inputs.\n (3) We consider a wider range of different tasks, with different graph structures.\n (4) We incorporate semantic and across-sentence relationships, instead of only syntactic relationships.\n \nWhile this work tackles the same problem as we do (namely, modeling long-distance\nrelationships in NLP) and uses the same fundamental idea (namely, modeling\nrelationships in graphs), we feel that our work provides the empirical evidence\nthat the idea is widely applicable, both across diverse modelling choices and\ntask choices.\n \nBastings et al. provide a follow-up on that work, focusing on aspect (2), adding\na sequence decoder. Similarly, De Cao et al. build on a similar idea and focus\non aspect (4), but do not introduce intra-document relationships; instead they use\nthe graph structure to reflect an entity graph. This does not use end-to-end training\nfor the sequential structure of the natural language (they use pre-trained, fixed\nELMo).\n \n\nOverall, we believe our contribution generalises along all dimensions (1)-(4), hopefully\nproviding enough experimental evidence so that all researchers working on sequential\ndata with some inherent structure will consider mixed sequence/graph models in the\nfuture. This is why we included non-natural language tasks (but with obvious graph\nstructure), showing the wide applicability of the idea.", "Hello!\nI had a follow-up question regarding related work: even given the response it still wasn't clear to me the differences and advantages of the proposed method, both theoretically and empirically, compared to previous work incorporating graph structures on the input side of sequence-to-sequence models. 
Even if the task is different, the methodology seems like it would be largely similar, so these methods would be reasonable baselines. Without a comparison it is a bit difficult to tell the merit of this particular work. Would you mind elaborating?", "Thank you for your reply and useful clarifications. The additional experiments you proposed may indeed greatly enhance the quality of your paper. My rating is subject to change depending on the outcome of these experiments.", "Thanks for your detailed comments, which we will integrate in the next version of our paper.\n\nOn novelty:\n\nWe agree that we are not contributing fundamentally new models here – indeed, we refrained from introducing a more complex architecture to make it easy to adopt this modeling approach. We believe that our work introduces a simple way to fuse state-of-the-art sequence learning (not only LSTMs, but /any/ sequence encoder) with reasoning enabled by domain-specific graph constructions. We have not found this idea in prior work, and our experiments show the value across three different tasks from different domains. We hope that other researchers can profit from our work by integrating similar techniques into their own architectures and believe that this deserves publication and wider dissemination.\n\nAs discussed in our reply to all reviewers, we will run additional experiments on the CNN/DM to analyze the influence of different graph constructions.\n\n\nOn GNN->LSTM+pointer on MethodNaming:\n\nWe decided to show this ablation experiment only on the MethodDoc task for presentation reasons, but we will rerun the model and provide additional results on the MethodNaming task in our next revision.\n\n\nOn comparison with Alon et al. 2018 on the Java-Large corpus:\n\nWe did run these experiments but realized that we could obtain the best results with models that “felt” like they had too much capacity. Further analysis traced this behavior to duplicated samples in the dataset. For example, about 30.7% of files in Java-Large are near-duplicates of other files in the corpus (across all folds), indicating that results on these datasets primarily measure overfitting to the data. We managed to train competitive models, but only by choosing very large sizes for the hidden dimensions (>1000) and removing dropout. In contrast, Java-Small only has 3.0% duplicates. We will clarify this in the next version of our paper. [This is similar to our experiences with the Barone & Sennrich dataset discussed in Sect. 4.1.2.]\n\n\nOn NL Summarization and additional information:\n\nWe agree that our model uses additional information that is not available to the pure sequence models – indeed, we believe that the ability to use this information is the core contribution of our work. In fact, it is unclear how to add information from the CoreNLP parser to a standard sequence model (how, for example, are coreference connections represented?). As discussed in our reply to all reviews, we will run additional experiments to further elucidate this effect. Primarily, we will run an LSTM baseline that uses additional per-token information in the embedding of words, and additionally will introduce fresh tokens (“<REF1>”, …) to mark points at which references are made. If you had other comparisons in mind, please do react quickly, as these experiments do take a bit of time...\n\n\nOn comparison with Nallapati et al. 
2016:\n\nThe structure of the “Next” tokens in the graph model resembles that of Nallapati et al. (2016). However, the core difference is in how message-passing GNNs work. In Nallapati et al. (2016), computing the representations is truly hierarchical, i.e., information flows in one direction: sentence representations are computed, then these are combined into a document representation. In a GNN, messages are passed in both directions, and thus our per-sentence nodes also allow the exchange of information between different tokens in the same sentence. Hence, our model is more comparable to a hierarchical setting in which information can flow both up and down.\n\n\nOn using coverage:\n\nWe wanted to avoid the additional work for this experiment, since we believe that the improvements from adding a coverage mechanism are orthogonal to the ones provided by our model, but we will now run this and provide the results once the experiments have finished.\n\n\nOn weighted averaging:\n\nIn past experiments on a variety of datasets and tasks, we have found that weighted averaging helps compared to uniform averaging. We believe that this is due to the fact that weighted averaging acts as an attention-like mechanism that allows the model to pick the salient information from the graph while allowing the message-passing to “freely” transfer information. Since this is also the accepted method in the GNN literature (e.g. Gilmer et al. 2017), we did not further experiment with this design decision. As our compute resources are limited, we want to avoid rerunning this ablation on the CNN/DM dataset, but will provide additional experiments on the two smaller tasks. \n\n\nPlease let us know if these do not sufficiently address the concerns you raise in your review and what alternative experiments are missing.", "Thanks for your thoughtful review and your time. As discussed in our reply to all reviews, we will run four additional experiments covering points raised by the different reviewers.\n\n\nOn related work in NLP with graphs:\n\nThank you for bringing up additional related work. The cited works handle quite different tasks, and so drawing a direct comparison to our work is hard. Marcheggiani et al. (2017) uses their model, with a single GCN propagation, for classification, not sequence prediction, whereas Bastings et al. (2017) does sentence-to-sentence translation. Both employ purely syntactic graphs and thus lack the advantages that additional semantic information can provide. Our additional experiments 2 and 3 are designed to show the effect of this. The short paper of De Cao et al. (2018) uses a GCN over entities in multiple documents. Finally, we want to highlight that we propose to use graphs for longer documents, whereas the approaches above are primarily concerned with single sentences. On average, the CNN/DM documents lead to graphs with 900 nodes and 2.5k edges.\n\nRegarding the question of SequentialGNN vs GCN, we believe that there are no substantial differences between the use of GCNs and GGNNs. The core contribution proposed in our paper is the idea to fuse information obtained from state-of-the-art sequence models with a form of structured reasoning that can integrate domain knowledge.\nWe will clarify the above in the related work section.\n\n\nOn the performance of SelfAtt vs. SelfAtt+GNN on MethodDoc C#:\n\nIn the paper, we discuss this result explicitly in the third paragraph of 4.1.4. 
The core reason for the decrease in ROUGE scores is that the SelfAtt+GNN model produces substantially longer outputs, which tends to impact ROUGE scores. This causes the substantial improvement in the BLEU score. We will extend the appendix to include examples of outputs of the SelfAtt/SelfAtt+GNN models that illustrate how the longer output improves the information content of the results. Overall, we want to note that ROUGE and BLEU are problematic measures for these tasks, but we are not aware of any other metrics that can be computed at scale.\n\n\nOn randomness of shown samples:\n\nThe sample in Figure 2 is one appearing in See et al. For Figure 1, we had to pick a sample that would fit within the given space, so it’s not randomly sampled. All other examples are randomly selected. \n", "Thanks for your time and helpful comments. As discussed in our reply to all reviews, we will run four additional experiments covering points raised by the different reviewers. However, while we believe that a human evaluation of generated summaries would be helpful, setting this up during the rebuttal period seems to be impossible. Do let us know if you want us to run more experiments / provide more results.", "Thank you for all your comments; we respond to them individually. Below you can find a summary for all the reviewers.\n\nWe plan to run the following experiments:\n•\t[Experiment 1] BiLSTM on natural language inputs using Stanford CoreNLP information. For this, we will extend token embeddings by information from the CoreNLP parser, and introduce special tokens (“<REF1>”, …) to mark co-references.\n•\t[Experiment 2] BiLSTM+GNN on natural language inputs using only syntactic information. Concretely, each token will be represented by one node and we introduce one node per sentence. The only edges will be “NextToken” and “NextSentence”. This experiment tests the performance of our model using only the syntactic information used by other models (e.g., hierarchical representations that split sentences).\n•\t[Experiment 3] BiLSTM+GNN on natural language input using syntactic and equality information. This is like experiment 2, but will also add edges between non-stopword nodes corresponding to tokens that have identical string representations when stemmed. \n•\t[Experiment 4] BiLSTM+GNN -> LSTM+Pointer+Coverage. We will extend the full model by See et al. with additional graph information.\n\nPlease do let us know whether these sufficiently address the concerns you mention in your review, or if you would like to see other experiments.\n\nWe also want to emphasize again the broad applicability of our method. While the natural language summarization task is clearly the most interesting one, we do want to remark that our very general model is able to compete with (and beat) specialized approaches on the source code tasks. We have spent very little time optimizing our models for the different tasks, and strongly believe that intensive tuning of hyperparameters to each of these tasks could further improve our results.", "This paper presents a structural summarization model with a graph-based encoder extended from an RNN. Experiments are conducted on three tasks, including generating names for methods, generating descriptions for a function, and generating text summaries for news articles. Experimental results show that the proposed usage of GNN can improve performance over models without a GNN. 
I think the method is reasonable and results are promising, but I'd like to see more focused evaluation on the semantics captured by the proposed model (compared to the models without GNN).\n\nHere are some questions and suggestions:\n\n- Overall, I think additional evaluation should be done to evaluate the semantic understanding aspects of the methods. Concretely, the graph-based encoder has access to semantic information, such as entities. In order to better understand how this helps with the overall improvement, the authors should consider automatic evaluation and human evaluation to measure its contribution. Also from fig. 3, we can see that all methods get the \"utf8 string\" part right, but it's hard to say the proposed method generates a better description. \n\n- In the last table in Tab. 1, why don't the authors have results for adding the GNN to the pointer-generator model with coverage?\n\n", "STRUCTURED NEURAL SUMMARIZATION\n\nSummary:\n\nThis work combines Graph Neural Networks with a sequential approach to abstractive summarization across both natural and programming language datasets. The extension of GNNs is simple, but effective across all datasets in comparison to external baselines for CNN/DailyMail, internal baselines for C#, and a combination of both for Java. The idea of applying a more structured approach to summarization is well motivated given that current summarization methods tend to lack the consistency that a structured approach can provide. The chosen examples (which I hope are randomly sampled; are they?) do seem to suggest the efficacy of this approach with that intuition.\n\nComments:\n\nShould probably cite CNN/DailyMail when it is first introduced as NLSummarization in Section 2, as you do for the other datasets.\n\nCan you further elaborate on how your approach is similar to and differs from that in Marcheggiani et al 2017 on Graph CNNs for Semantic Role Labeling, Bastings et al 2017 on Graph Convolutional Encoders for Syntax-aware Machine Translation, and De Cao et al 2018? Why should one elect to go the direction of sequential GNNs over the GCNs of those other works, and how might you compare against them? I would like to see some kind of ablation analysis or direct comparison with similar methods if possible.\n\nWhy would GNNs hurt SelfAtt performance on MethodDoc C# (SelfAtt+GNN vs. SelfAtt)?\n\nWhy not add the coverage mechanism from See et al 2017 in order to demonstrate that the method does in fact surpass that prior work? I'm left wondering whether the proposed method's returns diminish once coverage is added." ]
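As a rough illustration of the graph construction discussed throughout this record (token nodes plus per-sentence nodes, chained by NextToken/NextSentence edges), here is a toy sketch. The function itself and the "InSentence" edge name are hypothetical, and the semantic edges (coreference, entities) that the reviews debate are deliberately omitted:

```python
from typing import List, Tuple

Edge = Tuple[int, str, int]  # (source node id, edge label, target node id)

def build_sequence_graph(sentences: List[List[str]]) -> Tuple[List[str], List[Edge]]:
    """Build token and sentence nodes with NextToken/NextSentence edges."""
    nodes: List[str] = []
    edges: List[Edge] = []
    sentence_ids: List[int] = []
    for sentence in sentences:
        token_ids = []
        for token in sentence:
            nodes.append(token)
            token_ids.append(len(nodes) - 1)
        # Chain consecutive tokens within the sentence.
        for a, b in zip(token_ids, token_ids[1:]):
            edges.append((a, "NextToken", b))
        # One extra node per sentence, linked to its tokens.
        nodes.append("<SENTENCE>")
        sid = len(nodes) - 1
        sentence_ids.append(sid)
        for t in token_ids:
            edges.append((sid, "InSentence", t))  # hypothetical edge name
    # Chain consecutive sentence nodes.
    for a, b in zip(sentence_ids, sentence_ids[1:]):
        edges.append((a, "NextSentence", b))
    return nodes, edges
```

For example, `build_sequence_graph([["the", "cat", "sat"], ["it", "purred"]])` yields five token nodes, two sentence nodes, three NextToken edges, five InSentence edges, and one NextSentence edge.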
[ -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "HygQkgOY07", "iclr_2019_H1ersoRqtm", "BkehgP1qaQ", "BkehgP1qaQ", "iclr_2019_H1ersoRqtm", "rygnAxKlRQ", "SJlxsmaBpm", "B1g-t4TSTX", "Sygyxl0IhX", "HygI4PN52m", "S1xZrF4J6m", "iclr_2019_H1ersoRqtm", "iclr_2019_H1ersoRqtm", "iclr_2019_H1ersoRqtm" ]
iclr_2019_H1ewdiR5tQ
Graph Wavelet Neural Network
We present graph wavelet neural network (GWNN), a novel graph convolutional neural network (CNN), leveraging graph wavelet transform to address the shortcomings of previous spectral graph CNN methods that depend on graph Fourier transform. Different from graph Fourier transform, graph wavelet transform can be obtained via a fast algorithm without requiring matrix eigendecomposition with high computational cost. Moreover, graph wavelets are sparse and localized in vertex domain, offering high efficiency and good interpretability for graph convolution. The proposed GWNN significantly outperforms previous spectral graph CNNs in the task of graph-based semi-supervised classification on three benchmark datasets: Cora, Citeseer and Pubmed.
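For readers of this record, a compact sketch of the heat-kernel graph wavelets behind the abstract may be useful. This follows the standard construction of Hammond et al. (2011); the paper's exact notation may differ:

```latex
\psi_s = U \,\mathrm{diag}\!\left(e^{-s\lambda_1},\dots,e^{-s\lambda_n}\right) U^{\top},
\qquad
\psi_s^{-1} = U \,\mathrm{diag}\!\left(e^{s\lambda_1},\dots,e^{s\lambda_n}\right) U^{\top},
```

where $U$ and $\lambda_i$ are the eigenvectors and eigenvalues of the graph Laplacian and $s$ is the scale. Spectral convolution then uses $\psi_s$ in place of the Fourier basis $U$, and a Chebyshev polynomial expansion of $e^{-s\lambda}$ avoids the explicit eigendecomposition.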
accepted-poster-papers
AR1 and AR3 have found this paper interesting in terms of replacing the spectral operations in GCN by wavelet operations. However, AR4 was more critical about the poor complexity of the proposed method compared to approximations in Hammond et al. AR4 was also right to find the proposed work similar to Chebyshev approximations in ChebNet and to highlight that the proposed approach is only marginally better than GCN. On balance, all reviewers find some merit in this work, thus the AC advocates an accept. The authors are asked to keep the contents of the final draft as agreed with AR4 (and other reviewers) during the rebuttal, without making any further theoretical changes or brushing over various new claims/ideas unsolicited by the reviewers (otherwise such changes would require passing the draft through the reviewers again).
train
[ "rke3djBHJE", "B1lZ8H4Ch7", "HJxnVQ_Gk4", "H1lVuGuMk4", "ryg5qifzkV", "BkeGhSLykN", "r1gNgBRO3X", "SJgbLEF0RQ", "HyedjpicA7", "S1gYRpj9A7", "SJxDst0O0X", "BJg_QmL8Rm", "SJgloxU4CQ", "rygjzdBlA7", "H1gdhOBxAm", "BkeYvdreR7", "SJl0RLrgC7", "B1eMDDBe0X", "Bkeje8BeRm", "HkgPJe2k6Q", "SkgXWWFx9X", "Sye2KXfec7", "rJxXMC2ycm", "ByeP59nkcm", "HJlW-sd1cX", "ryeVsqwy9X", "BJlx-8Dk57", "BylQV6LAFX" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "public", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "public", "author", "author", "public", "public", "author", "public" ]
[ "Thank you for your comment! This paper seems interesting, i.e., any feature generated by the network is\napproximately invariant to permutations and stable to graph manipulations. We will pay attention to it and add it as our related work if necessary.", "This paper proposes to learn graph wavelet kernels through a neural network. This idea is interesting, even if it is not a real surprise, given the interesting features graph wavelets, and the explosion of proposals for graph neural networks. Yet, the paper is interesting, pretty complete, and the proposed algorithm is shown to be relatively effective. \n\nA few more detailed comments:\n\n- one for the key motivations the work, namely to avoid eigen-decompositions, is also solved in a different way by methods using Chebyshev approximations (e.g., Khasanova - ICML 2017). The introduction should be clarified accordingly. \n- retaining the flexibility of convolution kernel (on page 2): what do the authors mean here?\n- the graph wavelet nn in 2.4 is certainly an interesting combination of known elements - yet, it is not very clear how the network is trained/optimized at this stage. \n- the section 3 is a bit confusing: are the proposed elements, contributions of the paper, or only ideas with future work?\n- the idea of detaching feature transformation from convolution is interesting: it would be even better to quantify or discuss the penalty, if any, induced by this design choice. \n- the results are generally fine and convincing, even if they are not super-impressive. ' GWNN is comfortably ahead of Spectral CNN. ' is probably an over-statement however...\n- the discussion about interpretability is interesting, and very trendy. However, what the authors discuss is mere localisation and sparsity - this is one way to 'interpret' interpretability of features, but that discussion should be rephrased in a more mild sense then. \n\nGenerally, the ideas in this paper are interesting, even not surprising. The text should be clarified at places, and the paper looks a bit superficial on many aspects (see above). With good revision, it could lead to an interesting paper, and most likely to interesting discussions at ICLR.", "Thank you very much for your efforts on reviewing the paper and the response. Your review comments are valuable for improving and strengthening our paper. We are delightful to see that our revisions are acknowledged by you. \n\nRegards,\nThe authors\n", "We appreciate you so much for the pertinent comments. With the two-round rebuttals and revisions, we are inspired that we have established a mutual understanding with you about the paper.\n\nConsidering that you still have some concerns, we would like to highlight our major improvements after revisions:\n\n- Informative analysis about hyper-parameter sensitivity.\nFigure 5 is added as a thorough analysis about the effect of the hyper-parameter on the performance of the proposed method. As shown in Figure 5, the proposed method is not sensitive to hyper-parameters.\n\n- Clarification about computational complexity.\nPrompted by your suggestions, we clarified the computational complexity of the graph wavelets and graph convolution via graph wavelets. Indeed, with Hammond’s approximation (Appendix D), the computational complexity scales up to O(|E|) rather than being of quadratic complexity as concerned in your original review. \n\n- Good presentation quality. \nWe carefully revised the paper, clarifying several misleading statements in the original version. 
Moreover, we added a detailed description of the implementation of the proposed method (Sec. 2.4). \n\nWe also agree that our method has several limitations, e.g., higher computational cost than GCN. However, these limitations don’t undermine our major contribution, i.e., offering a localized graph convolution by replacing the graph Fourier transform with the graph wavelet transform; such work is useful, or at least promising, for designing appropriate graph convolutions by finding good spectral bases with localization properties and good interpretability.\n\nIn sum, we are delighted to see that the technical novelty and presentation quality are acknowledged. With the aforementioned improvements, we believe the current version of this paper deserves a higher score than the original version.\n", "I would like to thank the authors for the detailed reply and the revision efforts. After reading the rebuttals, I think the authors have established a mutual understanding with me of the strengths and weaknesses of the paper. Our difference is more on \"whether the strength is great enough\" and \"whether these weaknesses are OK\" for giving the paper a pass.\n\nAfter careful consideration and a re-scan of the paper, I still would like to keep my score. I understand that the authors have made revisions that should be acknowledged, as these revisions have improved the presentation. However, these revisions and discussions have also confirmed the weaknesses pointed out in my original review.\n\nLet me summarize my review as follows:\nPros\n- technical novelty\n- presentation quality\nCons\n- computational and implementation difficulty\n- marginal improvement over GCN\n\nI would encourage the authors to develop this method to the next stage. Based on the novelty, I believe it deserves to be published in a good venue such as ICLR after some further developments.", "This is an interesting study. In particular, training wavelets on graphs is very useful. It is interesting to compare with https://arxiv.org/abs/1804.00099. The latter work also uses Hammond's wavelets, but instead of training the wavelets it uses them to construct a scattering transform. It can be regarded as a graph network with fixed parameters. It is proved to have properties such as invariance to permutation and stability to signal and graph manipulation. ", "This is an empirical paper that proposes to design wavelets on graphs that can be integrated into neural networks on graphs. It permits reducing the number of parameters of the « convolution » and exploits the sparsity of sparse weighted graphs for computations. I think it’s an interesting work.\n\nThe perspective I enjoy in using wavelets is that they typically provide a good trade-off between localization in the spectral and graph domains. For instance, large eigenvalues of the Laplacian could potentially be captured in a more stable way. This type of work might be a first step.\n\nPros :\n- good numerical results\n- nice incorporation of structure via wavelets\nCons :\n- Sometimes the paper is not really clear\n\nI have several comments:\n\n1/ (1) « Some subsequent works devote to making spectral methods spectrum-free… avoiding high computational cost ». I think those types of representations can also be potentially unstable. That’s a second reason.\n\n2/ I’m not sure I understand point (1) of the fourth paragraph of the introduction. (1.) « graph wavelet does not depend on the eigen decomposition of Laplacian matrix ». Does it mean numerically? 
This sentence is not clear, because even numerically, it can be done in a way that depends on this matrix. However, if it is implied that the fastest algorithm can be obtained without eigen-decomposition of the Laplacian matrix, in a cheap way, then I agree and a small rephrasing could be nice.\n\n3/ Please remove all the sentences that are supposed to be an introduction for a section, i.e. the sentences between 2 and 2.1 (« We use graph… ») and 3 and 3.1 (« In many research… »). They are poorly written and do not help the reader.\n\n5/ (2.2) “However, such a polynomial approximation also limits the flexibility to define appropriate convolution on graph” I’m not sure I understand. In the paper you refer to, the set of filters spans the polynomials of degree less than n of a diagonalizable matrix of size nxn. Thus, a Lagrange polynomial basis could be used to interpolate any desired values? Does it signify learning a non-diagonal (in the appropriate basis) matrix?\n\n6/ (2.3) $s$: how is this parameter chosen? Cross-validation? Is it adjusted such that the singular values of $\psi_s$ have a certain decay? Why is $s$ constant across layers? What does the spectrum of $\psi_s$ look like? Is it well conditioned? A huge difference with (Hammond et al., 2011) is that they use a collection of wavelets. Did you consider this kind of approach? Is there a loss of information in some cases (like if $\psi_s$ has a fast decay)?\n\n7/ (2.3) “The matrix $\psi_s$ and $\psi_s^{-1}$ are both sparse”. This is a critical affirmation which is not true in general. They are possibly sparse if the weighted graph Laplacian is sparse as well, as explained by the remark after Theorem 5.5 on page 16 of (Hammond et al., 2011). However, I do agree this typically happens in the application you present. \n\n8/ The (c) of Figure 1 is missing (in my version at least).\n\n9/ In section 5.3, why not try to compare the number of parameters with other papers? I also think more results, with maybe a bit higher performance, could be reported, such as GraphSGAN and GAT. But that’s fine to me.\n\n10/ Appendix A, isn’t it a simple rephrasing of (Hammond et al., 2011)?\n\n11/ How do you compare in terms of interpretability with [1]?\n\n12/ Just as a suggestion and/or a comment: it seems similar to approaches such as lifting schemes (which basically build wavelets on graphs/manifolds), except that there is no learning involved (e.g. [2]). I think there could be great connections.\n\n\n[1] Relational inductive biases, deep learning, and graph networks, Battaglia et al., 2018.\n[2] The Lifting Scheme: A Construction of Second Generation Wavelets, Wim Sweldens.", "Thank you for your answer. I updated my score as promised.\n\nRegards,", "Q3: Experimentally, WAVELET has around 1% improvement over GCN's originally reported results, which is a fair baseline. Actually, GCN's performance can be a bit better than the reported results. For example, on the Pubmed dataset, its transductive classification accuracy can reach 79.2%, which is better than the proposed method. Moreover, as other reviewers/researchers have pointed out, the inductive results are missing.\n\nA3: Thank you for letting us know that GCN can achieve a better result (i.e., 79.2%) on Pubmed with well-tuned parameters. Indeed, GCN is also not the state-of-the-art method for the transductive node classification task. Several spatial methods like GAT achieve better results than GCN. 
We didn’t compare with GAT in this paper, because we focus on the line of spectral methods.\n\nIn other words, we don’t aim to design a state-of-the-art method for graph-based semi-supervised learning tasks, e.g., node classification. We aim to demonstrate that the graph wavelet transform is better than the graph Fourier transform for designing spectral graph convolution. As shown in Table 3, the proposed GWNN method using the graph wavelet transform remarkably outperforms the Spectral CNN method using the graph Fourier transform (achieving a 10% improvement on Cora and Citeseer, and a 5% improvement on Pubmed). \n\nFor the comparison with GCN, we want to clarify the connection and difference between our method and GCN. Both our method and GCN aim to improve spectral methods via designing localized graph convolution. GCN, as a simplified version of ChebyNet, expresses the spectral graph convolution in the spatial domain, acting as a spatial-like method (Monti et al., 2017). Our method resorts to using graph wavelets as a new set of bases, directly designing a localized spectral graph convolution. We compare with GCN on the transductive node classification task, just showing that these two methods have comparable results.\n\nFinally, spectral methods are inappropriate for the inductive classification task. Indeed, before GraphSAGE and GAT, almost no methods were evaluated on inductive tasks. We follow the practice of spectral methods and evaluate our methods on the transductive classification task. \n\nQ4: In the sparsity comparison, one should also compare with the sparsity of the graph Laplacian matrix, so as to have an idea of the computational overhead of using the proposed approach.\n\nA4: We fully understand your concern about the computational cost of the proposed method. We acknowledge that the sparsity of the graph wavelet matrix depends on the sparsity of the Laplacian matrix and the hyper-parameter $s$. For clarity, we added the sparsity comparison between the two matrices in Appendix E.\n\nFor the computational overhead, it is not necessary to explicitly obtain the graph wavelet matrix for the graph wavelet transform. Instead, following the Chebyshev approximation in Eq. (17) in Appendix D, the computational complexity of the graph wavelet transform is O(m*|E|).", "Thank you for your efforts on reviewing our paper and the response. We are sorry that our hard-working response failed to meet your expectations. Here, we offer further responses and revise the paper with some supplements. \n\nQ1: Hammond et al. (2011)'s O(|E|) approximation of \phi_s and \phi_s^{-1} is still missing.\n\nA1: We acknowledge that the paper would be more self-contained if it included the technical details of the approximation of \phi_s and \phi_s^{-1}. Considering that the approximation algorithm isn’t proposed by this paper, in the previous version we only gave the main results (e.g., complexity analysis) of the approximation, with a reference to Hammond et al. (2011). Prompted by your request, we included the technical details of the approximation of \phi_s and \phi_s^{-1} as Appendix D in the new version.\n\nQ2: It is true that the proposed GWNN has some computational issues and is surely more expensive than GCN (that's why you need Hammond et al.'s approximation) and has more hyper-parameters resulting from Hammond's polynomial approximation as well as the heat kernel. I am not convinced that the authors have fully acknowledged this point in their response.\n\nA2: Thank you for the pertinent comments. 
", "Thank you for your efforts in reviewing our paper and the response. We are sorry that our previous response did not fully address your concerns. Here we offer a further response and revise the paper with some supplements.\n\nQ1: Hammond et al. (2011)'s O(|E|) approximation of \phi_s and \phi_s^{-1} is still missing.\n\nA1: We acknowledge that it would be much more self-contained for the paper to include the technical details of the approximation of \phi_s and \phi_s^{-1}. Since the approximation algorithm was not proposed in this paper, in the previous version we only gave the main results (e.g., the complexity analysis) of the approximation, with a reference to Hammond et al. (2011). Prompted by your request, we have included the technical details of the approximation of \phi_s and \phi_s^{-1} as Appendix D in the new version.\n\nQ2: It is true that the proposed GWNN has some computational issues and is surely more expensive than GCN (that's why you need Hammond et al.'s approximation) and has more hyper-parameters, resulting from Hammond's polynomial approximation as well as the heat kernel. I am not convinced that the authors have fully acknowledged this point in their response.\n\nA2: Thank you for the pertinent comments. We also acknowledge that the proposed method still has some limitations, e.g., the computational issues and additional hyper-parameters you pointed out, although we have attempted to combat these issues via approximation and by detaching feature transformation from graph convolution. Yet the major contribution, i.e., offering a localized graph convolution by replacing the graph Fourier transform with the graph wavelet transform, is not diminished by these issues. In particular, such work is useful, or at least promising, for helping us design appropriate graph convolutions by finding good spectral bases with the localization property and good interpretability. This distinguishes our method from ChebyNet and GCN, which express the graph convolution defined via the graph Fourier transform in the vertex domain. In the revised version, we acknowledged these issues and added Appendices B and E to discuss the hyper-parameters and the computational cost.\n\n", "Thank you very much for the detailed clarification and revision on parameter sensitivity (the added Figure 5 is very informative).\n\nI will keep my original scores based on the following:\n\n- Hammond et al. (2011)'s O(|E|) approximation of \phi_s and \phi_s^{-1} is still missing.\n\n- It is true that the proposed GWNN has some computational issues and is surely more expensive than GCN (that's why you need Hammond et al.'s approximation) and has more hyper-parameters, resulting from Hammond's polynomial approximation as well as the heat kernel. I am not convinced that the authors have fully acknowledged this point in their response.\n\n- Experimentally, WAVELET has around a 1% improvement over GCN's originally reported results, which is a fair baseline. Actually, GCN's performance can be a bit better than the reported results. For example, on the Pubmed dataset, its transductive classification accuracy can reach 79.2%, which is better than the proposed method. Moreover, as other reviewers/researchers have pointed out, the inductive results are missing.\n\n- In the sparsity comparison, one should also compare with the sparsity of the graph Laplacian matrix, so as to have an idea of the computational overhead of using the proposed approach.\n", "We greatly appreciate your quick response, and we are delighted to see your approval of our revision of this paper.\n\nPrompted by your suggestion, we added Appendix C to compare the parameter complexity of our method with other methods. In this paper, as the second major contribution, we propose detaching feature transformation from graph convolution, reducing the parameter complexity remarkably, e.g., from $O(n*p*q)$ to $O(n+p*q)$ for Spectral CNN and our GWNN. Such a practice offers an efficient implementation of GWNN, making it applicable to large-scale real-world networks. Moreover, with the reduction in parameter complexity, GWNN is particularly appropriate for the semi-supervised learning scenario where labeled data is limited. For example, on the graph-based semi-supervised learning task, i.e., node classification on Cora and Citeseer, GWNN achieves better performance than ChebyNet despite its smaller parameter complexity.\n\nThank you very much for all your comments, which are valuable for improving and strengthening our paper.
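\n\nTo illustrate where the O(n+p*q) count comes from, here is a small sketch of one such layer (our own illustrative code: psi and psi_inv denote the wavelet bases, X the n x p input features, W the p x q feature transform, and theta the n diagonal kernel parameters shared across the output features):

import numpy as np

def gwnn_layer(psi, psi_inv, X, W, theta):
    # Feature transformation, detached from the graph convolution: p*q weights
    H = X @ W
    # Graph convolution with a diagonal kernel in the wavelet domain: n weights
    Z = psi @ (theta[:, None] * (psi_inv @ H))
    return np.maximum(Z, 0.0)  # ReLU nonlinearity; n + p*q parameters in total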
", "Dear authors,\n\nThank you for your clarifications and revisions. Maybe one question related to Q9: do you have a rough idea of the number of parameters w.r.t. other methods? (I do not think I saw this in the revised version of the paper.)\n\nI'd be happy to raise my score to 7 after this.\nBest,\n", "Thank you for the pertinent comments and for accurately summarizing the major contributions of this paper. Following your constructive suggestions, we carefully revised our paper with substantial improvements. We now offer a point-by-point response.\n\nQ1: one of the key motivations of the work, namely to avoid eigen-decompositions, is also solved in a different way by methods using Chebyshev approximations (e.g., Khasanova - ICML 2017). The introduction should be clarified accordingly.\n\nA1: We revised the statements relevant to this point and cited the paper (i.e., Khasanova, ICML 2017) in the introduction.\n\nQ2: retaining the flexibility of convolution kernel (on page 2): what do the authors mean here?\n\nA2: We apologize for the misleading statement. What we mean is that the convolution kernel of ChebyNet is not flexible. Specifically, ChebyNet offers a $K$-order polynomial parameterization of the graph convolution kernel. A smaller $K$ causes high approximation bias, while a larger $K$ results in a non-localized convolution kernel. Therefore, ChebyNet has limited flexibility in defining the graph convolution kernel. In the revised version, we clarified this point in the last paragraph of Section 2.2.\n\nQ3: the graph wavelet nn in 2.4 is certainly an interesting combination of known elements - yet, it is not very clear how the network is trained/optimized at this stage.\n\nA3: Thank you for pointing out this issue. Prompted by your suggestion, we revised Section 2.4: (1) we offered a detailed description of the architecture of the graph wavelet neural network; (2) we added the loss function used when training graph wavelet neural networks on the semi-supervised node classification task.
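\n\nFor reference, the training objective in the transductive setting is simply the cross-entropy over the labeled nodes; a minimal sketch (our reading of the standard setup, with illustrative names; the exact loss is given in the revised Section 2.4):

import torch.nn.functional as F

def semi_supervised_loss(logits, labels, train_mask):
    # Cross-entropy computed only on the labeled (training) nodes;
    # predictions on unlabeled nodes receive no direct supervision.
    return F.cross_entropy(logits[train_mask], labels[train_mask])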
\n\nQ4: the section 3 is a bit confusing: are the proposed elements contributions of the paper, or only ideas with future work?\n\nA4: We apologize for the confusing organization of the sections. Indeed, Section 3 is the second contribution of this paper, i.e., reducing the parameter complexity by detaching feature transformation from graph convolution. It is particularly important for training graph wavelet neural networks. For clarity, we reorganized the sections of the paper, merging the original Section 3 into Section 2 as a subsection, i.e., Section 2.5.\n\nQ5: the idea of detaching feature transformation from convolution is interesting: it would be even better to quantify or discuss the penalty, if any, induced by this design choice.\n\nA5: We are delighted to see that this idea is identified and appreciated. By detaching feature transformation from convolution, we remarkably reduce the number of parameters. This practice is particularly important for scenarios where labeled data is limited, e.g., the graph-based semi-supervised learning task considered in this paper.\n\nOne potential penalty of this practice is reduced modeling capacity. To quantify the penalty, in Section 4.4 we compared the influence of detaching feature transformation from convolution on the performance of graph-based semi-supervised node classification. The results demonstrate that detaching feature transformation from convolution achieves comparable (sometimes better) performance, with the number of parameters remarkably reduced.\n\nQ6: the results are generally fine and convincing, even if they are not super-impressive. 'GWNN is comfortably ahead of Spectral CNN.' is probably an over-statement.\n\nA6: For the experimental evaluation, we compare the proposed GWNN with existing spectral methods (Spectral CNN, ChebyNet, GCN) and some spatial methods like MoNet. The major contribution of this paper is to improve spectral methods, using graph wavelets rather than eigenvectors of the Laplacian as bases. Thus, we focus on the comparison with spectral methods. By the sentence 'GWNN is comfortably ahead of Spectral CNN', we mean that GWNN using the graph wavelet transform is ahead of the Spectral CNN using the Fourier transform, i.e., GWNN (the last row in Table 3) versus Spectral CNN (the ninth row in Table 3). Indeed, GWNN outperforms Spectral CNN by 10% on Cora and Citeseer and by 5% on Pubmed.\n\nQ7: the discussion about interpretability is interesting, and very trendy. However, what the authors discuss is mere localisation and sparsity - this is one way to 'interpret' interpretability of features, but that discussion should be rephrased in a milder sense then.\n\nA7: Thank you for the inspiring comments. We revised the discussion about interpretability (Section 4.7), trying to ease the understanding of the interpretability of graph wavelets and of graph convolution via the graph wavelet transform.\n", "Thank you for your positive comments and the accurate summary of our contributions. Prompted by your constructive suggestions, we carefully revised the paper and improved the clarity of the presentation. Next we offer a point-by-point response.\n\nQ1: (1) « Some subsequent works devote to making spectral methods spectrum-free… avoiding high computational cost ». I think those types of representations can also be potentially unstable. That's a second reason.\n\nA1: We revised this sentence in the new version, highlighting that the chief goal of spectrum-free methods is to achieve locality in the spatial domain.\n\nQ2: I'm not sure I understand point (1) of the fourth paragraph of the introduction: « graph wavelet does not depend on the eigen decomposition of Laplacian matrix ». Does it mean numerically? This sentence is not clear, because even numerically it can be done in a way that depends on this matrix. However, if it is implied that the fastest algorithm can be obtained cheaply, without an eigen-decomposition of the Laplacian matrix, then I agree, and a small rephrasing would be nice.\n\nA2: We apologize for not clarifying this point at the outset. To be sure, as you pointed out, graph wavelets are \"numerically\", not intrinsically, independent of the eigen-decomposition of the Laplacian matrix. For clarity, we replaced the sentence \"graph wavelet does not depend on the eigen-decomposition of Laplacian matrix\" with a new sentence, \"graph wavelets can be obtained via a fast algorithm without requiring the eigen-decomposition of the Laplacian matrix\".\n\nQ3: Please remove all the sentences that are supposed to be an introduction for a section, i.e. the sentences between 2 and 2.1 (« We use graph… ») and 3 and 3.1 (« In many research… »). They are poorly written and do not help the reader.\n\nA3: Thank you for the suggestions. We removed these sentences in the revised version.\n\nQ5: (2.2) \"However, such a polynomial approximation also limits the flexibility to define appropriate convolution on graph\": I'm not sure I understand. In the paper you refer to, the set of filters spans the polynomials of degree less than n of a diagonalizable matrix of size nxn. Thus, a Lagrange polynomial basis could be used to interpolate any desired values?
Does it signify learning a non-diagonal (in the appropriate basis) matrix?\n\nA5: We apologize for the misleading statement. Indeed, a polynomial parameterization of order $n$ is capable of representing any diagonal matrix, i.e., any graph convolution kernel in the spectral domain. What we mean by \"the inflexibility of such a polynomial parameterization\" is that a smaller order causes approximation bias, while a larger order results in a non-localized convolution kernel. In the revised version, we clarified this point in the last paragraph of Section 2.2.\n\nQ6: (2.3) How is the parameter $s$ chosen, cross-validation? Is it adjusted such that the singular values of $\psi_s$ have a certain decay? Why is $s$ constant across layers? What does the spectrum of $\psi_s$ look like? Is it well conditioned? A big difference with (Hammond et al., 2011) is that they use a collection of wavelets. Did you consider this kind of approach? Is there a loss of information in some cases (e.g., if $\psi_s$ has a fast decay)?\n\nA6: The parameter $s$ is a hyper-parameter in our method, and its value is chosen using cross-validation. We use a constant $s$ across layers in this paper. We agree with you that it is a really interesting idea to use a different $s$ across layers, i.e., using wavelets with varying locality across layers. Different from previous methods that use a collection of wavelets, we use the wavelets associated with a constant scaling parameter $s$. We agree that using a collection of wavelets is promising and leave it as future work. In our paper, the larger $s$ is, the faster $\psi_s$ decays and the larger the range of neighboring nodes, which may result in a loss of localization. Finally, in the revised version, we included a detailed discussion in Appendix B to demonstrate the influence of the value of $s$ on the performance of our method.
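\n\nAs a concrete reference for how $s$ enters, the exact wavelet bases can be written via the eigen-decomposition; a small illustrative helper (our own code, only feasible for small graphs, assuming a symmetric normalized Laplacian):

import numpy as np

def heat_wavelet_bases(L, s):
    # psi_s = U exp(-s * Lambda) U^T and psi_s^{-1} = U exp(s * Lambda) U^T;
    # a larger s yields smoother, less localized wavelets.
    lam, U = np.linalg.eigh(L)
    psi = (U * np.exp(-s * lam)) @ U.T
    psi_inv = (U * np.exp(s * lam)) @ U.T
    return psi, psi_inv

Plotting a column of psi for increasing values of s makes the locality trade-off discussed above directly visible.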
\n", "Q7: (2.3) “The matrix $\psi_s$ and $\psi_s^{-1}$ are both sparse”. This is a critical claim which is not true in general. They are possibly sparse if the weighted Laplacian of the graph is sparse as well, as explained by the remark after Theorem 5.5 on page 16 of (Hammond et al., 2011). However, I agree this typically happens in the application you present.\n\nA7: Thank you for drawing our attention to this point. Indeed, the sparsity of the matrices $\psi_s$ and $\psi_s^{-1}$ is related to the sparsity of the Laplacian matrix. Generally, real-world networks are sparse, with the number of edges much less than the square of the number of nodes. In these cases, the matrices $\psi_s$ and $\psi_s^{-1}$ are both sparse. Prompted by your comments, we revised the statement to make an accurate claim.\n\nQ8: Panel (c) of Figure 1 is missing (in my version, at least).\n\nA8: We corrected the caption of Figure 1 in the revised version.\n\nQ9: In Section 5.3, why not compare the number of parameters with other papers? More results, possibly with somewhat higher performance, could also be reported, such as GraphSGAN and GAT. But that's fine by me.\n\nA9: We agree that it is always better to compare with more methods. Yet the major contribution here is improving spectral methods by using graph wavelets instead of eigenvectors as a set of bases. Therefore, we focus on the comparison with spectral methods. We particularly appreciate your understanding.\n\nQ10: Appendix A, isn't it a simple rephrasing of (Hammond et al., 2011)?\n\nA10: Hammond et al. (2011) described the locality of graph wavelets. Instead, in Appendix A we demonstrate that the graph convolution via graph wavelets is localized, i.e., the nodes used to update a target node are its neighboring nodes. We clarified this point in the revised version.\n\nQ11: How do you compare in terms of interpretability with [1]?\n\nA11: We agree that the meaning of \"interpretability\" is diverse in the literature. In this paper, we use \"interpretability\" to offer some intuitive understanding of the wavelet transform, compared with the Fourier transform. In [1], the proposed graph neural network, defining a flexible network via graph blocks, offers interpretability as the correlation among nodes in the spatial domain.\n\nQ12: Just as a suggestion and/or a comment: it seems similar to approaches such as lifting schemes (which basically build wavelets on graphs/manifolds), except that there is no learning involved (e.g., [2]). I think there could be great connections.\n\nA12: The paper you mentioned presents a simple construction of wavelets that could be adapted to graphs without a learning process. We cited this paper as related work in the revised version.\n", "Q2: Experimentally, the improvement over GCN is marginal. Taking into account the implementation difficulty and complexity, the proposed method is more like a proof-of-concept than of practical use. The authors are encouraged to make the empirical study more comprehensive, by including node classification in an inductive setting and/or including link prediction experiments.\n\nA2: In this paper, we focus on spectral methods for graph convolution. The standard spectral method, i.e., the Spectral CNN, is not localized, which limits its performance. To achieve localization, GCN and ChebyNet were proposed to express the graph convolution defined via the graph Fourier transform in the vertex domain. Here, we propose a new formulation of graph convolution, i.e., defining graph convolution via the graph wavelet transform, achieving localization in both the spectral and spatial domains.\n\nExperimental results demonstrate that the proposed GWNN method using the graph wavelet transform outperforms the Spectral CNN method using the graph Fourier transform by a large margin (a 10% improvement on Cora and Citeseer, and 5% on Pubmed; Table 3). Meanwhile, GWNN also outperforms GCN and ChebyNet.\n\nWe fully agree that it is always better to validate a new method on more scenarios and tasks. Here, following the common practice for evaluating spectral methods (e.g., GCN and ChebyNet) for graph CNNs, we validate our method on the widely used benchmark task, i.e., node classification on three standard datasets.\n\nQ3: The hyper-parameters $s$ (scale of the heat kernel) and $t$ (threshold to zero the \phi_s matrix) have to be tuned for each data set. This is not ideal because these parameters may not be easy to tune for real data sets, making the method difficult to use. The authors should at least give some recipes on how to tune these parameters.\n\nA3: The hyper-parameter $s$ is used to modulate the range of the neighborhood and the smoothness of the graph wavelets. The hyper-parameter $t$ is used only for computational reasons. We use cross-validation to determine the values of the hyper-parameters $s$ and $t$, following the common practice of the machine learning community. To offer more intuition, we added an analysis of the impact of the hyper-parameters on the accuracy of graph-based semi-supervised learning in Appendix B, and demonstrate our recipes for tuning them.
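\n\nFor concreteness, the role of $t$ is nothing more than the following post-processing step (an illustrative sketch with our own naming):

import numpy as np
import scipy.sparse as sp

def threshold_wavelets(psi, t):
    # Zero out entries with magnitude below t to keep psi_s sparse; t is a
    # purely computational hyper-parameter chosen by cross-validation.
    psi = sp.csr_matrix(psi)
    psi.data[np.abs(psi.data) < t] = 0.0
    psi.eliminate_zeros()
    return psi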
\n", "Thank you for the positive comments. We are delighted to see that the technical novelty of the proposed method is appreciated. Inspired by the pertinent comments, we carefully revised the paper, clarifying some misleading statements and highlighting the main contributions.\n\nQ1: By the vanilla implementation without any approximations, the multiplication with \phi_s and \phi_s^{-1} has quadratic complexity, and computing these two matrices is of cubic complexity. This is not acceptable for large graphs. In section 3.2, the authors mentioned that Hammond et al. (2011) have an approximation of \phi_s and \phi_s^{-1} with a complexity of O(|E|). However, this important technical detail is disappointingly missing in the main text. I have further concerns about whether the good properties listed in section 2.3 are preserved by this approximation, which is not discussed in the paper.\n\nEven with such an approximation, the matrix multiplication still has quadratic complexity. Reducing this complexity requires some non-trivial developments. I suggest the author(s) dive into the expression of \phi_s, write the convolution in the vertex domain, and seek possibilities for a linear approximation.\n\nA1: We apologize for not clarifying these issues at the outset, and we thank you for pointing them out. Prompted by your comments, we carefully revised Sec. 2.3 to ease the understanding of the properties of graph wavelets. Here, we clarify three main points:\n\n(1) The complexity of the multiplication with \phi_s and \phi_s^{-1} is not quadratic. Indeed, the complexity depends on the sparsity of the two matrices \phi_s and \phi_s^{-1}. For real-world networks, the number of edges is much less than the square of the number of nodes. As a result, as proved in Lemma 5.5 of Hammond et al. (2011), both \phi_s and \phi_s^{-1} are sparse. Table 4 also shows empirical results on the sparsity of \phi_s^{-1}: on the Cora dataset, more than 97% of the elements in \phi_s^{-1} are zero.\n\n(2) The computation of the two matrices is carried out using a polynomial approximation (Sec. 6 in Hammond et al., 2011). Such an approximation is a polynomial function of the Laplacian matrix, involving only matrix-vector multiplications. Since the sparsity of the Laplacian matrix scales as O(|E|), the computation of \phi_s and \phi_s^{-1} has complexity O(K*|E|), where |E| is the number of edges and K is the order of the polynomial approximation. This point is clarified in Sec. 2.3.\n\n(3) All the properties listed in Section 2.3 are satisfied when \phi_s and \phi_s^{-1} are computed in this approximate way: (a) the first property (i.e., high efficiency) is naturally satisfied; (b) the second property (i.e., high sparseness of \phi_s and \phi_s^{-1}) is satisfied, since the approximated \phi_s and \phi_s^{-1} are always sparser than the original ones; (c) the third property (i.e., the localization property) is satisfied; as shown in Appendix A, the localization property is a result of the sparseness of graph wavelets; (d) the fourth property (i.e., flexible neighborhood) is satisfied.
Indeed, the flexibility is achieved by adjusting the value of the scaling parameter $s$.\n\nIn addition, we agree that it is a really interesting idea to write the convolution in the vertex domain and seek possibilities for a linear approximation. Indeed, we are attempting to use the heat kernel to achieve this purpose. Here, we leave it as future work.\n", "We thank the reviewers and other researchers for their comprehensive and thoughtful comments. Following these comments, we carefully revised our paper, improving both its quality and clarity. We hope that the reviewers will find the revised paper suitable for acceptance.\n\nA summary of the changes made to our paper is as follows:\n\n- In Section 2.2, we added a detailed discussion of why ChebyNet has limited flexibility in defining the convolution kernel. This revision clarifies the confusion about \"the flexibility of the convolution kernel\" (Reviewer 1 and Reviewer 3).\n\n- In Section 2.4, we added the architecture of the proposed graph wavelet neural network and the loss function used when training graph wavelet neural networks on the semi-supervised node classification task.\n\n- We merged Section 3 of the original version into Section 2.5 of the revised version (Reviewer 3).\n\n- We added an experiment (Appendix B) to show the impact of the hyper-parameters on the accuracy of node classification, and demonstrate our recipes for tuning them (Reviewer 1 and Reviewer 4).\n\n- We rephrased some misleading statements to improve the clarity of the paper.\n\n- We added some references, prompted by suggestions from reviewers and public comments.\n\nWe submitted a revised version including the aforementioned revisions. Thank you for all the efforts that help us improve the paper.", "This paper proposed a new formulation of graph convolution that is based on the graph wavelet transform. The convolution network is expressed in eq. (4) with \psi_s given by eq. (1). This new formulation is exciting in that it has numerous advantages compared to spectral graph convolutions, such as the sparsity of \psi_s (see the items listed in section 2.3). The method showed marginal improvement (<1%) on node classification tasks on citation networks.\n\nOverall, I am convinced of the technical novelty and that the method can be promising. However, in its current form, there are several major weaknesses, suggesting that this work needs further development before publication.\n\n1. By the vanilla implementation without any approximations, the multiplication with \phi_s and \phi_s^{-1} has quadratic complexity, and computing these two matrices is of cubic complexity. This is not acceptable for large graphs. In section 3.2, the authors mentioned that Hammond et al. (2011) have an approximation of \phi_s and \phi_s^{-1} with a complexity of O(|E|). However, this important technical detail is disappointingly missing in the main text. I have further concerns about whether the good properties listed in section 2.3 are preserved by this approximation, which is not discussed in the paper.\n\nEven with such an approximation, the matrix multiplication still has quadratic complexity. Reducing this complexity requires some non-trivial developments. I suggest the author(s) dive into the expression of \phi_s, write the convolution in the vertex domain, and seek possibilities for a linear approximation.\n\n2. Experimentally, the improvement over GCN is marginal.
Taking into account the implementation difficulty and complexity, the proposed method is more like a proof-of-concept than of practical use. The authors are encouraged to make the empirical study more comprehensive, by including node classification in an inductive setting and/or including link prediction experiments.\n\n3. The hyper-parameters $s$ (scale of the heat kernel) and $t$ (threshold to zero the \phi_s matrix) have to be tuned for each data set. This is not ideal because these parameters may not be easy to tune for real data sets, making the method difficult to use. The authors should at least give some recipes on how to tune these parameters.\n", "Thank you for the comprehensive and pertinent comments. We also appreciate your listing several recent advances, and we will include these works in the related work or as baseline methods.\n\nFirst, regarding the difference between “spectral” and “spatial” methods, we basically agree with you that the distinction is somewhat artificial. To our understanding, the major distinction lies in the way the convolution is defined, rather than in whether the Fourier transform or the wavelet transform, i.e., transforming the graph signal from the vertex domain to the spectral domain, is explicitly leveraged. Indeed, many spectral methods are spectrum-free, e.g., Chebyshev networks and GCN. In this paper, we describe the two types of methods separately just to place our graph wavelet neural network in the appropriate literature, i.e., in line with spectral methods.\n\nWe also fully agree that designing anisotropic spectral filters on graphs, following the successful practice on manifolds, is a promising research direction.\n\nSecond, we are glad to see that you have a local spectral CNN approach based on the graph windowed Fourier transform. We believe that this is an interesting work, given that the windowed Fourier transform is localized in the vertex domain, unlike the Fourier transform. This is also why we want to replace the Fourier transform with the wavelet transform.\n\nThird, a new benchmark dataset is indeed an interesting topic and an urgent demand for the GNN research community, though our present work is still evaluated on the three traditional benchmark datasets, i.e., Cora, Citeseer and Pubmed. To our understanding, the current benchmarks work well to distinguish good methods/models from bad ones, yet they are not sufficient to distinguish better methods/models from good ones. We also look forward to seeing the emergence of new benchmarks. The other issue we want to spell out is that a new benchmark dataset is highly dependent on a new task or scenario; e.g., the current benchmarks are basically designed for the graph-based semi-supervised classification task, and the underlying assumption is graph smoothness, i.e., connected nodes are likely to share the same label. We believe that a new task or a more heterogeneous scenario would be valuable to promote the development of this research community.\n\nIn sum, thank you again for the comments and the inspiring discussion.", "I think this is an interesting work, and I would like to follow up on the previous exchange of comments.\n\nFirst, I would like to note that the distinction between \"spectral\" and \"spatial\" approaches is rather artificial, as in the end methods like Chebyshev networks or the present paper do not perform an explicit Fourier transform and boil down to applying local spatial operators (e.g. the Laplacian and its powers).
\n\nHowever, if by \"spectral\" methods one refers to those based on Laplacian-type operators, I would suggest comparing to the following baselines: [1] uses rational functions instead of polynomials (it achieves 81.9% on the standard Cora split). One of the key deficiencies of the Laplacian is that it is locally permutation-invariant (on a plane, this is manifested as rotational symmetry). Thus, any Laplacian-based filters are isotropic. It is possible to create anisotropic spectral filters on manifolds [6], but general graphs are more complicated. I am only aware of [2], which uses graph motifs (motivated by the work of Benson et al.) to create an analogy of anisotropic diffusion on graphs. This is especially useful for treating directed graphs (as a matter of fact, the original Cora citation graph is directed), which are somewhat problematic to treat with spectral techniques due to the asymmetry of the Laplacian matrix.\n\nAmong \"spatial\" methods, besides GAT you might want to check [3], which alternates convolutions on vertices and edges (using the formalism of line graphs), generalizing the graph attention mechanism proposed in GAT and leading to better, though not significantly better, performance (achieving 83.3% on Cora and 72.6% on Pubmed). Also, [4] is an interesting approach based on graph shift operators (in general, the works of Jose Moura on graph signal processing are unjustly not cited in this community).\n\nSecond, in the context of shape analysis we developed a local spectral CNN approach based on the graph windowed Fourier transform [5], which bears some resemblance to your paper (though we never tested it on general graphs).\n\nThird, I would add my 50 cents to Marc's comment regarding benchmarks: I also think Cora and Pubmed are \"too easy\". Even worse, they might provide a misleading idea about how different methods perform on \"real data\". From my experience, most GCNN approaches work well on graphs with an underlying assumption of homophilic relations (\"positive connections\"). This is in particular true for geometric data sampled from some high-dimensional manifolds. Laplacian-based methods are very appropriate in these settings. It seems that the Cora/Pubmed citation networks fall into this category. However, more challenging datasets (such as interactomes in systems biology) might have more complicated heterogeneous relations between nodes, on which algorithms working well on Cora perform poorly. What is also blatantly missing are interesting datasets with rich edge features. There seems to be a sufficient critical mass of work on graph deep learning to motivate the creation of more challenging benchmarks.\n\n1. \"CayleyNets: Graph convolutional neural networks with complex rational spectral filters\", arXiv:1705.07664\n\n2. \"MotifNet: a motif-based Graph Convolutional Network for directed graphs\", arXiv:1802.01572\n\n3. \"Dual-Primal Graph Convolutional Networks\", arXiv:1806.00770\n\n4. \"On graph convolution for graph CNNs\", DSW 2018\n\n5. \"Learning class-specific descriptors for deformable shapes using localized spectral convolutional networks\", Computer Graphics Forum 34(5):13-23, 2015\n\n6. \"Learning shape correspondence with anisotropic convolutional neural networks\", NIPS 2016\n", "Thank you for your comment! It's interesting to have a discussion on the datasets.\n\nGNNs have received a lot of attention recently, and many researchers are contributing to this hot topic. We agree that harder benchmarks could benefit the development of GNNs.
However, there is still a long way to go toward an appropriate definition of the relations between nodes, whatever the benchmark. Before the emergence of a more convincing dataset, we have to validate the proposed models on the widely adopted ones, and experiments on the three datasets help us analyze the relative merits of different models to some degree.\n\nIn our opinion, conducting experiments is just one way to validate the merits and effectiveness of a newly proposed model, and it is more valuable to have a theoretical exploration in addition to achieving better performance on the datasets. The related works mentioned in the paper provide different ideas and promote the development of GNNs. As mentioned in the paper, one of our main contributions is to point out the value of wavelets and localized bases.\n\nWe agree that we need to pay attention to harder benchmarks. Our group also focuses on social network analysis, and we are trying to generalize our method to social networks and to validate the effectiveness of different models with related benchmarks. We will also pay attention to the datasets and works that you mentioned.\n\nThanks again for your comment!\n", "Thanks for your comment and your approval of our baseline!\nGAT has become a popular work, and there are many variants of GAT. Although our work is a spectral method, we still draw inspiration from the attention mechanism. We will also pay attention to GraphSGAN.\nThank you very much!", "Disclosure: I'm one of the authors on the GGNN/GPNN papers, and I don't want to comment on the merits of the presented approach here, but on problems with graph learning experiments overall.\n\nHaving done some experiments with GNN variants on Cora/Citeseer/Pubmed, I fear they are no longer useful indicators of the usefulness of new ideas. The results reported by recent work are all very near to each other, and differences seem to be mostly noise. For example, GPNN reports 79.3% accuracy on Pubmed and 81.8% on Cora, but only 69.7% on Citeseer. These GPNN results are averaged over a number of runs with different seeds, but the variance is substantial compared to the differences between published models. Concretely, I believe that a few rounds of optimizing the 'SEED' hyperparameter would most likely be sufficient to get any of the models published in the last year to have the best results.\n\nWhat I'm saying here has nothing to do with the merits of any of the papers on graphs submitted to ICLR'19, and I'm not calling for error bars on the results here. Instead, I think we have had enough progress that we /all/ need to move on to harder benchmarks, though I'm not sure what these should be. The chemical properties from the Gilmer et al. paper may be a good fit (the data is easily accessible and reference implementations exist), but those are all very small graphs. Our graph data from the Allamanis et al. ICLR'18 paper has been released and contains larger graphs, but the dataset is painfully large to work with and there are many non-graph things to play with.", "Disclosure: I'm the author of GATs, and I'm not the original comment poster from above.\n\nBecause the purpose of this paper seems to be to propose a fully-spectral method, I agree that the main comparison should be with spectral methods, or at least methods that depend on observing some aspects of the Laplace matrix (i.e.
Chebyshev networks, GCNs, and the original formulation of MoNet).\n\nTo the best of my knowledge, the state of the art (at least on Citeseer, which seems to be the most flexible dataset to improve on out of the three) is currently held by GraphSGAN (Ding et al., CIKM 2018), and it might be useful to report this result instead (but this would only be for contextual purposes, in my opinion).", "Thank you for your comment! I would like to explain it from two perspectives:\nMotivation: Inspired by Spectral CNN (Bruna et al., 2014), many works attempt to implement convolutional neural networks on graphs. However, as far as I know, there have been few spectral methods leveraging the convolution theorem to design the convolution operator since GCN (Kipf & Welling, 2016). It is true that GAT is more flexible than spectral methods; it applies an attention mechanism to compute the relations between nodes. However, the shared parameters in GAT are the linear transformation matrix W used when calculating attention, rather than convolution kernels (filters). We hope to propose a better spectral method based on the convolution theorem. GWNN is our first attempt, which retains both the flexibility of the kernel and locality in the vertex domain.\nResults: We think we achieve comparable results on two datasets. The accuracy of GAT on Cora is 83.0%, and that of our work is 82.8%. On Pubmed, GAT scores 79.0% while our work scores 79.1%. It is true that GAT performs better than our method on Citeseer. We think this could be because the Citeseer network, which has 3,327 nodes and 4,732 edges, is sparser than the other two datasets. Since GAT computes the relations between nodes based on the hidden-layer representations, the adjacency matrix is only used as a mask; in contrast, we leverage wavelets, which depend on the structure of the network. Thus, our model does not achieve better results on Citeseer.\nThanks again for your comment! We will include GAT as a baseline and add a discussion in a revised version of our paper.\n", "You should include the experimental results of GAT, which are better than yours, and then explain why your results are lower." ]
[ -1, 7, -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 5, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "BkeGhSLykN", "iclr_2019_H1ewdiR5tQ", "SJgbLEF0RQ", "ryg5qifzkV", "HyedjpicA7", "iclr_2019_H1ewdiR5tQ", "iclr_2019_H1ewdiR5tQ", "BJg_QmL8Rm", "SJxDst0O0X", "SJxDst0O0X", "SJl0RLrgC7", "SJgloxU4CQ", "BkeYvdreR7", "B1lZ8H4Ch7", "r1gNgBRO3X", "r1gNgBRO3X", "HkgPJe2k6Q", "HkgPJe2k6Q", "iclr_2019_H1ewdiR5tQ", "iclr_2019_H1ewdiR5tQ", "Sye2KXfec7", "iclr_2019_H1ewdiR5tQ", "HJlW-sd1cX", "ryeVsqwy9X", "ryeVsqwy9X", "BJlx-8Dk57", "BylQV6LAFX", "iclr_2019_H1ewdiR5tQ" ]
iclr_2019_H1fU8iAqKX
A rotation-equivariant convolutional neural network model of primary visual cortex
Classical models describe primary visual cortex (V1) as a filter bank of orientation-selective linear-nonlinear (LN) or energy models, but these models fail to predict neural responses to natural stimuli accurately. Recent work shows that convolutional neural networks (CNNs) can be trained to predict V1 activity more accurately, but it remains unclear which features are extracted by V1 neurons beyond orientation selectivity and phase invariance. Here we work towards systematically studying V1 computations by categorizing neurons into groups that perform similar computations. We present a framework for identifying common features independent of individual neurons' orientation selectivity by using a rotation-equivariant convolutional neural network, which automatically extracts every feature at multiple different orientations. We fit this rotation-equivariant CNN to responses of a population of 6000 neurons to natural images recorded in mouse primary visual cortex using two-photon imaging. We show that our rotation-equivariant network outperforms a regular CNN with the same number of feature maps and reveals a number of common features, which are shared by many V1 neurons and are pooled sparsely to predict neural activity. Our findings are a first step towards a powerful new tool to study the nonlinear functional organization of visual cortex.
accepted-poster-papers
The overall consensus after an extended discussion of the paper is that this work should be accepted to ICLR. The back-and-forth between reviewers and authors was very productive, resulting in substantial clarification of the work and a modification (trending positive) of the reviewer scores.
test
[ "HklYxH1ShX", "B1xewvj5AQ", "rylC6Sjq0Q", "SJewNg6FC7", "rygl-JTFCQ", "SkeVCOsFCX", "BylsquoYCQ", "SklJEXPI0m", "BJgDxJhrC7", "ByeMgTF40X", "H1egkTYEA7", "SJlinIY4AX", "S1epMlHQRX", "HkxrJkvlAX", "HkesEK4KaX", "Bkl4DDIda7", "rJgB5uLO6Q", "SkeaZwLuaQ", "H1gsFfq1Tm", "Skx-rp2qnQ", "SylAnad4om", "HkerUYEy67", "rJgAi-CC3Q" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "public", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author" ]
[ "This paper applies a rotation-equivariant convolutional neural network model to a dataset of neural responses from mouse primary visual cortex. This submission follows a series of recent papers using deep convolutional neural networks to model visual responses, either in the retina (Batty et al., 2016; McIntosh et al., 2016) or V1 (Cadena et al., 2017; Kindel et al., 2017; Klindt et al., 2017). The authors show that adding rotation equivariance improves the explanatory power of the model compared to non-rotation-equivariant models with similar numbers of parameters, but the performance is not better than other CNN-based models (e.g. Klindt et al., 2017). The main potential contributions of the paper are therefore the neuroscientific insights obtained from the model. However, I have concerns about the presented data and the validity of rotation equivariance in modeling visual responses in general (below). Together with the fact that the model does not provide better explanatory power than other models, I cannot recommend acceptance. I am open to discussions with the authors, but do not anticipate a major change in the rating.\n\nUpdate after revisions: The authors performed extensive work to address my concerns. This showed that some concerns (RF appearance) were valid, and the authors removed them from the final manuscript. I raised my score accordingly.\n\n1. As noted by the authors, the finding that “Feature weights are sparse” (page 6) could be due to the sparsity-inducing L1 penalty. The fact that a model without the L1 penalty performs worse does not mean that there is sparsity in the underlying data. For example, the unregularized model could be overfitting. A more careful model selection analysis is necessary to show that the data is better fit by a sparse model than by a dense one.\n\n2. The finding that there are center-surround or asymmetric (non-Gabor) RFs in mouse V1 is not novel and not specific to this model (e.g. Antolik et al., 2016).\n\n3. Many of the receptive fields in Figure 6 look pathological (overfitted?) compared to typical V1 receptive fields in the literature. I understand that sensitivity to previously undetected RF features is a goal of the present work. However, given how unusual the RFs look, more controls are necessary to ensure they are not an artefact of the method, e.g. the activation maximization approach with gradient preconditioning, the sparsity constraints, or overfitting. Perhaps a comparison of RFs learned on two disjoint subsets of the training set would help to determine which features are reproducible.\n\n4. Should orientation be treated as a nuisance variable? Natural image statistics are not rotation-invariant. In the visual system, especially in mice, it is not clear whether orientation is completely disentangled from other RF properties. The orientation space is not uniformly covered, and some directions have special meaning (e.g. cardinal directions), such that it might be invalid to assume that the visual system is equivariant to rotation. (The same concern applies to the translation equivariance assumed when modeling visual RFs with standard CNNs.) Of course, there is a tradeoff between model expressiveness and the need to make assumptions to fit the model with realistic amounts of data. However, this concern should at least be discussed.\n\n5. Some more details about the neural recordings would be good. What calcium indicator? How was the recording targeted to V1?
Perhaps some example traces.", "We just uploaded a final revision addressing all reviewers' comments and the ongoing discussions. Here are the main changes since the beginning of the rebuttal period:\n\n- Removed the language of \"cell types\" when referring to our analyses (R3).\n- Performed a proper control for the claim that readout weights are sparse (R1).\n- Replaced Fig. 6 (preferred stimuli) by linearized/gradient receptive fields to address R1's concern that preferred stimuli may be overfit.\n\nWe would like to thank all reviewers for their very constructive feedback and great responsiveness during the discussion period. It has really improved the paper. We would also like to ask all reviewers to review their scores and make sure they reflect the current version of the paper.", "Thanks for the quick feedback! We believe you have a point and therefore decided to remove the activity maximization figure from the paper and show only the linearized/gradient receptive fields.\n\nAlthough we do not entirely share your negative view of this analysis (we took great care not to overfit, using cross-validated regularization and early stopping), we do realize that the burden of proof is on us and that at this point we do not have direct experimental evidence.\n\nThanks again for the responsiveness and constructive feedback! It really improved the paper. We are now uploading a final revision and would appreciate it if you updated your score to reflect our discussions and the final version of the paper.", "I am now more confident that there is some sparse structure in the RF weights. Regarding the RF structure, see my other comment.", "Thanks for performing the additional analyses; I appreciate your work doing these additional controls. Unfortunately, I think the new controls (Figure A.1) are textbook evidence for overfitting and confirm my concerns.\n\nThe gradient receptive fields (row 1) seem to suggest that the additional structure in the RFs is not due to the rotation equivariance, but due to the activity maximization procedure. The split-dataset experiments (rows 4 and 5) strongly suggest that the activity maximization magnifies data-dependent differences in the models (i.e. magnifies over-fit features in the RF structures). I found hardly any matching structure in the two 50%-models, except for the classical RF structure that was already present in the gradient RFs. Because this is likely due to differences in the training data, not in the weight initialization, it is unsurprising that models with different initializations are more similar.\n\nEven though models fit on 50% of the data perform worse, the RFs they produce are qualitatively similar to those of the \"All data\" model, suggesting that the 50%-models behave similarly. To me, this suggests that the activity maximization method for determining RFs is not robust enough to draw conclusions about RF structure from it at this point.\n\nIf I were you, I would not trust this structure to be biologically meaningful. To show that the additional structure in the activity-maximization RFs is biologically meaningful despite the lack of consistency between datasets, it would be necessary to actually show these stimuli to an animal and test whether they elicit stronger responses.\n\nMy suggestion for the paper would be to remove the activity maximization and instead use the gradient RFs.
The contributions are then the improved model fit quality, and the result that L1 regularization improves performance over L2 regularization.", "We performed the analysis and added Figure A.1 to the most recent revision. For a detailed response, see https://openreview.net/forum?id=H1fU8iAqKX&noteId=BJgDxJhrC7&noteId=BylsquoYCQ", "We performed the control analysis you suggested, splitting the data into two halves, fitting models on each half, and then computing the preferred stimuli (see the new Fig. A.1). In addition, we also computed the preferred stimuli for another model, which used a different initialization and different hyperparameters, but all the data.\n\nThe bottom line of the analysis is that the main finding holds robustly: preferred stimuli are much more global than linearized (gradient) receptive fields. At the same time, it also shows that these preferred stimuli do exhibit quite some idiosyncrasies, as you expected.\n\nHowever, unfortunately the analysis using the two halves of the data is not as telling as one would hope. The main issue is that with only half of the data available, the model fits are not as good and the preferred stimuli are not very reproducible (they often look really poor, indicating a poor fit). Using two different initializations, in contrast, produces more reproducible results.\n\nOverall, we agree that one should take these images with a grain of salt, but we do think they reveal an interesting and robust difference between the linearized RFs and the nonlinear function of the neurons. Due to time constraints we have not had the time to add a fully nuanced discussion of these issues, but will certainly make sure not to overstate anything in the final version. Any suggestions on where we should improve the wording/claims would be very welcome.", "Looks good!\n\nI look forward to the result of the experiment proposed by Rev. 1, where you split the data into two halves, to see whether the preferred stimuli shown in Fig. 6 are robust to different fits of the model.", "We just uploaded a revision with a new figure showing the gradient RFs (Fig. 7) of the same neurons as in Fig. 6.\n\nThe gradient RFs mostly look like Gabor filters and are much more localized than the preferred stimuli. This result is indeed reassuring, as it shows that with similar visualization methods, we obtain similar results as previous work. Thanks for the suggestion!", "Great, we got the performance comparison sorted out!\n\nThanks for your suggestions. We should have thought about using L2 for the feature weights; we have now run this control. Indeed, as you probably expected, the performance is now better than for the model without regularization of the feature weights. However, the performance is still worse than with L1 regularization (L1: 0.47 vs. L2: 0.43; see the updated Table 1). Thus, the difference is not as extreme as our original (flawed) comparison suggested, but it is still substantial. We believe this control provides evidence that sparsity is a valid assumption.\n\nWe also added scale bars to Fig. 6. The crops that are shown are 80x80 deg, i.e. covering three cycles for the *average* neuron. Classical RFs in mouse V1 can be as small as 5 deg (i.e. tiny in comparison to the crops shown in Fig. 6), which could explain why some of the features look rather small.
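\n\nFor clarity, the control amounts to swapping the penalty on the per-neuron feature weights of the readout; a minimal sketch (illustrative code with our own names; the Poisson-style objective and the penalty weight are our assumptions here, not the exact training script):

import torch

def readout_objective(rate, spikes, w_feat, penalty="l1", lam=1e-3):
    # Poisson negative log-likelihood of the predicted rates ...
    nll = (rate - spikes * torch.log(rate + 1e-8)).mean()
    # ... plus a cross-validated penalty on the per-neuron feature weights
    if penalty == "l1":
        reg = w_feat.abs().sum(dim=1).mean()   # sparsity-inducing
    else:
        reg = w_feat.pow(2).sum(dim=1).mean()  # L2 control
    return nll + lam * reg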
", "Thank you for your interesting comments.\n\nAbout your question: \"Is it because of the high fitting accuracy of the rotation-equivariant CNN? Or just overfitting of the activity maximization process?\"\n\nHow would you distinguish between these two possibilities? Would the suggestion of Rev. 1 address your concern:\n\n\"Perhaps a comparison of RFs learned on two disjoint subsets of the training set would help to determine which features are reproducible.\"", "Thanks for your comments. As we also pointed out in the response to the reviewers, Fig. 6 is just a teaser and certainly should not be taken as a full-fledged analysis of nonlinear response properties (which is a highly non-trivial problem in itself). We will try to make this point more explicit in the final version. A detailed analysis is forthcoming, but beyond the scope of this paper, which establishes the rotation-equivariant model as fitting the data better than regular CNNs.\n\nTo briefly address your questions:\n\n1) The preferred stimuli are obtained non-parametrically by directly optimizing in image space. What would be a good way of summarizing them? We could try to come up with some sort of classification, but we are not aware of a principled way of doing so. An interesting idea for future work would be to run the stimulus battery from [1] through our model and perform their classification in silico.\n\n2) Researchers have indeed found non-Gabor optimal stimuli (you're citing one of them, [1]). With respect to [2], they did not systematically investigate deviations from Gabor filters. The only piece of evidence for Gabors in their study is two example neurons and the relatively high average correlation with a fitted Gabor (their Fig. 5E). However, note that a Gabor with a low carrier frequency is a Gaussian blob, which will generate a relatively high correlation with a center-surround kernel. Also, note that there are a couple of differences from our study:\n\n a) Their optimization for finding optimal stimuli runs only for 10 iterations, which is probably closer to a linear approximation than running the optimization until convergence (as we do).\n\n b) Their stimuli are masked with an aperture of 60 deg, whereas ours are 120x90 deg. Therefore, it is possible that our stimuli elicited stronger surround modulation (the optimal images shown in our Fig. 6 are 80x80 deg crops).\n\n3) We did not present gratings in this experiment. However, there is evidence [3] that preferred orientations predicted from CNNs fit to natural image data generally match those obtained using oriented stimuli. Since #12 has pretty clearly oriented features in the center, we would indeed expect those cells to be tuned to orientation. #13 is less clear and would require more detailed investigation.\n\n[3] Sinz et al., bioRxiv. https://www.biorxiv.org/content/early/2018/10/25/452672
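\n\nTo make point 1 above concrete: the procedure is essentially gradient ascent in pixel space on a neuron's predicted response. A minimal sketch (illustrative code with our own names; the actual analysis additionally used gradient preconditioning and a contrast constraint, which we simplify here):

import torch

def preferred_stimulus(model, neuron_idx, img_shape, steps=500, lr=0.05):
    # Start from a (nearly) blank image and ascend the predicted response.
    img = (0.01 * torch.randn((1, 1) + tuple(img_shape))).requires_grad_()
    optimizer = torch.optim.Adam([img], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        (-model(img)[0, neuron_idx]).backward()  # maximize predicted activity
        optimizer.step()
        with torch.no_grad():
            img.clamp_(-1.0, 1.0)  # keep pixel values in a fixed range
    return img.detach()

Note that a single gradient step from the blank image corresponds to the linearized (gradient) receptive field discussed elsewhere in this thread.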
", "I'm curious about the preferred stimuli shown in Fig. 6.\n\n1) Can you show a brief population summary of the preferred stimuli? E.g., what percentage of the cells exhibited Gabor filters? (cf. Fig 5D in [1])\n\n2) If the preferred stimuli of many V1 neurons are not Gabor filters, why did most previous studies fail to reveal these classes? For example, a recent paper [2] fitted a CNN to the image-response function of V1 neurons and performed activity maximization (I think the approach looks very similar to yours, though it's not cited in the \"Related work\" section), but in this paper most preferred images are well fitted by Gabors (Fig 5E). I am interested in where the differences arise. Is it because of the high fitting accuracy of the rotation-equivariant CNN? Or just overfitting of the activity maximization process?\n\n3) Also, if you presented drifting gratings during the two-photon imaging experiments, I'd like to know whether neurons like those in cluster #12 or #13 are tuned to orientation.\n\n[1] Tang, S., Lee, T. S., Li, M., Zhang, Y., Xu, Y., Liu, F., ... & Jiang, H. (2018). Complex pattern selectivity in macaque primary visual cortex revealed by large-scale two-photon imaging. Current Biology, 28(1), 38-48.\n\n[2] Ukita, J., Yoshida, T., & Ohki, K. (2018). Characterization of nonlinear receptive fields of visual neurons by convolutional neural network. bioRxiv, 348060.", "I acknowledge that I misunderstood the performance comparison. Providing a model with better fit quality is a considerable contribution.\n\nRegarding my other comments:\n\n1. Sparsity: One obvious control would be to use L2 instead of L1 regularization. Are L2-regularized models statistically significantly worse than L1-regularized models? Or did I miss a reason why trying L2 regularization is not possible?\n\n2. Novelty of center-surround RFs: Perhaps this could be discussed in the text as you did in your response above.\n\n3. RF structure: Thanks, I am looking forward to the new analyses. Just to add more detail, one reason why I am skeptical is that many of the RF features, e.g. in 12 and 13 in Figure 6, look very small. What is the size of these features (in degrees of visual space)? A scale bar in the RF plots would be helpful. How does this compare to published values for the resolution of the mouse visual system (e.g. both Niell and Stryker, 2008, and Marshel et al., 2011, report a mean preferred spatial frequency of about 0.04 cycles per degree)?\n\n4. Orientation as a nuisance variable: Thanks for adding the additional discussion.\n\n5. Thanks for adding the additional details.", "I would like to thank the authors for thoroughly addressing all my concerns. These clarifications confirm to me the rigor and quality of the work.\n\n\"Regarding your question about a single gradient step from a blank image: these indeed tend to look more similar to standard Gabor filters (we performed such a comparison in a different project not using rotation equivariance and not published yet). We can create a new Figure analogous to Fig. 6 but using a single gradient step and add it to the final version of the paper.\"\n\nI think this would be a valuable addition to the paper because it shows how the model can also account for our old conception of V1 RFs (Gabor-like RFs obtained from white-noise stimulation, which is the filter corresponding to the best linear approximation of the cell's response). It might also help address the concern of Rev. 1: \"Many of the receptive fields in Figure 6 look pathological (overfitted?) compared to typical V1 receptive fields in the literature.\"\n\nTo justify the mathematical equivalence of a one-step gradient ascent from a blank image to the STA in response to a perturbative white-noise stimulus, I think you could cite:\n\nMelinda E. Koelling and Duane Q. Nykamp. Computing linear approximations to nonlinear neuronal response. Network (Bristol, England), 19(4):286-313, 2008. ISSN 1361-6536. doi: 10.1080/09548980802503139\n\nOdelia Schwartz, Jonathan W. Pillow, Nicole C. Rust, and Eero P. Simoncelli. Spike-triggered neural characterization. Journal of Vision, 6(4):13, July 2006. ISSN 1534-7362. doi: 10.1167/6.4.1", "Thank you for your careful review.
We would like to start by clarifying the contributions of our paper before providing responses to your five specific points.\n\nAs also mentioned in the response to AnonReviewer3, our long-term goal is to find out whether V1 is organized in distinct, well-defined clusters of functional cell types. However, this is a complex biological question that will require additional, very careful and extensive data analysis as well as potentially further direct experimental verification. The contributions of the present paper are therefore not the biological insights, but instead the development and verification of methods that will allow us to address this question. Specifically, this means (1) adapting rotation-equivariant CNNs to the problem of predicting neural responses, (2) showing that it can successfully be trained using a steerable basis for the filters, (3) showing that it outperforms conventional CNNs, and (4) showing that it does so with substantially fewer features than previous methods. We revised the introduction to make this point more explicit.\n\nIn addition, on re-reading your comments, we believe there may be another misunderstanding regarding performance that we would like to clarify. You write:\n\n“The authors show that adding rotation equivariance improves the explanatory power of the model compared to non-rotation-equivariant models with similar numbers of parameters, but the performance is not better than other CNN-based models (e.g. Klindt et al., 2017).”\n\nThis sentence mentions “non-rotation-equivariant models” and “other CNN-based models (e.g. Klindt et al., 2017)” as if they were two different entities. However, the non-rotation-equivariant model we use is exactly that of Klindt et al. 2017 (more or less literally, their code is public). So our model *does perform better* than that of Klindt et al. 2017. The only reason why the numbers are not better is that it is a different (larger) dataset.\n\nWe hope this clarifies the contribution. Now to your comments about the biological findings:\n\n1. [Sparsity] \nWe agree that the fact that a model without L1 penalty performs worse does not mean that there is sparsity in the underlying data, and we are open to suggestions on how to better substantiate that point. The unregularized model is clearly overfitting, and by using cross-validated regularization we can only get better. However, the fact that L1 regularization leads to such a strong improvement does suggest that the sparsity assumption is pretty good. We would be grateful if you could expand on what you mean by “a more careful model selection analysis.”\n\n2. [Novelty of center-surround/asymmetric RFs]\nYou are right that these have of course been observed before (also long before Antolik 2016). However, what’s different is the prevalence of such non-Gabor RFs. For instance, if you look at Fig. 3 of Antolik 2016, you will notice that most RFs that are discernible are actually quite clean Gabors, while for the non-linear neurons (pink boxes in his Fig. 3) he does not obtain good RFs (unsurprisingly).\n\n3. [Pathological RFs in Fig. 6]\nThank you – this is a good suggestion. We will make sure to include such an analysis in the final version. 
Given our experience with activity maximization approaches, we expect them to look very similar overall, except for some of the noisy features in the surround (but certainly not more variable than across different neurons within the same group, which have different receptive field locations and have therefore seen different stimuli; see also the response to AnonReviewer2). \n\n4. [Should orientation be treated as a nuisance variable?]\nWe agree that the visual system is neither equivariant to rotation nor to translation. However, that does not undermine the usefulness of equivariant representations at all. If your concern was valid and, for instance, horizontal filters looked completely different from vertical ones, then our rotation-equivariant network would not perform better than a regular CNN, because it would require a large number of features. But that’s not what we find. We find that the assumption of rotation equivariance (1) *does* lead to a substantial improvement over a regular CNN (see also our clarifications in the other comment) and (2) does so with fewer features, showing that it’s a good assumption to make. We added a sentence discussing this point (2nd sentence in the discussion). We will also add a plot showing the distribution of orientations being used at the readout stage for each feature, which will address the question of how rotation-equivariant the representation in mouse V1 actually is.\n\n5. [Experimental details]\nWe added those details (Section 4, subsection “Neural data”) and will show example traces in the final version.", "Thank you for reviewing our paper and providing constructive comments.\n\nRegarding the optimal stimuli presented in Fig. 6: Thank you for this suggestion. We will investigate whether an analysis of the optimal stimulus in the training set is helpful. Unfortunately, this may not be the case, because neural responses are highly variable and therefore the stimulus that happened to elicit the strongest response may not actually be the one that drives the neuron most on average. In addition, we have done such an analysis in the past on networks trained for large-scale object recognition and found that one needs several hundred thousand image patches to find examples close to the optimum (and these networks do not have observation noise). Note, however, that the variability across neurons within one group is a good indicator of which aspects of the optimal stimuli are reproducible vs. noise, because the neurons have different receptive field locations and have therefore seen different stimuli during the experiment. See also the suggestion by AnonReviewer1 to split the data in two halves, which we will perform for the final version.\n\nWe will add the average correlations of the neurons shown in Fig. 6 as you suggested.\n\nRegarding your trouble with Fig. 4: We will make it full width, so it is more legible. Overall, the point of the figure is to show that for a large fraction of neurons the weights are very sparse. We are open to any suggestions on how to improve the Figure.", "Thank you for your thoughtful and constructive review. Below we respond to your seven comments:\n\n1. [Functional cell types]\nFinding out whether V1 is organized in distinct, well-defined clusters of functional cell types is indeed the big biological question we’re after. As you point out correctly, we do not answer this question in the present paper. 
The contribution of the present paper is not the biological finding that there are such well-defined cell types, but instead the development and verification of methods that allow us to address this question. With current methods (i.e. Klindt et al. 2017) we could not answer this question, because two Gabor filters with identical parameters except orientation would be considered two different cell types, which is undesirable from a biological perspective. The methods presented in our paper overcome this gap and let us treat preferred orientation as a nuisance just like receptive location. Therefore, we think it is appropriate to start the paper with these considerations, as they put our work in context by stating the long-term goals. We rephrased the introduction (third paragraph) to state the contributions more clearly. Please let us know if this revision addresses your concern or if you think that a more substantial revision of the introduction is necessary.\n\n2. [CNN trained on neural data, not image categorization] \nThank you. We revised the abstract (2nd sentence), introduction (third paragraph) and Section 3 (last sentence of first paragraph). Let us know if it’s still not clear.\n\n3. [Functional cell types #2]\nYou have a point. We changed the wording to “functional groups” wherever we describe what we did. The only place where we refer to functional cell types is the introduction where we provide the background/context of our work.\n\n4. [Batch norm]\nBatch normalization serves two purposes here: (1) it helps training and (2) it ensures the features of the last CNN layer have unit variance, which is useful given the L1 penalty on the readout weights. Note that at test time, it does not have an effect. The normalization constants can be fully absorbed into the linear weights. In other words, if we gave you a trained model, you would not be able to tell whether it was trained with batch normalization or without, because they are indistinguishable at test time.\n\n5. [Non-standard filters]\nWe agree that they are not directly comparable. The point is that this model-based procedure reveals deviations from the linear models. It not only shows that spike-triggered average from white noise is insufficient for characterizing V1 neurons, but also provides a means for characterizing them. Regarding your question about a single gradient step from a blank image: these indeed tend to look more similar to standard Gabor filters (we performed such a comparison in a different project not using rotation equivariance and not published yet). We can create a new Figure analogous to Fig. 6 but using a single gradient step and add it to the final version of the paper.\n\n6. [Rotation-invariant neurons]\nNo, except for the trivial ones that have circularly symmetric center-surround receptive fields (or preferred stimuli), we did not find any evidence for rotation invariant neurons.", "Thanks for your quick reply and the clarification. We will address the other points shortly. Just one more quick comment about performance, because we think it's important to be on the same page.\n\nFor a given dataset, the difference in performance between our model and a conventional CNN is *not* within measuring variability. 
We report the SD of the best models in Table 1, and the difference is several SDs between the two models.\n\nThe difference in performance *across datasets* is not very meaningful, because the absolute numbers depend on factors such as the signal-to-noise ratio of the recordings, the brain state of the animal and the number of repeats per image. We should have avoided that one sentence about performance being comparable to earlier studies for this very reason.", "In this interesting study, the authors show that incorporating rotation-equivariant filters (i.e. enforcing weight sharing across filters with different orientations) in a CNN model of the visual system is a useful prior to predict responses in V1. After fitting this model to data, they find that the RFs of model V1 cells do not resemble the simple Gabor filters of textbooks, and they present other quantitative results about V1 receptive fields. The article is clearly written and the claims are supported by their analyses. It is the first time to my knowledge that a rotation-equivariant CNN is used to model V1 cells.\n\nThe article would benefit from the following clarifications:\n\n1. The first paragraph of the introduction discusses functional cell types in V1, but the article does not seem to reach any new conclusion about the existence of well-defined clusters of functional cell types in V1. If this last statement is correct, I believe it is misleading to begin the article with considerations about functional cell types in V1. Please clarify.\n\n2. For clarity, it would help the reader to mention in the abstract, introduction and/or methods that the CNN is trained on reproducing V1 neuron activations, not on an image classification task as in many other studies (Yamins 2014, etc). \n\n3. “As a first step, we simply assume that each of the 16 features corresponds to one functional cell type and classify all neurons into one of these types based on their strongest feature weight.” and “The resulting preferred stimuli of each functional type are shown in Fig. 6.“\nAgain, I think these statements are misleading because they suggest that V1 cells indeed cluster in distinct functional cell types rather than form a continuum. However, from the data shown, it is unclear whether the V1 cells recorded form a continuum or distinct clusters. Unless this question is clarified and the authors show the existence of functionally distinct clusters in their data, they should preferably not mention \"cell types\" in the text.\n\nSuggestions for improvement and questions (may not necessarily be addressed in this paper):\n\n4. “we apply batch normalization”\nWhat is the importance of batch normalization for successfully training the model? Do you think that a sort of batch normalization is implemented by the visual system? \n\n5. “The second interesting aspect is that many of the resulting preferred stimuli do not look typical standard textbook V1 neurons which are Gabor filters. ”\nOK but the analysis consists of iteratively ascending the gradient of activation of the neuron from an initial image. This cannot be compared directly to the linear approximation of the V1 filter that is computed experimentally from doing a spike-triggered average (STA) from white noise. A better comparison would be to do a single-step gradient ascent from a blank image. In this case, do the filters look like Gabors?\n\n6. Did you find any evidence that individual V1 neurons are themselves invariant to a rotation?\n\n7. The article could be more self-contained. 
There are a lot of references to Klindt et al. (2017), on which this work is based, but it would be nice to make the article understandable without having to read this other article.\n\nTypo: Number of fearture maps in last layer \n\nConclusion:\nI believe this work is significant and of interest for the rest of the community studying the visual system with deep networks, in particular because it finds an interesting prior for modeling V1 neurons, which can probably be extended to the rest of the visual system. However, it would benefit from the clarifications mentioned above.", "The paper analyses the data collected from 6005 neurons in a mouse brain. Visual stimuli are presented and the responses of the neurons recorded. In the next step, a rotational equivariant neural network architecture together with a sparse coding read-out layer is trained to predict the neuron responses from the stimuli. Results show a decent correlation between neuron responses and the trained network. Moreover, the rotational equivariant architecture beats a standard CNN with a similar number of feature maps. The analysis and discussion of the results is interesting. Overall, the methodological approach is good.\n\nI have trouble understanding the plot in Figure 4; it also does not print well and is barely readable on paper.\n\nI have a small problem with Figure 6, where \"optimal\" response-maps are presented. From my understanding, many of those feature maps do not look similar to feature maps that are usually considered. Given the limited data available and the non-perfect modeling of neurons, the computed optimal response-map might include features that are not present in the dataset. Therefore, it would be interesting to compare those results with the stimuli used to gather the data. E.g. for a subset of neurons, one could pick the stimulus that created the maximum response and compare that to what the stimulus with the maximum response of the trained neuron was. It might be useful to include the average correlation of the neurons belonging to each of the 16 groups (if there are any meaningful differences), especially as the cut-off of \"correlation 0.2 on the validation set\" is rather low.\n\nNote: I am not an expert in the neural-computation literature; I am adapting the confidence rating accordingly.", "Thanks for the comment. A fair assessment, given the variability between datasets, would be that your model performs similarly to previous studies, as you stated in the text (\"Performance was comparable to earlier studies modeling V1 responses with similar stimuli (Klindt et al., 2017; Antolík et al., 2016).\").\n\nI agree that you made a very rigorous effort in comparing the models (e.g. the hyperparameter search for control models).\n\nMy thinking was that an improvement in model fit quality cannot be considered a main contribution of the paper, because the difference is within measuring variability. In any case, I don't think it is essential to show a performance improvement. The insights gained from the model are more important. This is what my main comments are about.", "Thank you for reviewing our paper. We would like to make a quick clarification right away, which we hope will change your assessment. We will provide a more detailed response to the other comments later.\n\nThere seems to be a misunderstanding about the performance of the model. As shown in Table 1, our rotation-equivariant CNN does outperform a regular CNN (Klindt et al. 2017). 
\n\nA couple of more detailed points to also keep in mind in this respect:\n- We are quite conservative with the model comparison: Table 1 shows the rotation-equivariant model with 16 features, which is not even the best-performing one among all the rotation-equivariant ones we tested (Fig. 2).\n- Related to the above, the regular CNN has been subjected to an equally rigorous hyperparameter search, with the range of hyperparameters taken from Klindt et al. (2017). Thus, the comparison is as fair as we can make it.\n- The performance in absolute numbers is lower than that in Klindt et al. (2017), but these numbers are not comparable because different datasets are used. There is quite some variability across datasets (see, e.g., Table 1 in Klindt et al. 2017)." ]
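The activity-maximization procedure debated in this record (iterative gradient ascent in image space, with a single step from a blank image approximating the linear RF/STA) can be sketched concretely. Below is a minimal, hypothetical PyTorch illustration, assuming a trained response model `model` that maps an image batch to per-neuron responses; the names `model`, `img_shape`, and `neuron_idx` are illustrative, not from the paper:

```python
import torch

def preferred_stimulus(model, img_shape, neuron_idx, steps=200, lr=1.0, max_norm=10.0):
    """Gradient ascent on the input image to maximize one model neuron's response.

    With steps=1 starting from a blank image, the (normalized) gradient is a
    linear approximation of the receptive field, analogous to an STA.
    """
    img = torch.zeros(1, *img_shape, requires_grad=True)  # blank starting image
    for _ in range(steps):
        if img.grad is not None:
            img.grad.zero_()
        response = model(img)[0, neuron_idx]  # assumes output shape (batch, n_neurons)
        response.backward()
        with torch.no_grad():
            img += lr * img.grad / (img.grad.norm() + 1e-8)  # normalized ascent step
            if img.norm() > max_norm:                        # keep contrast bounded
                img *= max_norm / img.norm()
    return img.detach()
```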
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 8, -1, -1 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, -1, -1 ]
[ "iclr_2019_H1fU8iAqKX", "iclr_2019_H1fU8iAqKX", "rygl-JTFCQ", "ByeMgTF40X", "BylsquoYCQ", "SklJEXPI0m", "HkxrJkvlAX", "BJgDxJhrC7", "HkesEK4KaX", "HkxrJkvlAX", "S1epMlHQRX", "S1epMlHQRX", "iclr_2019_H1fU8iAqKX", "Bkl4DDIda7", "SkeaZwLuaQ", "HklYxH1ShX", "SylAnad4om", "Skx-rp2qnQ", "HkerUYEy67", "iclr_2019_H1fU8iAqKX", "iclr_2019_H1fU8iAqKX", "rJgAi-CC3Q", "HklYxH1ShX" ]
iclr_2019_H1g0Z3A9Fm
Supervised Community Detection with Line Graph Neural Networks
Community detection in graphs can be solved via spectral methods or posterior inference under certain probabilistic graphical models. Focusing on random graph families such as the stochastic block model, recent research has unified both approaches and identified both statistical and computational detection thresholds in terms of the signal-to-noise ratio. By recasting community detection as a node-wise classification problem on graphs, we can also study it from a learning perspective. We present a novel family of Graph Neural Networks (GNNs) for solving community detection problems in a supervised learning setting. We show that, in a data-driven manner and without access to the underlying generative models, they can match or even surpass the performance of the belief propagation algorithm on binary and multiclass stochastic block models, which is believed to reach the computational threshold in these cases. In particular, we propose to augment GNNs with the non-backtracking operator defined on the line graph of edge adjacencies. The GNNs also achieve good performance on real-world datasets. In addition, we perform the first analysis of the optimization landscape of using (linear) GNNs to solve community detection problems, demonstrating that under certain simplifications and assumptions, the loss value at any local minimum is close to the loss value at the global minimum/minima.
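For concreteness, the non-backtracking operator mentioned in the abstract acts on directed edges: entry ((i→j),(k→l)) is 1 iff j = k and l ≠ i. A minimal NumPy sketch of its construction from an undirected edge list follows; this is an illustration, not the authors' implementation:

```python
import numpy as np

def non_backtracking(edges):
    """Non-backtracking operator B on the directed edges of an undirected graph.

    B[(i->j), (k->l)] = 1 iff j == k and l != i, i.e. a walk may continue from
    edge (i, j) to edge (j, l) as long as it does not immediately return to i.
    """
    directed = [(i, j) for i, j in edges] + [(j, i) for i, j in edges]
    index = {e: n for n, e in enumerate(directed)}
    m = len(directed)
    B = np.zeros((m, m))
    for (i, j) in directed:
        for (k, l) in directed:
            if j == k and l != i:
                B[index[(i, j)], index[(k, l)]] = 1.0
    return B, directed

# Example: a triangle plus a pendant edge
B, directed_edges = non_backtracking([(0, 1), (1, 2), (2, 0), (2, 3)])
```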
accepted-poster-papers
This paper introduces a new graph convolutional neural network, called LGNN, and applies it to solve the community detection problem. The reviewers think LGNN yields a nice and useful extension of graph CNNs, especially in using the line graph of edge adjacencies and a non-backtracking operator. The empirical evaluation shows that the new method provides a useful tool for real datasets. The reviewers raised some issues in writing and referencing, for which the authors have provided clarification and modified the paper accordingly.
train
[ "S1lor2mcAQ", "H1g2iCOrC7", "SkxI53uPpX", "rylL0sdwT7", "HkgV9quwam", "SyxP9MFRhm", "rklkDauTn7", "r1x6e8mXhX" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We would like to thank again our three reviewers for their time and high-quality feedback. We have integrated their comments into an updated manuscript. The main changes include:\n\n-- ablation experiments of our GNN/LGNN architectures, in Sections 6.1 and 6.2\n-- fixed several typos.\n-- clarified assumptions of our landscape analysis (and mention that an open question is to study their validity in SBM models). (Section 5). \n-- clarified finite-sample effects in our computational-to-statistical gap results (Section 6.2).", "Possibly it would help the reader, in order to connect the different parts of the paper, if the authors say in Section 5 explicitly that specifying the region of parameters for which these assumptions are satisfied for the SBM (and other models) is an open question. \n\nOtherwise I find the suggested adjustments satisfactory, and maintain my original rating. \n", "Thank you very much for the constructive and high-quality comments. \n \n“…why this paper restricts itself to community detection, rather than general node-classification problems for broader audience”\n \nThe reason why we restrict ourselves to community detection problems is that it is a relatively well-studied setup, for which several algorithms have been proposed, and where computational and statistical thresholds are known in several cases. In addition, synthetic datasets can be easily generated for community detection. Therefore, we think it is a good testbed for comparing different algorithms. However, it is a very good point that GNN and LGNN can be applied to other node-wise classification problems as well. We will modify the text to highlight this point. \n \n“To make sure the actual gain of LGNN, this needs be done with some ablation studies.”\n \nThis is a valid suggestion. You correctly pointed out that GAT does not utilize the degree matrix directly, and so we are planning to perform ablation experiments by removing the degree matrix from GNN and LGNN. We did add spatial batch normalization steps to the GAT and MPNN models we used, and in the experiments we found that spatial batch normalization is crucial for the performance of the models including GNN, LGNN, GAT and MPNN. The reason for this is outlined at the end of Section 4.1, in which we assimilate the spatial normalization with removing the DC component of node features, which is aligned with the eigenvector of the adjacency matrix of leading eigenvalue. \n\n \n “The performance gain is not so significant compared to other simpler baselines, so the net contribution of the line-graph extension is unclear considering the above.”\n \nAlthough not all differences in the results are statistically significant (where we consider 2 sigma to be significant), we still think it is worth noting that in all of the experiments (binary SBM, 5-class dissociative SBM, GBM and SNAP data), LGNN achieved better averaged performance than all other algorithms, including GNN without line graph included. We also note that the complexity in operations/memory of using LGNN is the same as the alternative edge-learning methods we compared against, so these gains come essentially for free.\n \n\"The experimental section considers only a few number of classes (2-5) so that it’s does not show how it scales with a large number of classes\"\n \nThis is indeed an interesting direction for future research. We will highlight this current limitation and discuss possible routes.\n", "We sincerely thank the reviewer for his time and constructive comments. 
\n\nRegarding the reference of Krzakala et al., 2013, “Spectral redemption in clustering sparse networks”, you are correct that we should mention the fact that it introduced the non-backtracking operator for community detection. Thanks for this important remark; this is in fact a landmark paper central to our construction.\n \n“On the Computational-Statistical Gap Experiment”\nIt is correct that the computational and statistical thresholds for detection are defined asymptotically, and therefore our experimental results with finite-size graphs do not contradict those thresholds. We only hoped to demonstrate the good performance of the GNN and LGNN models in these scenarios. We hypothesize two possible scenarios: either the network is picking up finite-size effects that standard BP is unable to exploit, or the network actually improves asymptotic detection. We are currently exploring this question and hoping to provide some answers to it. In any case, we appreciate your comment, and will modify the statement of the implication of our experimental results in the paper.\n", "We very much appreciate the compliments as well as the comments on the several claims in the paper.\n \nBy “improving upon current computational thresholds in hard regimes,” indeed we meant to say that the results on finite-size graphs of our algorithms are better than those of belief propagation, which is known to reach the computational threshold of such problems. We will change the phrasing of the claim in the paper.\n \n“On the simplifications of the energy landscape analysis”:\nThe simplifications that we made in the theoretical analysis are actually discussed in detail in section 5, including using squared cosine distance in place of cross-entropy loss, using a single feature map, removing nonlinearities, replacing spatial batch normalization by projection onto the unit l_2 ball, as well as reparametrizing the network’s parameters according to the Krylov subspace generated by the set of operators. The assumptions are that the four quantities defined in Theorem 5.1 are finite. It is indeed a highly interesting question for which classes of graphs (for example, for what regimes of the stochastic block model) these assumptions are satisfied. We don’t have theoretical results for this question yet, although it will certainly be of great interest to future work.\n \nOn \"multilinear fully connected neural networks whose landscape is well understood (Kawaguchi, 2016).\" being “in my opinion grossly overstated”:\n\nThe reviewer is correct in that the optimization landscape of deep, nonlinear neural networks is still far from understood. We were referring to the case with no activation functions (multilinear), in which the situation is much simpler. We will modify the text to make sure there is no ambiguity. \n", "This paper introduces a novel graph conv neural network, dubbed LGNN, that extends the conventional GNN using the line graph of edge adjacencies and a non-backtracking operator. It has a form of learning directed edge features for message-passing. An energy landscape analysis of the LGNN is also provided under linear assumptions. The performance of LGNN is evaluated on the problem of community detection, comparing with some baseline methods. \n\nI appreciate the LGNN formulation as a reasonable and nice extension of GNN. The formulation is clearly written and properly discussed with message passing algorithms and other GNNs. 
Its potential hierarchical construction is also interesting, and maybe useful for large-scale graphs. In the course of reading this paper, however, I don’t find any clear reason why this paper restricts itself to community detection, rather than general node-classification problems for a broader audience. It would have been more interesting if it covered other classification datasets in its experiments. \n\nMost of the weak points of this paper lie in the experimental section. \n1. The experimental sections do not have proper ablation studies, e.g., as follows. \nAs commented in Sec 6.3, GAT may underperform due to the absence of the degree matrix, and this needs to be confirmed by running GAT with the degree term. And, as commented in footnote 4, the authors used spatial batch normalization to improve the performance of LGNN. But, it’s not clear how much gain it obtains for each experiment and, more importantly, whether they use the same spatial batch norm in other baselines. To make sure the actual gain of LGNN, this needs be done with some ablation studies. \n2. The performance gain is not so significant compared to other simpler baselines, so the net contribution of the line-graph extension is unclear considering the above. \n3. The experimental section considers only a small number of classes (2-5), so it does not show how the method scales with a large number of classes. In this sense, other benchmark datasets with more classes (e.g., the PPI datasets used in the GAT paper) would be better. \n\nI hope to get answers to these. ", "Graph Neural Networks (GNNs) are gaining traction and generating a lot of interest. In this work, the authors apply them to the community detection problem, and in particular to graphs generated from the stochastic block model. The main new contribution here is called the \"line graph neural network\", which operates directly over the edges of the graph, efficiently using the power of the \"non backtracking operator\" as a spectral method for such problems.\n\nTraining such GNNs on data generated from the stochastic block model and other graph generating models, the authors show that the resulting method can be competitive on both artificial and real datasets.\n\nThis is definitely an interesting idea, and a nice contribution to GNN, that should be of interest to ICML folks.\n\nReferences and citations are fine for the most part, except for one very odd exception concerning one of the main objects of the paper: the non-backtracking operator itself! While discussed in many places, no references whatsoever are given for its origin in detection problems. I believe this is due to (Krzakala et al, 2013) ---a paper cited for other reasons--- and given the importance of the non-backtracking operator for this paper, this should be acknowledged explicitly.\n\nPro: An interesting new idea for GNNs that leads to a more powerful method and opens an exciting direction of research. A nice theoretical analysis of the landscape of the graph. \n\nCon: The evidence provided in Table 1 is rather weak. The hard phase is defined in terms of computational complexity (polynomial vs exponential) and therefore requires tests on many different sizes.\n\n
This is rather convincingly demonstrated on the example of the stochastic block model, where the optimal performance is known (for 2 symmetric groups) or strongly conjectured (for more groups). The method is rather computationally demanding, and also somewhat unrealistic in the aspect that the training examples might not be available, but for a pioneering study of this kind this is well acceptable.\n\nDespite my overall very positive opinion, I found a couple of claims that are misleading and overall hurt the quality of the paper, and I would strongly suggest to the authors to adjust these claims:\n\n** The method is claimed to \"even improve upon current computational thresholds in hard regimes.\" This is misleading, because (as correctly stated in the body of the paper) the computational threshold to which the paper refers applies in the limit of large graph sizes, whereas the observed improvements are for finite sizes. It is shown here that for finite sizes the present method is better than belief propagation. But this clearly does not imply that it improves the conjectured computational thresholds that are asymptotic. At best this is an interesting hypothesis for future work, not more. \n\n** The energy landscape is analyzed \"under certain simplifications and assumptions\". Conclusions state \"an interesting transition from rugged to simple as the size of the graphs increase under appropriate concentration conditions.\" This is very vague. It would be great if the paper could offer an intuitive explanation of these simplifications and assumptions, something between these unclear remarks and the full statement of the theorem and its proof, which I did not find simple to understand. For instance, state the intuition about the region of parameters in which those results are true and in which they are not. \n\n** \"multilinear fully connected neural networks whose landscape is well understood (Kawaguchi, 2016).\" this is in my opinion grossly overstated. While surely that paper presents interesting results, they are set in a regime that leaves a lot still to be understood about the landscape of fully connected neural networks. It is restricted to specific activation functions, the results for non-linear networks rely on unjustified simplifications, the sample complexity trade-off is not considered, etc. \n\n\nMisprint: Page 2: cetain -> certain. \n" ]
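The asymptotic detection threshold discussed in these reviews can be stated concretely: for the binary symmetric SBM with edge probabilities a/n (within groups) and b/n (between groups), detection is conjectured to be feasible for polynomial-time algorithms iff the signal-to-noise ratio (a-b)^2 / (2(a+b)) exceeds 1. A small illustrative sketch (our own illustration, not the paper's code) for sampling such graphs and checking the SNR:

```python
import numpy as np

def sample_sbm(n, a, b, rng=None):
    """Sample a 2-community symmetric SBM with average degree parameters a and b."""
    if rng is None:
        rng = np.random.default_rng(0)
    labels = rng.integers(0, 2, size=n)
    same = labels[:, None] == labels[None, :]
    probs = np.where(same, a / n, b / n)
    A = (rng.random((n, n)) < probs).astype(float)
    A = np.triu(A, 1)
    A = A + A.T  # symmetrize, no self-loops
    return A, labels

def snr(a, b):
    """Kesten-Stigum SNR; detection is conjectured possible iff this exceeds 1."""
    return (a - b) ** 2 / (2 * (a + b))

A, y = sample_sbm(1000, a=5.5, b=1.5)
print(snr(5.5, 1.5))  # ~1.14 > 1: inside the (conjectured) detectable regime
```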
[ -1, -1, -1, -1, -1, 6, 9, 8 ]
[ -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "iclr_2019_H1g0Z3A9Fm", "HkgV9quwam", "SyxP9MFRhm", "rklkDauTn7", "r1x6e8mXhX", "iclr_2019_H1g0Z3A9Fm", "iclr_2019_H1g0Z3A9Fm", "iclr_2019_H1g0Z3A9Fm" ]
iclr_2019_H1g2NhC5KQ
Multiple-Attribute Text Rewriting
The dominant approach to unsupervised "style transfer" in text is based on the idea of learning a latent representation, which is independent of the attributes specifying its "style". In this paper, we show that this condition is not necessary and is not always met in practice, even with domain adversarial training that explicitly aims at learning such disentangled representations. We thus propose a new model that controls several factors of variation in textual data where this condition on disentanglement is replaced with a simpler mechanism based on back-translation. Our method allows control over multiple attributes, like gender, sentiment, product type, etc., and a more fine-grained control on the trade-off between content preservation and change of style with a pooling operator in the latent space. Our experiments demonstrate that the fully entangled model produces better generations, even when tested on new and more challenging benchmarks comprising reviews with multiple sentences and multiple attributes.
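The loss referred to as equation (1) in the reviews below combines a denoising auto-encoding term and a back-translation term. A schematic PyTorch-style sketch of one training step is given here; `encoder`, `decoder`, and `noise` are placeholder interfaces (assumptions for illustration, not the authors' actual API), and the generation step is detached from the graph, matching the no-backprop-through-back-translation choice discussed below:

```python
import torch

def training_step(x, attr_src, attr_tgt, encoder, decoder, noise,
                  lambda_ae=1.0, lambda_bt=1.0):
    """One step of the DAE + back-translation objective (schematic sketch).

    loss = lambda_ae * -log p(x | noise(x), attr_src)
         + lambda_bt * -log p(x | y~,       attr_src),
    where y~ = decode(encode(x), attr_tgt) is sampled without gradients.
    """
    # Denoising auto-encoding: reconstruct x from a corrupted copy and its attribute.
    z_noisy = encoder(noise(x))
    loss_ae = decoder.nll(z_noisy, attr_src, target=x)  # placeholder NLL interface

    # Back-translation: rewrite x with the target attribute (sampling is a discrete,
    # non-differentiable step, so no gradient flows through it), then reconstruct x
    # from the rewrite conditioned on the source attribute.
    with torch.no_grad():
        y_tilde = decoder.sample(encoder(x), attr_tgt)  # placeholder sampler
    loss_bt = decoder.nll(encoder(y_tilde), attr_src, target=x)

    return lambda_ae * loss_ae + lambda_bt * loss_bt
```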
accepted-poster-papers
The paper shows how techniques introduced in the context of unsupervised machine translation can be used to build a style transfer method. Pros: - The approach is simple and questions assumptions made by previous style transfer methods (specifically, they show that we do not need to specifically enforce disentanglement). - The evaluation is thorough and shows benefits of the proposed method - Multi-attribute style transfer is introduced and benchmarks are created - Given the success of unsupervised NMT, it makes a lot of sense to see if it can be applied to the style transfer problem Cons: - Technical novelty is limited - Some findings may be somewhat trivial (e.g., we already know that offline classifiers are stronger than adversarial ones; see Elazar and Goldberg, EMNLP 2018).
val
[ "HklLDIyapm", "HygNdVy66m", "r1eBLQ1pT7", "Skxizm1aaQ", "rylhcb16TX", "BkekG-1TaQ", "H1xk7NH9pX", "SkgGgYEt6m", "Hkgb4vEFpQ", "B1la-jxRhQ", "S1ll1KQ5hX", "H1lXb9okiX", "BkxKA_vRnX", "rylWfjH0hQ" ]
[ "author", "author", "author", "author", "author", "author", "public", "official_reviewer", "public", "official_reviewer", "official_reviewer", "official_reviewer", "public", "public" ]
[ "Thank you for your question. As AnonReviewer3 mentioned, simply copying input sentences wouldn’t satisfy the auto-encoding part of equation (1), as noise has been added to sentences. However, it would indeed satisfy the back-translation loss.\n\nThe idea of denoising here is that by removing random words from a sentence, we hope to remove words that are required to infer the style.\nFor instance, if the input sentence is: “this place is awful”\nand that the noised sentence becomes: “this place is <BLANK>”,\nthe model will be trained to recover “this place is awful”\nfrom: (“this place is <BLANK>”, ATTRIBUTE=NEGATIVE)\n\nSince there might be a lot of occurrences of “this place is amazing” in the dataset, the model will have to learn to consider the provided attribute in order to give a high probability to “awful” without penalizing the perplexity on the positive reviews.\n\nThe general argument is that the decoder needs to learn to use the attribute information whenever the input to the system is very noisy. This applies as well when inputs come from the back-translation process. Noisy inputs are produced in the back-translation process at the beginning of training when the model is insufficiently trained and does not generate well, and when generations are produced at high temperature. When using high softmax temperatures, the model tends to exhibit lower content preservation and higher attribute transfer since the generated samples are very noisy and it is therefore more difficult to recover the original input in the back-translation process while the decoder is forced to better leverage the attribute information.", "Thank you for your comment. We’ve added a comparison with Hu et al., 2017 in the revised paper using the code you mentioned. We found that this model obtained a good accuracy / BLEU score, but with pretty high perplexity. We’ve also added a reference to Yang et al in the related work section, thank you for pointing this out.", "“I think the last and most critical question is what the expected style-transferred rewriting look like. What level or kind of \"content-preserving\" do we look for?” - This is a great question, and a fundamental open research problem which, as far as we know, does not have a clear answer in existing literature. In our paper, we view this line of research as looking for better ways to generate rewrites of text along certain directions, and exactly the “kind” of what content is being preserved would ideally be one of the “knobs” that a system can control. The phrase “style transfer” is useful to refer to previous work that have adopted it from the image domain, but its framing is a bit narrow for the scope of rewriting types our work addresses. We believe that the trade-off between attribute control and content preservation should depend on two factors 1) the eventual use case of such a system (and style transfer is one use case, but another one would be to obtain more “interesting” and varied generations by augmenting a retrieval system with rewriting capabilities in a controllable way, and 2) the nature of attributes being controlled. Firstly, in contrast to previous work, we present means to control this inherent trade-off in the form of a latent-space pooling operator which can adapted to a particular use case. Secondly, the proposed method is fundamentally one that learns an unsupervised mapping between two or more domains of text, and the nature of the learned mapping will certainly depend on the nature of the domains. 
For example, it is often possible to map between the positive and negative domains by replacing a few words or small phrases and as a result, we can expect our models to preserve a lot of the input. By contrast, attributes such as one’s age aren’t as “local” and might require rewriting more content to successfully be altered. In that case, the content that is being preserved might be the general structure of the text, its sentiment, etc. To make the trade-off clearer, we have added a figure to the manuscript showing how it varies across training (Fig. 1 in the appendix); we also include illustrations of rewrites at different trade-off levels in Table 13.\n\n“Towards the end of Section 3, it says that \"without back-propagating through the back-translation generation process\". Can you elaborate on this and the reason behind this choice?” - Back-propagating through the back-translation process would require computing gradients through a sequence of discrete actions since generations are sampled from the decoder. While this may be achieved via policy-gradient methods such as REINFORCE or other approximations like the Gumbel-softmax trick, these have been known to perform very poorly in high dimensional action spaces due to high variance of the gradient estimates. This approach also has the disadvantage of biasing the model towards the degenerate solution of copying the input while ignoring attribute information entirely to satisfy the cycle-consistency objective, since the gradients flow through the entire cycle, which is what we observed in practice.\n\n“What does it mean by \"unknown words\" in \"... with 60k BPE codes, eliminating the presence of unknown words\" from Section 4?” - We meant that by using BPE, we can operate without replacing infrequent words with an <unk> token -- we do not have unknown words because these are decomposed into subword units that belong to the BPE dictionary.\n\n“what is the difference among the three \"Ours\" model?” - These models differ in the choice of hyperparameters (pooling kernel width and back-translation temperature) to demonstrate our model’s ability to control the content preservation vs attribute control trade-off. We have clarified this in the table caption.\n\n“the perplexity of \"Input Copy\" is very high compared with generated sentences.” - This is true and we believe that this is a consequence of the fact that there is more diversity in the input reviews than in typical generations from ours and other systems. This lack of diversity is typical for models decoding with beam search, which leads to \"mode seeking behavior\" wherein the output generations contain fragments that occur most frequently in the training set. This results in the pre-trained LM assigning high likelihoods to these samples.\n\n“what does the \"attention\" refer to?” - The row in Table 7 that corresponds to \"-attention\" refers to a model that was trained without an attention mechanism in a vanilla sequence-to-sequence fashion, using the last hidden state of the encoder by concatenating it to the word embeddings at every time step of the decoder.\n\n“In the supplementary material, there are lambda_BT and lambda_AE. But there is only one lambda in the loss function (1).” -Thank you for spotting this typo. 
We fixed this in the revised version of the paper.", "“Is there any difference between the two discriminators/classifiers?” - The discriminator and classifier have completely identical architectures: a 3-layer MLP with 128-dimensional hidden layers and LeakyReLU activations (now clarified in the model architecture paragraph in Section 3.3). We used two different terms to describe them since the classifier is fit post-hoc and doesn’t adapt to the encoder representations in a min-max fashion while the discriminator does. Moreover, the classifier is fully trained on the final encoder representations, while the discriminator is “chasing” them without fully training after each and every update of the encoder representations. This is indeed a bit confusing, and we have clarified this in the paper. While a discriminator trained more thoroughly at each iteration might disentangle representations more, our goal was not to look at whether disentangled representations can result in better performance, but whether current training practices actually result in disentangled representations (see responses below as well).\n\n“there should be enough signal from the discriminator to adapt the encoder in order to learn a more disentangled representation.” - This is a valid concern, but the experiments we ran suggest that this does not change the main observation. For instance, we also experimented with larger coefficients of adversarial training of 1.0 and 10.0 (as well as no adversarial training on the other end of the spectrum). While the attribute recovery accuracy drops a little at higher coefficients, it is still much higher than the discriminator accuracy during training. Also, models trained with high adversarial training coefficients have extremely high reconstruction and back-translation losses. Results are presented below; for better formatting, please refer to the revised version of our paper.\n\n Coef  | Disc (acc) | Clf (acc)\n 0     | 89.45%     | 93.8%\n 0.001 | 85.04%     | 92.6%\n 0.01  | 75.47%     | 91.3%\n 0.03  | 61.16%     | 93.5%\n 0.1   | 57.63%     | 94.5%\n 1.0   | 52.75%     | 86.1%\n 10    | 51.89%     | 85.2%\n\n“On the other hand, this does not answer the question if a \"true\" disentangled representation would give better performance. The inferior performance from the adversarially learned models could be because of the \"entangled\" representations.” - We agree completely. Our point is not that disentangled representations would not lead to good performance, but simply that disentanglement doesn't happen in practice with the kind of adversarially trained models typically used for this problem. We have made changes to the writing to make our stance clearer.\n\n“Request for ablation study on pooling and other architectural design choices.” - In addition to the averaged attribute embeddings, we also explored using a separate embedding for each attribute combination in the cross-product of all possible attribute values. We found this to have similar performance to our averaging method. We decided against concatenating embeddings because we use the attribute embedding as the first input token to the decoder, and using a concatenation would mean dividing the embedding size for each attribute value by the number of attributes, to maintain the overall embedding size. 
We settled on the attribute embedding averages because of their simplicity.\n\nWe have included a plot (Figure 1) that shows the evolution of attribute control (accuracy) and content preservation (BLEU) over the course of training as a function of the pooling kernel width. This demonstrates the latent space pooling operator’s ability to trade off self-BLEU and accuracy: larger kernel widths favor attribute control while smaller ones favor content preservation.\n\n“As long as the \"back-translation\" gives expected result, it seems not necessary to have \"meaningful\" or hard \"content-preserving\" latent representations when the generator is powerful enough.”\nWe observed that operating without a DAE objective didn’t work since the model needs to be bootstrapped to be capable of producing outputs that are at least somewhat close to the original input before the back-translation process can take over. At the beginning of training, it is nearly impossible for the model to be able to recover the original input starting from a nearly random sequence of words. But it’s indeed true that later on the back-translation loss is enough: in practice, we in fact removed the DAE objective by progressively decreasing lambda_AE from 1 to 0 over the first 300,000 iterations (cf. the Appendix), even though we didn’t observe a significant difference compared to simply fixing lambda_AE to 1.", "To make the architecture clearer, we updated the paper and added a paragraph describing the architecture of the model in the “Implementation” section. That paragraph was previously in the appendix -- we hope inserting it into the main body makes the paper easier to follow. \n\nAs for our additions to the model, the methodology we used is similar to previous approaches in unsupervised machine translation, but with two key differences.\n\nFirst, our approach can handle multiple attributes, while previous approaches usually only consider two different domains (one for the positive reviews, and one for the negative reviews, for instance) and cannot be easily extended to multiple domains as they typically require one encoder and one decoder per domain. Our approach can handle multiple attributes at the same time, including categorical attributes (e.g. Table 9 in the Appendix).\n\nAlso, we introduced a pooling operator and we found it to be critical in our experiments. The problem we observed is that without it, the model has a tendency to converge to the “copy mode”, where it simply copies words one by one, without taking the attribute input into consideration. We included a plot in the ablation study (Figure 1) that shows the evolution of the attribute transfer accuracy and the content preservation over training, for different pooling layer configurations. We can see that without the pooling operator, the model directly converges to the “copy mode”, with a self-BLEU close to 90 after only a few epochs. A pooling operator with a window of size 8 not only alleviates this issue, but it also provides intermediate models during training with different trade-offs between content preservation and attribute transfer.", "Thank you for your review. We are glad to see that you liked the paper and its contributions.", "Thanks for the comment! Yes, the denoising auto-encoder part could prevent direct copying. However, it still couldn't guarantee that the generated styles are correct. If the auto-encoder totally ignores the style embedding and only learns to reconstruct the input sentences (even with noise), equation 
(1) still converges. I hope the authors will discuss this issue. ", "The first part of the loss function (1), the denoising auto-encoder part, would help prevent direct copying. Since the input would have noise added, the simple copy operation cannot be learned directly. But I would still prefer that the authors provide some discussion and perhaps quantitative results regarding this.", "Thanks for the interesting work. The results look amazing. I have a question about the loss function (Equation 1). The loss function only consists of the reconstruction loss and another reconstruction-type loss related to the back-translation, and there is no adversarial loss or classification loss to regularize the generated styles. How do you guarantee that the generated sentences have correct styles?\n\nI can imagine that there is a local minimum of Equation 1, where the decoder completely ignores the input style embedding and directly copies the input sentence. In this case, no matter which style you used, the input and output are the same, and the loss is zero. I'm wondering how you prevent this situation from happening?\n\nLooking forward to seeing the answers!", "This paper presents a model for text rewriting for multiple attributes, for example gender and sentiment, or age and sentiment. The contributions and strengths of the paper are as follows. \n\n* Problem Definition\nAn important contribution is the new problem definition of multiple attributes for style transfer. While previous research has looked at single attributes for rewriting, \"sentiment\" for example, one could imagine controlling more than one attribute at a time. \n\n* Dataset Augmentation\nTo do the multiple attribute style transfer, they needed a dataset with multiple attributes. They augmented the Yelp review dataset from a previous related paper to add gender and restaurant category. They also worked with a microblog dataset labeled with gender, age group, and annoyed/relaxed. In addition to these attributes, they modified the dataset to include longer reviews and allow a larger vocabulary size. In all, this fuller dataset is more realistic than the previously released dataset.\n\n* Model\nThe model is basically a denoising autoencoder, a well-known, relatively simple model. However, instead of using an adversarial loss term as done in previous style transfer research, they use a back-translation term in the loss. A justification for this modeling choice is explained in detail, arguing that disentanglement (which is a target of adversarial loss) does not really happen and is not really needed. The results show that the new loss term results in improvements.\n\n* Human Evaluation\nIn addition to automatic evaluation for fluency (perplexity), content preservation (BLEU score), and attribute control (classification), they ask humans to judge the output for the three criteria. This seems standard for this type of task, but it is still a good contribution.\n\nOverall, this paper presents a simple approach to multi-attribute text rewriting. The positive contributions include a new task definition of controlling multiple attributes, an augmented dataset that is more appropriate for the new task, and a simple but effective model which produces improved results.
It allows control over multiple attributes, and a more fine-grained control on the trade-off between content preservation and change of style with a pooling operator in the latent space.\n\nOne of the major arguments is that it is unnecessary to have attribute-disentangled latent representations in order to have good style-transferring rewriting. In Table 2, the authors showed that \"a classifier that is separately trained on the resulting encoder representations has an easy time recovering the sentiment\" when the discriminator during training has been fooled. Is there any difference between the two discriminators/classifiers? If the post-fit classifier on top of the encoder representation can easily predict the correct sentiment, there should be enough signal from the discriminator to adapt the encoder in order to learn a more disentangled representation. On the other hand, this does not answer the question if a \"true\" disentangled representation would give better performance. The inferior performance from the adversarially learned models could be because of the \"entangled\" representations.\n\nAs the authors pointed out, the technical contributions are the pooling operator and the support for multiple attributes, since the loss function is the same as that in (Lample et al., 2018). These deserve a more elaborate explanation and quantitative comparisons. After all, the title of this work is \"multiple-attribute text rewriting\". For example, the performance comparison between the proposed averaged attribute embeddings and simple concatenation, and the effect of the introduced trade-off using temporal max-pooling.\n\nHow important is the denoising autoencoder loss in the loss function (1)? From the training details in the supplementary material, it seems like the autoencoder loss is used as \"initialization\" to some degree. As pointed out by the authors, the main task is to get fluent, attribute-targeted, and content-preserving rewriting. As long as the \"back-translation\" gives expected result, it seems not necessary to have \"meaningful\" or hard \"content-preserving\" latent representations when the generator is powerful enough.\n\nI think the last and most critical question is what the expected style-transferred rewriting look like. What level or kind of \"content-preserving\" do we look for? In Table 4, it shows that the BLEU between the input and the referenced human rewriting is only 30.6, which suggests that much of the content has been modified besides the positive/negative attribute. This can also be seen from the transferred examples. In Table 8, one of the Male examples is: \"good food. my wife and i always enjoy coming here for dinner. i recommend india garden.\" and the Female transferred rewriting reads \"good food. my husband and i always stop by here for lunch. i recommend the veggie burrito\". It's understandable that men and women prefer different types of food, even though this is speculation without further context. But the transfer from \"dinner\" to \"lunch\" is kind of questionable. Is it necessary to change content which is irrelevant to the attributes?\n\n\nOther issues:\n- Towards the end of Section 3, it says that \"without back-propagating through the back-translation generation process\". Can you elaborate on this and the reason behind this choice?\n- What does it mean by \"unknown words\" in \"... with 60k BPE codes, eliminating the presence of unknown words\" from Section 4?\n- There is no comparison with (Zhang et al., 
2018), which is the \"most relevant work\".\n- In Table 4, what is the difference among the three \"Ours\" model?\n- In Table 4, the perplexity of \"Input Copy\" is very high compared with generated sentences.\n- In Table 7, what does the \"attention\" refer to?\n- In the supplementary material, there are lambda_BT and lambda_AE. But there is only one lambda in the loss function (1).\n- Please unify the citation style.", "The paper proposes \"style transfer\" approaches for text rewriting that allow for controllable attributes. For example, given one piece of text (and the conditional attributes associated with the user who generated it, such as their age and gender), these attributes can be changed so as to generate equivalent text in a different style.\n\nThis is an interesting application, and somewhat different from \"style transfer\" approaches that I've seen elsewhere. That being said I'm not particularly expert in the use of such techniques for text data.\n\nThe architectural details provided in the paper are quite thin. Other than the starting point, which as I understand adapts machine translation techniques based on denoising autoencoders, the modifications used to apply the technique to the specific datasets used here were hard to follow: basically just a few sentences described at a high level. Maybe to somebody more familiar with these techniques will understand these modifications fully, but to me it was hard to follow whether something methodologically significant had been added to the model, or whether the technique was just a few straightforward modifications to an existing method to adapt it to the task. I'll defer to others for comments on this aspect.\n\nOther than that the example results shown are quite compelling (both qualitatively and quantitatively), and the experiments are fairly detailed.\n", "There are also some relevant works that are missing in the references such as:\nUnsupervised Text Style Transfer using Language Models as Discriminators by Yang etc al.", "Thanks for the interesting work. It'd be nice to see an empirical comparison of this work to (Hu el al., 2017) which has released code here: https://github.com/asyml/texar/tree/master/examples/text_style_transfer. Based on my experience, (Hu el al., 2017) is usually a strong baseline on many datasets." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 6, -1, -1 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, -1, -1 ]
[ "H1xk7NH9pX", "rylWfjH0hQ", "S1ll1KQ5hX", "S1ll1KQ5hX", "H1lXb9okiX", "B1la-jxRhQ", "SkgGgYEt6m", "Hkgb4vEFpQ", "iclr_2019_H1g2NhC5KQ", "iclr_2019_H1g2NhC5KQ", "iclr_2019_H1g2NhC5KQ", "iclr_2019_H1g2NhC5KQ", "rylWfjH0hQ", "iclr_2019_H1g2NhC5KQ" ]
iclr_2019_H1g4k309F7
Wasserstein Barycenter Model Ensembling
In this paper we propose to perform model ensembling in a multiclass or a multilabel learning setting using Wasserstein (W.) barycenters. Optimal transport metrics, such as the Wasserstein distance, allow incorporating semantic side information such as word embeddings. Using W. barycenters to find the consensus between models allows us to balance confidence and semantics in finding the agreement between the models. We show applications of Wasserstein ensembling in attribute-based classification, multilabel learning and image captioning generation. These results show that the W. ensembling is a viable alternative to the basic geometric or arithmetic mean ensembling.
accepted-poster-papers
The paper proposes a novel way to ensemble multi-class or multi-label models based on a Wasserstein barycenter approach. The approach is theoretically justified and obtains good results. Reviewers were concerned with time complexity, and the authors provided a clear breakdown of the complexity. Overall, all reviewers were positive in their scores, and I recommend accepting the paper.
train
[ "HJl9L6zjpX", "HJgJphGop7", "Syg7gifsam", "Byxz65zja7", "SyxbnDMoaX", "rkgY3_xa2X", "H1xL7X6i3X", "HklQY2Q53X" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for their positive feedback and their questions that we answer in the following:\n\n1) REVIEWER:  Please define the acronyms before using them, for instance DNN (in first page, 4th line), KL (also first page), NLP, etc. \n\nAUTHORS: Thanks, we have implemented that in the revision. \n\n  2) REVIEWER: In practice, when ensembling different methods, the geometric and arithmetic mean are not computed with equal weights ($\\lambda_l$ in Definition 1). Instead, these weights are computed as the optimal values for a given small dev-set. It would be interesting to see how well does the method compare to these optimal weighted averages, and also if it improves is we also compute the optimal $\\lambda_l$ for the Wasserstein barycenter. \n\nAUTHORS: Thanks for the suggestion. We added Appendix B.2 to experiment with this in the multi-label setting, where we set $lambda_l = mAP_{l}/ sum_l mAP_{l}$, i.e lambda_l is proportional to the accuracy of the individual model on the validation set. This is an intuitive approach as one would like to trust more models with higher accuracies on the development set.  This indeed helps all ensembling techniques and maintains the advantage for W. Barycenter on alternative such as arithmetic and geometric as can be seen in Table 11 in Appendix B.2. \n\n3) REVIEWER: How computationally expensive are these methods?  \n\nAUTHORS: Please refer to the reply to AnonReviewer2 above (first point, or to the revised version Appendix E ). \n\n4) REVIEWER: So the output of the ensembling method is a point in the word embedding space, but we know that not all points in this space have an associated word, thus, how are the words chosen?\n\nAUTHORS: The output of Wasserstein barycenter is still a probability vector (histogram) and not a word embedding. The inputs to W. barycenters are histograms of the models we want to ensemble, we use the geometry of word embeddings to define the Wasserstein distance, the output of the W. ensembling is still an histogram.  The word is chosen as the one with maximal probability for classification and for multi-label prediction we take the top-k (given a varying threshold as defined by average mean precision, or area under the curve).  We use beam search or random beam search for captioning. \n \n5) REVIEWER: The image captioning example of Fig.4 is very interesting (although the original image should be added to understand better the different results), can you show also some negative examples? That is to say, when is the Wassertein method is failing but not the other methods.\n\n  AUTHORS: We show the image now  thanks for pointing that out.  We have added in Figure 5 (Appendix A), histogram views of how W. Barycenter changes with epsilon for the same example. As epsilon goes to zero we recover geometric mean, as epsilon goes to infinity (entropic regularized ) W.  barycenter becomes close to a uniform distribution, as the probability mass is spread equally across all words. Intermediate values of epsilon allow for a better transfer of mass between semantically related bins, where the radius of neighborhood is defined by epsilon.   \n\n6) REVIEWER: The paper is not easy to read. Ensembling methods are normally applied to the output of a classifier or a regression method, so it is not evident to understand why the 'underlying geometry' is in the word embedding space (page 2 after the Definition 1). I think this is explained in the second paragraph of the paper, but that paragraph is really not clear. 
I assume that it makes sense to use the word-embedding space for image caption generation or other ML tasks where the output is a word, but I am not sure how this is used in other cases.\n\nAUTHORS: In the previous version of the paper, we presented an example using word embeddings for building K as a means of giving some intuition about the Wasserstein barycenter. Since it may have been confusing, the revised version clarifies that word embeddings are just an example of how to build the cost matrix C (and ultimately K), and that one can define C in different ways depending on the task at hand. For instance, for two semantic classes i, j with respective word embeddings x_i, x_j, $C_{ij}=\\|x_i-x_j\\|^2$ and $K_{ij}=e^{-C_{ij}/\\epsilon}$. For other tasks, one may just define K_{ij} through a similarity graph between semantic classes that can be inferred (for instance) from a knowledge graph, from WordNet, or from co-occurrences, etc. In Section 3 we now have a paragraph, \"Wasserstein Ensembling in Practice\", that discusses machine learning tasks and their corresponding kernel matrices. Table 12 in the Appendix expands on more tasks and their corresponding K.\n", "We thank the reviewer for their positive feedback and address their main concerns:\n\n1) REVIEWER: Can the authors comment on the time complexity of the proposed framework in comparison with its baseline methods?\n\nAUTHORS: We added Appendix E to discuss the computational aspects and time complexities. Appendix E.1 discusses the time complexity of a vanilla implementation and how to improve it.\n\nComputing the Wasserstein barycenter of histograms of size N, for m models, using Maxiter iterations, requires O(Maxiter m N^2), since we have m matrix-vector products that cost O(N^2) each. We found Maxiter = 5 was enough for convergence. As most of the complexity comes from m and N, we will ignore the number of iterations in the discussion.\n\nAlg. 1 and Alg. 2 are simply parallelizable using m machines, which would reduce the cost to O(N^2).\nNote that Kv can be sped up further using a low-rank approximation of K. K is indeed a kernel matrix that has some low-rank structure that can be exploited, $K=\\Phi \\Phi^{\\top}$ where $\\Phi \\in R^{N \\times k}$, $k \\ll N$ (k = polylog(N)), and hence the product can be computed in O(Nk). These improvements have been discussed in a recent paper [1]. Hence, using parallelization and low-rank approximation, the (entropic-regularized) W. barycenter can be computed in near-linear time in N.\nNote that using a GPU implementation and batching would further improve the overall time complexity, as discussed in [2].\n\nNote that we have implemented the barycenter in PyTorch to enable GPU computation, since most of the time complexity comes from matrix-vector products. Appendix E.2 gives timing experiments on the multi-label task for a vanilla implementation, where it is compared to arithmetic and geometric means (we did not use parallelization, batching, nor low-rank approximation). We get an average of ~4ms/image for computing the barycenter using a total of m = 8 base models. This is not a big overhead, and we are confident that by using the tricks mentioned above (parallelization, batching and low-rank approximation) this overhead can be further reduced. However, this is beyond the scope of this paper and it is an active area of research on its own.\n\n[1] J. Altschuler, F. Bach, A. Rudi, and J. Weed. Approximating the Quadratic Transportation Metric in\nNear-Linear Time. ArXiv e-prints, 2018. 
\n[2] Interpolating between Optimal Transport and MMD using Sinkhorn Divergence. Feydy et al.\n\n2) REVIEWER: Moreover, is it possible to evaluate the uncertainty of predictions with the proposed framework?\n\nAUTHORS: Uncertainty estimation can also be done with the Wasserstein barycenter, using bootstrapping. For instance, for image classification we can take K crops and feed them to the m models, obtain K values of the Wasserstein barycenter, and then derive uncertainty estimates from them. Another approach: instead of ensembling deterministic networks, one can ensemble stochastic networks (that have noise in their units) and then report means and variances of the Wasserstein barycenters.\nAnother elegant way of modeling uncertainty in Wasserstein barycenters is by using Gaussian processes or deep Gaussian processes [3, and references therein]. The Wasserstein barycenter is well defined between Gaussian processes and is itself a Gaussian process [4]. In summary, one can train individual models that are (deep) Gaussian processes and ensemble them using the Wasserstein barycenter. The uncertainty modeling will carry over to the ensemble since it is also a Gaussian process.\n\n[3] Uncertainty in deep learning. Thesis, Yarin Gal.\n[4] Learning from uncertain curves: The 2-Wasserstein metric for Gaussian processes. Mallasto et al. https://papers.nips.cc/paper/7149-learning-from-uncertain-curves-the-2-wasserstein-metric-for-gaussian-processes.pdf\n\n3) REVIEWER: Relation to Frogner, Zhang et al.\n\nAUTHORS: We have indeed cited Frogner et al. Their work was the first to propose end-to-end learning of deep networks with a Wasserstein loss -- nevertheless, their algorithm is not an ensembling technique. Their algorithm is a training algorithm, ours is a test-time algorithm: Frogner et al. learn a single network trained with the Wasserstein loss in an end-to-end fashion. Our method allows the ensembling of many models even if they were defined on different output domains (see for example Section 1, where we ensemble attributes to categories) -- this is not achievable in Frogner et al.'s framework, and is not one of the purposes of their method. Note that our networks are pretrained, and the ensembling is not limited to deep networks as in Frogner et al.; we can ensemble random forests, for instance, with Wasserstein barycenters.\n", "We thank the reviewer for their positive and encouraging feedback and address their main concerns here.\n\n1) REVIEWER: Experimental results are convincing, although sometimes poorly presented. Figures are presented in a sloppy way, … discussion on what K should be … beyond what's proposed in p.8.\n\nAUTHORS: In the revised paper, we greatly improved the presentation of the experimental results (figures, tables, etc.). We also clarified how K can be designed for several machine learning tasks that can benefit from W. ensembling. In Section 3, we now have a paragraph \"Wasserstein Ensembling in Practice\" that discusses this topic. Table 1 discusses multi-class and multi-label tasks and their corresponding K (a square matrix that can be defined through word embeddings, a knowledge graph, a confusion/co-occurrence matrix, a graph constructed based on synonyms or WordNet, scene graphs, etc.), where the histograms are defined on the same domain (source = target). 
We also discuss in Table 1 the \"Attributes to Categories\" case that unbalanced barycenters allow, when the source and target domains are different (K rectangular). 
As suggested by the reviewer, in order to go beyond the tasks discussed in the paper, we added two additional tasks in Table 12 in the Appendix: vocabulary expansion (where the base models are NLP models defined on a vocabulary and we would like the ensemble to be defined on a larger vocabulary; K is defined through word embeddings), and multilingual fusion and translation (the base models are NLP models defined on different languages, and the ensemble is defined on yet another language; for this case we need multilingual word embeddings [1] to build K).\n\n[1] Massively multilingual word embeddings. Ammar et al.\n\n2) REVIEWER: in remark 1 you mention that \"as epsilon->0 the solution of Benamou et al. converges to a geometric mean. I would have thought that, on the contrary, the algorithm would have converged to the solution of the true W barycenter…. could you please develop on that in a future version?\"\n\nAUTHORS: We added Appendix D to discuss this in detail. We now give a full proof in Lemma 1 in Appendix D.1 addressing why Algorithm 1 (Benamou et al.) for balanced barycenters converges to the geometric mean when epsilon goes to zero. In summary, Algorithm 1 is a fixed-point algorithm, and when K is the identity it does not recover the non-regularized balanced Wasserstein barycenter, since no side information is used and all bins are independent. When epsilon goes to zero, K approaches the identity, and the fixed-point algorithm degenerates to the geometric mean. This does not say that the entropic regularization as epsilon goes to zero does not converge to the true barycenter -- this was indeed proved in [3] (gamma convergence), without giving an algorithm that achieves this convergence. We just say that the fixed point of Alg. 1 does not give such a guarantee.\n\nNevertheless, when epsilon is positive, it is known that the Sinkhorn divergence interpolates between the MMD (maximum mean discrepancy) distance and the Wasserstein distance [2], and hence one can see the obtained barycenter as an interpolation between the MMD barycenter and the (marginal-regularized) true barycenter (as discussed in Appendix D.1). As we emphasized in the paper, a non-zero epsilon is important to control the entropy of the barycenter.\n\n[2] Interpolating between Optimal Transport and MMD using Sinkhorn Divergence. Feydy et al.\n[3] Convergence of entropic schemes for optimal transport and gradient flows. Carlier et al.\n", "\n3) REVIEWER: Is this valid only because \\lambda here is finite?\n\nAUTHORS: Note that this proof of convergence of the fixed-point algorithm to the geometric mean holds only for balanced barycenters (Alg. 1) and not for unbalanced ones (Alg. 2), so lambda is not in play here. Appendix D.2 addresses the case of unbalanced barycenters, where the effect of lambda is more subtle (interpolation between the Hellinger distance and optimal transport (Chizat et al.)).\n\n4) REVIEWER: On the contrary, what would happen when eps -> infty then, and K = ones?\n\nAUTHORS: When epsilon goes to infinity in the balanced barycenter case, it is known that the Sinkhorn divergence (entropic-regularized Wasserstein, normalized with respect to the two measures) converges to the MMD distance [2 and citations therein]. K = ones means that all semantic classes become related (the opposite of epsilon -> 0, where all bins are independent), which results in a diffuse, large spread in the barycenter histogram. We added Fig. 5 in Appendix A to show a histogram view of how the W. 
barycenter changes with epsilon (from the geometric mean for eps -> 0, to an almost uniform distribution for eps -> infinity; intermediate values allow semantic sharing of mass in the neighborhood defined by eps).\n\n5) REVIEWER: GW is poor naming ---> Thanks, we changed it to W_{unb}.\n\\lambda and \\lambda_{\\ell} clash ---> While we agree, we decided to keep this notation to stay consistent with the notation in Chizat et al.\n", "We thank the reviewers for their positive and encouraging feedback. We have uploaded a revised version of the paper to address the questions of the reviewers, where we clarified and added additional experiments at the request of the reviewers:\n\n1) Reviewers 1 and 3: We have clarified that for designing K one is not limited to building it through word embeddings, as explained in the added Table 1 in the paper and in Table 12 in the appendix; for a given machine learning task one can design K from the side information at hand (with an emphasis that the W. barycenter allows ensembling histograms defined on different domains).\n\n2) We added in Appendix B.2, at the request of reviewer 1, an experiment to assess W. barycenter performance with non-uniform weights, which indeed improves accuracy while the advantage of the W. barycenter is maintained.\n\n3) We added Appendix D, at the request of reviewer 3, to explain how Algorithm 1 behaves as epsilon approaches zero. We now provide a full proof of convergence of Alg. 1 of Benamou et al. to the geometric mean as K goes to the identity.\n\n4) We added Appendix E, at the request of reviewers 1 and 2, about the time complexity of the W. barycenter, and discussed how to improve it using GPUs, batching, parallelization, and low-rank approximations (Appendix E.1). We also reported timing experiments on multi-label ensembling (Appendix E.2).\n\nWe hope those additions and clarifications in the new version address the concerns of the reviewers and improve their overall assessment of the paper. ", "This paper has a simple message. When predicting families (weight vectors) of labels, it makes sense to use an ensemble of predictors and average them using a Wasserstein barycenter, where the ground metric is defined using some a priori knowledge on the labels, here usually distances between word embeddings or more elaborate metrics (or kernels K, as described in p.8). Such barycenters can be easily computed using an algorithm proposed by Benamou et al. 18. When these histograms are not normalized (e.g. their count vectors do not sum to the same quantity), then, as shown by Frogner, Zhang et al., an alternative penalized formulation of OT can be studied, solved numerically with a modified Sinkhorn algorithm, which also leads to a simple W. barycenter algorithm, as shown by Chizat et al.\n\nThe paper starts with a lot of reminders, shows some simple theoretical/stability results on barycenters, underlines the role of the regularization parameter, and then spends a few pages showing that this idea does, indeed, work well to carry out ensembles of multi-tag classifiers.\n\nThe paper is very simple from a methodological point of view. Experimental results are convincing, although sometimes poorly presented. Figures are presented in a sloppy way, and a clearer discussion on what K should be used would be welcome, beyond what's proposed in p.8. 
For these reasons I am positive this result should be published, but I'd expect an additional clarification effort from the authors to reach a publishable draft.\n\nMinor comments:\n- In remark 1 you mention that as epsilon->0 the solution of Benamou et al. converges to a geometric mean. I would have thought that, on the contrary, the algorithm would have converged to the solution of the true (marginal-regularized) W barycenter. Hence the result you propose is a bit counter-intuitive; could you please develop on that in a future version? Is this valid only because \\lambda here is finite? On the contrary, what would happen when eps -> infty then, and K = ones?\n\n- GW for generalized Wasserstein is poor naming. GW usually stands for Gromov-Wasserstein (see Memoli's work).\n\n- \\lambda and \\lambda_l somewhat clash...", "The paper proposes a framework based on Wasserstein barycenters to ensemble learning models for a multiclass or a multilabel learning problem. The paper theoretically shows that model ensembling using Wasserstein barycenters preserves accuracy and has a higher entropy than the individual models. Experimental results in the context of attribute-based classification, multilabel learning, and image captioning generation have shown the effectiveness of Wasserstein-based ensembling in comparison to geometric or arithmetic mean ensembling.\n\nThe paper is well-written and the experiments demonstrate comparable results. However, the idea of Wasserstein barycenter based ensembling comes at the cost of time complexity, since computation of the Wasserstein barycenter is more costly than the geometric or arithmetic mean. An ensemble is designed to provide lower test error, but also to estimate the uncertainty given by the predictions from different models. However, it is not clear how Wasserstein barycenter based ensembling can provide such an uncertainty estimate. \n\nCan the authors comment on the time complexity of the proposed framework in comparison with its baseline methods? Moreover, is it possible to evaluate the uncertainty of predictions with the proposed framework?\n\nIn the context of multilabel learning, Frogner et al. (2015, https://arxiv.org/abs/1506.05439) suggested using the Wasserstein distance as a loss function. In their model, they also leverage side information from the word embeddings of tag labels. Is the proposed ensembling framework comparable with theirs?\n\nIn short, this paper can provide a useful addition to the literature on model ensembling. Though the proposed framework does improve the performance of predictions in several applications, I am still not fully convinced about the time complexity introduced when computing Wasserstein barycenters.", "Paper overview: Model ensembling techniques aim at improving machine learning model prediction results by i) executing several different algorithms on the same task and ii) resolving the discrepancies in the responses of all the algorithms, for each task. Some common methods are voting and averaging (arithmetic or geometric average) over the results provided by the different algorithms. \nSince averaging amounts to computing barycenters with different distance functions, this paper proposes to use the Wasserstein barycenter instead of the L2 barycenter (arithmetic average) or the extended KL barycenter (geometric mean). \n\nRemarks, typos and experiments that would be interesting to add: \n 1) Please define the acronyms before using them, for instance DNN (first page, 4th line), KL (also first page), etc. 
\n 2) In practice, when ensembling different methods, the geometric and arithmetic means are not computed with equal weights ($\\lambda_l$ in Definition 1). Instead, these weights are computed as the optimal values for a given small dev-set. It would be interesting to see how well the method compares to these optimal weighted averages, and also if it improves if we also compute the optimal $\\lambda_l$ for the Wasserstein barycenter. \n 3) How computationally expensive are these methods? \n 4) So the output of the ensembling method is a point in the word embedding space, but we know that not all points in this space have an associated word; thus, how are the words chosen?\n 5) The image captioning example of Fig. 4 is very interesting (although the original image should be added to understand the different results better); can you also show some negative examples? That is to say, when does the Wasserstein method fail but not the other methods?\n\n\nPoints in favor: \n 1) Better results: The proposed model is not only theoretically interesting, but it also improves on the arithmetic and geometric mean baselines.\n 2) Interesting theoretical and practical properties: semantic accuracy, diversity and robustness (see Proposition 1). \n\nPoints against: The paper is not easy to read. Ensembling methods are normally applied to the output of a classifier or a regression method, so it is not evident why the 'underlying geometry' is in the word embedding space (page 2, after Definition 1). I think this is explained in the second paragraph of the paper, but that paragraph is really not clear. I assume that it makes sense to use the word-embedding space for image caption generation or other ML tasks where the output is a word, but I am not sure how this is used in other cases. \n\nConclusion: The paper proposes a new method for model ensembling by rethinking other popular methods such as the arithmetic and geometric average. It also shows that it improves on the current methods. Therefore, I think it presents enough novelty to be accepted at the conference.
[ -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "HklQY2Q53X", "H1xL7X6i3X", "rkgY3_xa2X", "rkgY3_xa2X", "iclr_2019_H1g4k309F7", "iclr_2019_H1g4k309F7", "iclr_2019_H1g4k309F7", "iclr_2019_H1g4k309F7" ]
iclr_2019_H1g6osRcFQ
Policy Transfer with Strategy Optimization
Computer simulation provides an automatic and safe way for training robotic control policies to achieve complex tasks such as locomotion. However, a policy trained in simulation usually does not transfer directly to the real hardware due to the differences between the two environments. Transfer learning using domain randomization is a promising approach, but it usually assumes that the target environment is close to the distribution of the training environments, thus relying heavily on accurate system identification. In this paper, we present a different approach that leverages domain randomization for transferring control policies to unknown environments. The key idea is that, instead of learning a single policy in the simulation, we simultaneously learn a family of policies that exhibit different behaviors. When tested in the target environment, we directly search for the best policy in the family based on the task performance, without the need to identify the dynamic parameters. We evaluate our method on five simulated robotic control problems with different discrepancies in the training and testing environment and demonstrate that our method can overcome larger modeling errors compared to training a robust policy or an adaptive policy.
accepted-poster-papers
The paper presents quite a simple idea to transfer a policy between domains by conditioning the original learned policy on the physical parameters used in dynamics randomization. CMA-ES then finds the best parameters in the target domain. Importantly, it is shown to work well for examples where the dynamics randomization parameters do not span the parameters that are actually changed, as is likely common in reality-gap problems. A weakness is the size of the contribution beyond UPOSI (Yu et al. 2017), the closest work. The authors now explicitly benchmark against this, with (generally) positive results. AC: It would be ideal to see that the method does truly help span the reality gap, by seeing working sim2real transfer. Overall, the reviewers and AC are in agreement that this is a good idea that is likely to have impact. Its fundamental simplicity means that it can also readily be used as a benchmark in future sim2real work. The AC recommends it be considered for oral presentation based on its simplicity, the importance of the sim2real problem, and particularly if it can be demonstrated to work well on actual sim2real transfer tasks (not yet shown in the current results).
train
[ "HJlhOgWZp7", "BJeoB_Dn07", "Hkg6Xry5h7", "BkxgDQGjC7", "SJxgQshk0m", "SyevB3PopX", "SyeODTvoTm", "SkgOw4vjam", "SyefcQPoTQ", "rkel69Xq3X" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper introduces a simple technique to transfer policies between domains by learning a policy that's parametrized by domain randomization parameters. During transfer CMA-ES is used to find the best parameters for the target domain.\n\nQuestions/remarks:\n- If I understand correctly, a rollout of a policy during transfer (i.e. an episode) contains 2000 samples. Hence, 50000 samples in the target environment corresponds to 25 episodes. Is this correct? Does fine-tuning essentially consists of performing 25 rollouts in the target domain?\n- It seems that for some tasks, there is almost no finetuning happening whereas SO-CMA still outperforms domain randomization (Robust) significantly? How can this be explained? For example, the quadruped task (Fig 6a) has no improvement for the SO-CMA method, yet it is significantly better than the domain randomization result. It seems that during the first episodes of finetuning, domain randomization and SO-CMA should be nearly equivalent (since CMA-ES will be randomly picking parameters mu). A very similar situation can be seen in Fig 5a\n- Following up on my previous question: fig 4a does show the expected behavior (domain randomization and SO-CMA starting around the same value). However, in this case your method does not outperform domain randomization. Any idea as to why this is the case?\n- It's difficult to understand how good/bad the performance of the various methods are without an oracle for comparison (i.e. just run PPO in the target environment). \n- It seems that the algorithm in this work is almost identical to Hardware Conditioned Policies for Multi-Robot (Tao Chen et al. NIPS 2018), specifically section 5.2 in that paper seems very similar. Please comment.\n\nMinor remarks:\n- fig 5.a y-axis starts at 500 instead of 0.\n- The reward for halfcheetah seems low, but this might be due to the custom setup.", "Thanks for your detailed reply and revision. I think this strengthens this paper and I'd happily kick my rating up a notch to a 6 or 7. I'm not sure if I can still change my official rating, but I'm assuming the meta-reviewer will review this.\n\nIn summary, I like the simplicity of this paper. This approach seems to perform on par with or better than more complicated meta-learning setups and is worthy of publication (it could at least serve as a good benchmark).", "The authors propose a policy transfer scheme which in the source domain simultaneously learns a family of policies parameterised by dynamics parameters and then employs an optimisation framework to select appropriate dynamics parameters based on samples from the target domain. The approach is evaluated on a number of simulated transfer tasks (either transferring from DART to MuJoCo or by introducing deliberate model inaccuracies).\n\nThis is interesting work in the context of system identification for policy transfer with an elaborate experimental evaluation. The policy learning part seems largely similar to that employed by Yu et al. 2017 (as acknowledged by the authors). This makes the principal contribution, in the eyes of this reviewer, the optimisation step conducted based on rollouts in the target domain. While the notion of optimising over the space of dynamics parameters is intuitive the question arises whether this optimisation step makes for a substantive contribution over the original work. 
This point is not really addressed in the experimental evaluation, as benchmarking is performed against a robust and an adaptive policy but not explicitly against the (arguably) most closely related work in Yu et al. It could be argued, of course, that Yu et al. essentially use adaptive policy generation, but they do explicitly learn dynamics parameters based on the recent history of actions and observations. An explicit comparison therefore seems appropriate (or alternatively a discussion of why it is not required).\n\nAnother point which would, in my view, add significant value is an explicit discussion of the baseline performances observed in the various experiments. For example, in the hopper experiment (Sec 5.2) the authors state that the baseline methods were not able to adapt to the new environment. Real value could be derived here if the authors could elaborate on why this is the case. The same applies in Sec 5.3-5.6. \n\n(I would add here, as an aside, that I thought the notion in Sec 5.6 of framing the learning of policies for handling deformable objects as a transfer task based on rigid objects was a nice idea. And not one this reviewer has come across before - though this could merely be a reflection of limited familiarity with the literature.)\n\nThe experimental evaluation seems thorough, with the above caveat of a seemingly missing benchmark in Yu et al. I would also encourage the authors to add more detail in the experimental section in the main text, specifically with regard to the number of trials run to arrive at the variances in the figures, as well as what the shaded areas actually signify. \n\nA minor point: the J in Eq. 1 seems (to me at least) undefined. I suspect that it signifies the expected cumulative reward and was meant to be introduced in Sec 3, where the J may have been dropped from the latex?\n\nIf the above points were addressed I think this would make a valuable and interesting contribution to the ICLR community. As it stands I believe it is marginally below the acceptance threshold.\n\n[ADDENDUM: given the author feedback and the addition of the benchmark experiments requested, I have updated my score.]\n\n\nPros:\n———\n- interesting work\n- accessible\n- effective\n- thorough evaluation (though potentially missing a key benchmark)\n\nCons:\n———\n- potentially missing a key benchmark (and therefore seems somewhat incremental)\n- only limited insight offered by the authors in the discussion of the experimental results\n- some more details needed with regard to the experimental setup\n", "The revisions make the paper quite a bit stronger and more complete. I'm maintaining my rating of 7-Accept.", "The detailed reviews are appreciated, as are the author's detailed replies. \nAs a next step, could the reviewers please advise as to whether the replies have influenced your evaluation \nand your score for the paper? Thank you in advance!\nNote: to see the revision differences, select \"Show Revisions\" on the review page, and then select the check-boxes for the two versions you wish to compare. \n\n-- area chair", "We thank the reviewer for the insightful comments! Below we discuss the questions and comments by the reviewer. We have also revised the text to address the comments.\n\n1. Rollout number during fine-tuning\nDuring the fine-tuning stage, the policies interact with the target environment for 50,000 steps (corresponding to the results in Figures 2, 4(a), 5(a) and 6). 
In the case of fine-tuning Robust, Hist and UPOSI, we run PPO with 2,000 samples at each iteration, resulting in 50 iterations of PPO. \n\nIn terms of the length of each rollout or trajectory, it has a maximum of 1,000 steps, while the actual rollouts might be shorter due to early terminations. \n\nIn our experiments, the fine-tuning phase in general takes between 100 and 300 rollouts, depending on the task. We have also revised the related text (Appendix B.4) to make this clearer.\n\n2. SO-CMA sometimes performs well without fine-tuning\nThe reviewer’s concern about SO-CMA sometimes achieving good performance with only one iteration is well taken. Upon further investigation, we think this is partly because the initial sampling distribution for CMA is chosen to be a Gaussian with the center of the mu domain as mean and a stdev of 0.25 (we use a mu domain of length 1 in each dimension). For the quadruped example, it turns out that the optimal solution of mu is close to the center of the mu domain, and thus even in the first iteration of CMA, it might draw a sample that performs well. To validate this, we re-ran SO-CMA for the quadruped and the halfcheetah with the CMA initial distribution being a Gaussian with a randomly sampled mean and a stdev of 0.5. This results in a more reasonable performance curve (as shown in Figures 5(a) and 6(a)), where the initial guess of CMA is sub-optimal and through the iterative optimization process it finds better solutions.\n\n3. Performance of Robust in the walker2d example\nFor the walker2d example, fine-tuning a robust policy indeed achieved comparable performance to SO-CMA. We hypothesize that this is because Robust was able to discover a robust bipedal running gait that works near-optimally for a large range of different dynamic parameters mu. However, when the optimal controller is more sensitive to mu, Robust policies may learn to use over-conservative strategies, leading to sub-optimal performance (e.g. in HalfCheetah) or failure to perform the task (e.g. in Hopper).\n\nWe do note that the fine-tuning process of the baseline methods relies on having a dense reward signal. In practice, one may only have access to a sparse reward signal in the target environment. Our method, using CMA, naturally handles sparse rewards, and thus the performance gap between our method and the baseline methods will likely grow if a sparse reward is used.\n\nWe have added a new section that discusses the performance of the baseline methods (Section 6). We refer the reviewer to the revised text for more details.\n\n4. Oracle in the target environment\nWe have trained oracle agents for our examples and added them to the results (as seen in Figures 2-5). We trained the oracles for the hopper, walker2d and halfcheetah environments for 3 random seeds with 1 million samples using PPO, as in [1]. For the quadruped robot, we trained the oracle for 5 million samples, as in [2]. Our method is able to achieve comparable or even better performance than the oracle agents.\n\n5. Comparison to Tao Chen et al.\nWe thank the reviewer for pointing out the work by Tao Chen et al. [3], which we missed during our literature search. It is very interesting and highly relevant to ours. The most relevant part of the algorithm by Tao Chen et al. is the HCP-I policy, where a latent variable representing the variations is trained along with the neural net weights using reinforcement learning. During the transfer stage, HCP-I is fine-tuned in the target environment with another RL process. 
\n\nOur method differs from HCP-I in two aspects. First, our policy takes the dynamic parameters as input, while HCP-I learns a latent representation of them. Second, during the transfer of the policy, we search in the low-dimensional mu space using CMA, instead of fine-tuning the entire neural network. Learning a latent representation of the variations in the dynamics can be more flexible, while searching in the mu space is more sample-efficient and allows sparse rewards when methods like CMA are used. It is an interesting future direction to see whether HCP-I can overcome large dynamics discrepancies like the ones in our examples, and whether using CMA for identifying the latent variables in HCP-I can result in a more sample-efficient transfer algorithm. \n\nWe have added the HCP-I-related discussion to the Related Work and Conclusion sections.\n\n[1] Schulman, John, et al. \"Proximal policy optimization algorithms.\"\n[2] Tan, Jie, et al. \"Sim-to-Real: Learning Agile Locomotion For Quadruped Robots.\"\n[3] Chen, Tao, et al. \"Hardware Conditioned Policies for Multi-Robot Transfer Learning.\" NIPS, 2018.", "We have revised the paper based on the reviewers' comments. The main changes to the initial paper are the following:\n\n- Added a comparison to Yu et al. 2017 in Experiments (Figures 2-5).\n- Added a comparison to oracle agents, which are agents trained directly in the target environments (Figures 2-6).\n- Re-ran SO-CMA for the single-target examples of halfcheetah and quadruped to account for the initialization bias in the CMA-ES experiments (Figures 5, 6).\n- Added a discussion section for a more detailed discussion of the results from the different methods.\n- Revised the Related Work and Conclusion sections to include the work of Tao Chen et al. 2018.\n- Fixed typos and figure inconsistencies as pointed out by the reviewers.", "We thank the reviewer for the valuable feedback!\n \nWe share the reviewer’s concern that requiring explicit randomization parameters as inputs to the policy can be limiting for some applications. It is an interesting and important future direction to investigate how we can lift this limitation. One possible way is to use the method proposed by Eysenbach et al. [1], where a diverse set of skills is learned by maximizing how well a discriminative model can distinguish between different policies. Another possibility is to use the method in the work by Chen et al. [2], as pointed out by Reviewer 2. They learned a latent representation of the environment variations by optimizing a latent input to the policy during training.\n\n\n[1] Eysenbach, Benjamin, et al. \"Diversity is All You Need: Learning Skills without a Reward Function.\" arXiv preprint arXiv:1802.06070 (2018).\n[2] Chen, Tao, et al. \"Hardware Conditioned Policies for Multi-Robot Transfer Learning.\" NIPS, 2018.", "We thank the reviewer for the thoughtful comments! We have revised the paper to address the reviewer’s concerns, as detailed below.\n\n1. Comparison to Yu et al. 2017\nWe have added the comparison to UPOSI for the hopper, walker and halfcheetah examples (Figures 2, 3, 4 and 5). In general, UPOSI transfers better than Hist, as expected. Our proposed method was able to notably outperform UPOSI in the hopper and walker examples, while the results for the halfcheetah example are comparable.\n\n2. Discussion of baseline performances\nWe have added a new section (Section 6) that discusses the performance of the baseline methods for each example. Please refer to the revised text for more details. 
The related text is copied here for easy access:\n\n“We hypothesize that the large variance in the performance of the baseline methods is due to their sensitivity to the type of task being tested. For example, if there exists a robust controller that works for a large range of different dynamic parameters mu in the task, such as a bipedal running motion in the Walker2d example, training a Robust policy may achieve good performance in transfer. However, when the optimal controller is more sensitive to mu, Robust policies may learn to use overly-conservative strategies, leading to sub-optimal performance (e.g. in HalfCheetah) or failure to perform the task (e.g. in Hopper). On the other hand, if the target environment is not significantly different from the training environments, UPOSI may achieve good performance, as in HalfCheetah. However, as the reality gap becomes larger, the system identification model in UPOSI may fail to produce good estimates and result in non-optimal actions. Furthermore, Hist did not achieve successful transfer in any of the examples, possibly for two reasons: 1) it shares a similar limitation to UPOSI when the reality gap is large, and 2) it is in general more difficult to train Hist due to the larger input space, so that with a limited sample budget it is challenging to fine-tune Hist effectively.\n\nWe also note that although in some examples certain baseline methods may achieve successful transfer, the fine-tuning process of these methods relies on having a dense reward signal. In practice, one may only have access to a sparse reward signal in the target environment, e.g. distance traveled before falling to the ground. Our method, using an evolutionary algorithm (CMA), naturally handles sparse rewards, and thus the performance gap between our method (SO-CMA) and the baseline methods will likely be large if a sparse reward is used.”\n\n3. Experimental setup\nWe ran each trial with 3 random seeds and report the mean and one standard deviation in the plots. We have modified the first paragraph of the experiments section to emphasize this.\n\n4. J in Eq. 1 undefined\nThanks for spotting this! It was indeed due to a typo in the latex file that dropped J in Section 3. This has been fixed in the revision.\n", "This paper presents a novel approach for adapting a policy learned with domain randomization to the target domain. The parameters for domain randomization are explicitly used as input to the network learning the policy. When run in the target domain, CMA-ES is used to search over these domain parameters to find the ones that lead to the policy with the best returns in the target domain.\n\nThis approach is a novel one in the space of domain randomization and sim2real work. The results show that it improves over learning robust policies and over one version of doing an adaptive policy (feedforward network with history input). This approach could\n\nThe paper is well written, clearly explained, has clear results, and also explains and evaluates alternate design choices in the appendix.\n\nPros:\n- Demonstrated transfer across simulated environments\n- Outperforms basic robust and adaptive alternatives\n- Straightforward approach\nCons:\n- Requires explicit domain randomization parameters as input to the network. This restricts it from applying to work where the simulator is learned rather than parameterized in this way. \n" ]
[ 7, -1, 6, -1, -1, -1, -1, -1, -1, 7 ]
[ 4, -1, 4, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2019_H1g6osRcFQ", "SyevB3PopX", "iclr_2019_H1g6osRcFQ", "SJxgQshk0m", "iclr_2019_H1g6osRcFQ", "HJlhOgWZp7", "iclr_2019_H1g6osRcFQ", "rkel69Xq3X", "Hkg6Xry5h7", "iclr_2019_H1g6osRcFQ" ]
iclr_2019_H1gKYo09tX
code2seq: Generating Sequences from Structured Representations of Code
The ability to generate natural language sequences from source code snippets has a variety of applications such as code summarization, documentation, and retrieval. Sequence-to-sequence (seq2seq) models, adopted from neural machine translation (NMT), have achieved state-of-the-art performance on these tasks by treating source code as a sequence of tokens. We present code2seq: an alternative approach that leverages the syntactic structure of programming languages to better encode source code. Our model represents a code snippet as the set of compositional paths in its abstract syntax tree (AST) and uses attention to select the relevant paths while decoding. We demonstrate the effectiveness of our approach for two tasks, two programming languages, and four datasets of up to 16M examples. Our model significantly outperforms previous models that were specifically designed for programming languages, as well as general state-of-the-art NMT models. An interactive online demo of our model is available at http://code2seq.org. Our code, data and trained models are available at http://github.com/tech-srl/code2seq.
accepted-poster-papers
Overall this paper presents a few improvements over the code2vec model of Alon et al., applying it to seq2seq tasks. The empirical results are very good, and there is fairly extensive experimentation. This is a relatively crowded space, so there are a few natural baselines that were not compared to, but I don't think that comparison to every single baseline is warranted or necessary, and the authors have done an admirable job. One thing that still is quite puzzling is the strength of the "AST nodes only baseline", which the authors have given a few explanations for (using nodes helps focus on variables, and also there is an effect of combining together things that are close together in the AST tree). Still, this result doesn't seem to mesh with the overall story of the paper all that well, and again opens up some obvious questions such as whether a Transformer model trained on only AST nodes would have done similarly, and if not why not. This paper is very much on the borderline, so if there is space in the conference I think it would be a reasonable addition, but there could also be an argument made that the paper would be stronger in a re-submission where the above questions are answered.
test
[ "rkgOIg_yaX", "rkxpuJ9qn7", "r1xh1lBZAQ", "BkeUa0Vb0X", "HJeGbC7Z0m", "rylTcFveA7", "H1gSlHAA6m", "S1lPHIbk0m", "r1gdd_xy0X", "BJlBqinKT7", "ByxS4OqH67", "Hygxm-VB6Q", "HygBjl9m6Q", "BygxsiSMpX", "ByeVmSMGTX", "SJxc-Nzf6m", "r1g84NzfTX", "BJgTXkO53Q" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "The authors present a method for generating sequences from code. To achieve this, they parse the code and produce a syntax tree. Then, they enumerate paths in the tree along leaf nodes. Each path is encoded via an bidirectional LSTM and a (sub)token-level LSTM decoder with attention over the paths is used to produce the output sequence. The authors compare their model with other models and show that it outperforms them on two code-to-sequence tasks. An ablation study shows how different components affect the model's performance.\n\nOverall, the task seems very interesting and the results positive. My main concern is wrt the novelty of this work: the novelty of the proposed model seems limited compared to code2vec (Alon 2018b). To my understanding the core idea of both code2vec and code2seq is similar in many respects. The core difference is that paths, instead of treated as single units (code2vec), they are treated as sequences whose representation is computed by an LSTM.\n\nTo understand the work better, additional evaluation seem be necessary:\n\nQ1: Could the authors compare code2seq with an ablation of a 2-layer BiLSTM where the decoder predicts the output as a single token (similar to the \"no decoder\" ablation of code2vec)?\n\nComparing this result to the \"no decoder\" ablation of code2seq will show the extent to which code2seq's performance is due to its code encoding or if code2vec with an LSTM decoder output would have sufficed.\n\nQ2: Using the BiLSTM and the Transformer as baselines seems reasonable but there are other existing models such as Tree LSTMs, Graph Convolutional Neural Networks [a] and TBCNNs [b] that could also be strong baselines which take tree structure into account. Have the authors experimented with any of those?\n\nQ3: I find the results in Table 1 very confusing when comparing them with those reported in Alon et al(2018b): code2vec achieves the best performance in Alon et al (2018b) but it seems to be performing badly in this work. The empirical comparisons to the same baseline methods used in Alon et al. (2018b) yield very different results. Why is that so? It would be worth performing an additional evaluation on the datasets of Alon et al (2018b) using the code2seq model. This would clarify if the results observed here generalize to other datasets.\n\nQ4: The strategy of enumerating paths in the tree seems to be problematic for large files of code. It is unclear how the authors (a) do an unbiased sample of the paths. Do they need to first enumerate all of them and pick at uniform? (b) since the authors pick $k$ paths for each sample, this may imply that the larger the tree, the worse the performance of code2seq. It would be useful to understand if code2seq suffers from this problem more/less than other baselines.\n\n[a] Kipf, T.N. and Welling, M., 2016. Semi-supervised classification with graph convolutional networks.\n[b] Mou, L., Men, R., Li, G., Xu, Y., Zhang, L., Yan, R. and Jin, Z., 2015. Natural language inference by tree-based convolution and heuristic matching.", "This paper introduces an AST-based encoding for programming code and\nshows the effectivness of the encoding in two different task of code\nsummarization:\n\n1. Extreme code summarization - predicting (generating) function name from function body (Java)\n2. 
Code captioning - generating a natural language sentence for a (short) snippet of code (C#)\n\nPros:\n- Simple idea of encoding the syntactic structure of the program through random paths in ASTs\n- Thorough evaluation of the technique on multiple datasets and using multiple baselines\n- Better results than previously published baselines\n- Two new datasets (based on Java code present on GitHub) that will be made available\n- The encoding is used in two different tasks which also involve two different languages\n\nCons:\n- Some of the details of the implementation/design are not clear (see some clarifying questions below)\n- More stats on the collected datasets would have been nice\n- Personally, I'm not convinced \"extreme code summarization\"\nis a great task for code understanding (see more comments below)\n\nOverall, I enjoyed reading this paper and I think the authors did a\ngreat job explaining the technique, comparing it with other baselines,\nbuilding new datasets, etc.\n\nI have several clarifying questions/points (in no particular order):\n\n* Can you provide some intuition on why random paths in the AST encode\n the \"meaning\" of the code? And perhaps qualitatively compare it with\n recording some other properties from the tree that preserve its\n structure more?\n\n* When you perform the encoding of the function body, does one sample in a\n training step contain all the k (k = 200) paths and all the 2*k\n terminals (end of Section 2)? Or one path at a time (Section 3.2)?\n I'm guessing it's the latter, but not entirely sure. Figure 3 could\n be improved to make this clear.\n\n* Can you explain how you came up with k = 200? I think providing some\n stats on the dataset could be helpful to understand this number.\n\n* The results for the baselines - do you train across all projects?\n (As you point out, ConvAttention was trained separately; curious whether\n it makes a difference for the 2 datasets, med and large, not present\n in the original paper.)\n\n* I'm not sure I understand parts of the ablation study. In particular,\n for point 1., it seems that instead of the AST, only the terminal\n nodes are used. Do you still use 200 random pairs of terminals? Is\n this equivalent to a (randomly shuffled) subset of the tokens in the\n program? Also, could you explain why you do the ablation study on the\n validation set of the medium dataset? In fact, the caption of Table\n 3 says it's done on the dev set. This part was a bit confusing.\n\n* I would have liked to see more details on the datasets introduced,\n in particular wrt metrics that are relevant for training the model\n you describe (e.g., stats on the ASTs, stats on the number of random\n paths in ASTs, code length in tokens, etc.)\n\n* I'm not convinced that the task of \"extreme code summarization\" is a\n meaningful task. My main problem with it is that the performance of\n a human on this task would not be that great. On one hand, humans\n (I'm referring to \"programming humans\" :) ) have no problem\n coming up with a name for a function body; however, I'm not\n convinced they could predict the \"gold\" standard. Or, another way of\n thinking about this: if you have 3 humans who provided names for the\n same function, my guess is that there will be a wide degree of\n (dis)agreement. Some of the examples provided in the supplementary\n material can serve as confirmation bias for my thought :): Fig. 7. 
I\n claim \"choose random prime\" and \"generate prime number\" are\n semantically close, however, the precision and recall for this\n example are both low. All this being said, I understand that it's a\n task for which data can be generated fairly quickly to feed the\n (beast) NN and that helps pushing the needle in understanding code\n semantics.\n\n* It would be nice to see \"exact match\" as one of the metrics (it is\n probably low, judging by F1 scores, but good to be reported).\n\n* Most likely the following paper could be cited in the related work:\nNeural Code Comprehension: A Learnable Representation of Code Semantics\nhttps://arxiv.org/abs/1806.07336\nhttps://nips.cc/Conferences/2018/Schedule?showEvent=11359\n\nPage 5 first phrase at the top, perhaps zi is a typo and it is\nsupposed to be z1?\n\n----\n\nUpdate: after all the discussion, I'm lowering my score a bit while still hoping the paper will get published. I'm satisfied with the results and the improvement of the paper. I still find it a bit surprising that the pairs of literals/leaves in the tree are a good approximation for the program itself (as shown in one of the ablation study).\n", "We would like to thank the reviewers for all their excellent suggestions! We feel that they have helped us improve the paper. \n\nFollowing comments and suggestions by the reviewers, we performed the following major experiments so far:\n* Performed an evaluation of our model on code2vec's dataset (which showed an improvement over their results in 8.2 F1 points).\n* Performed an evaluation of our model on Hu et al.'s (ICPC'2018) dataset (which showed an absolute gain of 5.5 BLEU points, 60% relative).\n* Performed an additional experiment of BiLSTMs without a decoder (which showed that our model without a decoder achieves an absolute gain of 16 F1 points over a BiLSTM without a decoder).\n* Performed an additional analysis of our test set with respect to input code size (which showed that our model consistently outperforms the baselines across short and long code snippets).\n\n* We could not directly compare to the following suggested works: \n + GGNNs (Allamanis et al., ICLR'2018) - because their model is not able to generate sequences nor to encode a whole snippet of code, but can only represent atomic elements such as variables.\n + Tree2tree (Chen et al., 2018) is not publicly available and addresses a different task.\n + Related works such as Piech et al. (ICML'2015), Kipf et al. (ICLR'2017), Mou et al. (AAAI'2016) are all interesting directions but do not generate sequences, and adapting them to our tasks is a contribution of its own. Bastings et al.'s (EMNLP'2017) approach cannot be applied to source code in a straightforward way.\n \nWe included references to Chen et al. (2018), Piech et al. (ICML'2015) and Bastings et al. (EMNLP'2017). We are not aware of any other work (except for baselines that we already used) that targets a task of code-to-sequence and is reproducible. We will make all of our code publicly available and reusable such that future works will have concrete and reproducible baselines. Our datasets are already publicly available and used in works such as [a].\n\n[a] Anonymous, Structured Neural Summarization, Under review for ICLR'19 - https://openreview.net/forum?id=H1ersoRqtm\n", "Thank you.\nWe included references to Chen et al. and Bastings et al. in the Related Work section.", "Thank you for your questions and comments.\n\n| novelty on the encoder side ... 
there it seems like a small modification over the code2vec \n| paper.\nBesides the decoder, our model has several conceptual advantages over code2vec: it is an order of magnitude lighter, and it can represent *any* path while code2vec can represent only monolithically observed paths. Additionally, it performs better than code2vec even without a decoder. Although it shares some ideas with code2vec, our work is a milestone which further increases our understanding of the abilities of neural networks in this area of encoding source code and transforming it to other modalities.\n\n| Why can't the encoder be a Tree-RNN (ICML'2015) or a GGNN (Allamanis et al 2018) with a\n| similar LSTM decoder over sub-tokens? ... Without comparisons to them, it is difficult to know\n| whether those models are better at capturing syntactic structure of programs or if the code2seq\n| encoder model is better.\nThe GGNN model of Allamanis et al (ICLR'2018) cannot encode an entire AST as our model. In that model, every identifier in the program is assigned a graph node. For example, in the statement \"x = y;\" each of \"x\" and \"y\" is assigned a graph node, with an edge that represents \"ComputedFrom\" between them. Then, representations of these identifiers are updated by propagating information from their neighbor nodes. The final outcome is a vector representation for each node (i.e., a variable or a constant), but there is no representation for the entire program. Thus, it is not a substitution for a program encoder, and adapting it to encode a whole AST is an interesting, non-trivial, direction on its own.\n\nWe agree that Tree-RNNs can encode source code and use an LSTM decoder, and we will include a reference to the ICML'2015 work. However, the fact remains that no work has done that for the difficult task of generating sequences. The closest work which encoded code using Tree-RNNs and generated sequences is the work of Liang et al. (AAAI'2018) - which did not show promising results: their model performed better than NMT baselines only in some of their benchmarks and only because they hindered their baselines by depriving them of non-alphanumerical characters. We agree that exploring other approaches for encoding source code is a fascinating direction for future research.\n", "Thank you for addressing my comments. Regarding Q2, I understand your position and the unfortunate situation regarding the code of Chen et al.\n\nAlthough I still think that a comparison with TreeLSTMs and/or GCNs would be useful, taking into consideration the unavailability of Chen et al. code, if the AC and the other reviewers do not see this as an issue and agree with the authors' position that the comparison with existing baselines is sufficient, I will _not_ further argue for rejecting this paper.", "Thank you for increasing your rating and for your suggestions.\n\nQ2: Thank you for both references. We contacted Chen et al, but they were not able to provide their code at this time.\nAs for replicating additional NMT baselines: there are dozens of interesting NMT approaches presented at each conference, and it is not reasonable for us to replicate them all for our code2seq benchmarks. Therefore, we focused on the two most popular seq2seq approaches, i.e., LSTMs and transformers.\n\nIt is also important to emphasize that applying graph convolutions to code is not a straightforward application of Bastings et al's approach. 
For example, a fundamental difference between dependency trees in natural language and ASTs is that all non-terminals in an AST are of a very limited vocabulary, whereas almost any natural-language word can be a non-terminal in a dependency parse. We agree that exploring graph convolutions for the task of code-to-sequence is an interesting direction for future research.\n\nQ4: This is a great idea that will further strengthen our paper. We will add a graph that shows how the performance of different models changes as the size of the input method grows. This will be added within this discussion period.\n", "Thanks for the clarification regarding the results presented in code2vec (Alon et al. 2018a) paper, and the choice of different k values. I think it might be interesting to add results (with accuracy numbers) regarding different k values.\n\n| The main difference between code2vec and code2seq is inherent even in the name: our model generates a\n| sequence, whereas code2vec is a classifier. While the two share some ideas on the encoder side, to say that our\n| code2seq approach is not novel because code2vec already exists is akin to saying that seq2seq NMT is not novel\n| because we already had text categorization models.\n\nYes, the difference between generating a single output and sequence is important, but isn't the decoder simply an LSTM generating sequence of sub-tokens while attending over the path summaries? Since a similar decoder was proposed in Allamanis et al. (2016) to generate sequences of sub-tokens, I was focusing the novelty more on the encoder side of the model (which is quite interesting), but there it seems like a small modification over the code2vec paper.\n\n| We could not compare directly to [1] (ICML'2015) or the GGNN work (Allamanis et al 2018) because they cannot \n| generate sequences and are thus incomparable to our work. We think that deliberately paralyzing our model's\n| decoder and attention mechanism just to compare to older models (from 2015) and tasks goes against the main\n| idea of our work: to generate sequences from code - a task which none of these works solve.\n \nI am not sure I fully understand this comment. Why can't the encoder be a Tree-RNN [1] (ICML'2015) or a GGNN (Allamanis et al 2018) with a similar LSTM decoder over sub-tokens? I believe one of the key findings of the paper is that the syntactic structure of programs is important, and the present encoder does a great job at representing programs using a summary of randomly selected paths. But these other papers also use different models to capture the structure of programs and show that it helps there as well. Without comparisons to them, it is difficult to know whether those models are better at capturing syntactic structure of programs or if the code2seq encoder model is better.", "We updated our submission to include a great suggestion by Reviewer 2:\n\n- We added a graph of performance compared to the number of input code lines to Appendix A (page 13), with a reference from Section 4.\n- Table of statistics (Table 5): we added the average number of lines in each dataset (tl;dr: around 6.5 in Java methods, and 8.3 in C# StackOverflow posts )\n\nThank you for this fruitful suggestion.", "Thank you for the response. You have addressed many of my questions and I have increased my rating by one. 
However, I would like to further discuss Q2 and Q4, which I find important.\n\nQ2: Indeed the cited papers are not directly applicable to your task and would require some small modifications, which nevertheless are not an unreasonable ask for an ICLR submission. Also, there are papers based on the principles of [a] and [b] that are directly applicable. This is a point that AnonReviwer3 also discusses.\n\n* Chen et al. [c] can be used as a TreeLSTM-based baseline for your task. While their model's decoder is a TreeLSTM one can easily pass a sequence as a degenerate tree to the decoder, without even modifying the code. Removing the existing decoder for an LSTM should not be hard either.\n* Bastings et al. [d] has a GCN-based encoder/LSTM-decoder baseline that seems directly applicable to your setting.\n\nComparing with these models will allow readers to understand the trade-offs involved and the weaknesses of each model.\n\nQ4: Your response makes sense and although I won't insist on this, it would be nice to backup your claim with a graph that shows how the performance of the different models decreases as the size of the input method grows.\n\n\n\n[c] Chen, X., Liu, C. and Song, D., 2018. Tree-to-tree Neural Networks for Program Translation. arXiv preprint arXiv:1802.03691.\n[d] Bastings, J., Titov, I., Aziz, W., Marcheggiani, D. and Sima'an, K., 2017. Graph convolutional encoders for syntax-aware neural machine translation. arXiv preprint arXiv:1704.04675.\n\n", "We updated our submission to address some of the comments raised by the reviewers and the AC:\n\n- We improved our wording in \"the first work to leverage the syntactic structure of code for end-to-end generation of sequences\" (Introduction), as suggested by the AC.\n- We included references to Oda et al. (ASE'2015), Hu et al. (ICPC'2018) and discussed the differences between our work and theirs in the Related Work [AC].\n- We further discussed the \"no AST nodes\" ablation and compared it conceptually to the Transformer in the Ablation Study [Reviewer1, AC].\n- We clarified that we use k paths for each sample in a single training step (and not only a single path) in Section 3.2 and Figure 3 [Reviewer1].\n- We discussed the choice of k=200 in Appendix A, with a reference from Section 4 (this was included in the initial version in Appendix E but was maybe easy to miss.) [Reviewer1,3]\n- We clarified that we train across all projects, rather than separately. [Reviewer1]\n- We clarified that the ablation study was performed on the *validation set* of Java-med in Table 3 (instead of \"dev set\"). [Reviewer1]\n- We added a table of statistics of all of the datasets in Appendix A with a reference from the Evaluation section. [Reviewer1,3]\n- We added an empirical comparison to code2vec *on their dataset* and a discussion of splitting by-project compared to by-file in Appendix A. [Reviewer2+3].\n- Fixed a typo in page 5 (k_1 instead of k_i) [Reviewer1]\n\nThank you again for your fruitful comments.\n", "Thank you for reading our work and responses, and for your insightful comments.\n\n| The paper says \"To the best of our knowledge, this is the first work to leverage the syntactic \n| structure of code for end-to-end generation of sequences.\" I'm not sure if this is 100% true.\n\nWe agree. We will refine our claim to \"the first work to directly use paths in the abstract syntax tree for end-to-end generation of sequences\". 
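To make this concrete, the path-based encoding can be sketched roughly as follows (a PyTorch-style sketch; all names, dimensions and the subtoken handling here are illustrative assumptions, not our exact implementation):

```python
import torch
import torch.nn as nn

class PathEncoder(nn.Module):
    """Encode one AST path: a biLSTM over the node types, concatenated
    with summed subtoken embeddings of the two terminal values."""

    def __init__(self, n_node_types, n_subtokens, dim=128):
        super().__init__()
        self.node_emb = nn.Embedding(n_node_types, dim)
        self.subtok_emb = nn.Embedding(n_subtokens, dim, padding_idx=0)
        self.lstm = nn.LSTM(dim, dim, bidirectional=True, batch_first=True)
        self.combine = nn.Linear(4 * dim, dim)

    def forward(self, path_nodes, left_subtoks, right_subtoks):
        # path_nodes: (batch, path_len); *_subtoks: (batch, n_sub)
        _, (h, _) = self.lstm(self.node_emb(path_nodes))
        path_vec = torch.cat([h[0], h[1]], dim=-1)       # final fwd + bwd states
        left = self.subtok_emb(left_subtoks).sum(dim=1)  # bag of subtokens
        right = self.subtok_emb(right_subtoks).sum(dim=1)
        return torch.tanh(self.combine(torch.cat([path_vec, left, right], dim=-1)))
```

The decoder then attends over the k such vectors produced for each example.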
We will cite those works, thank you for bringing them to our attention.\n\n| \"Learning to Generate Pseudo-code from Source Code using Statistical Machine Translation,\" \n| Oda et al. ASE 2015.\n\nThank you for bringing this work to our attention. This may have been the first to generate sequences by leveraging the syntax. The problem in comparing our work with Oda et al.'s is that they perform line-by-line statistical machine translation (SMT), and can thus leverage the given line-alignment between the source code and target pseudocode. Our tasks are different, and we cannot assume a 1-to-1 alignment between elements in the input and the output; our tasks take a whole code snippet as their input and produce a much shorter sequence as output.\nA conceptual advantage of our model over line-by-line translation is that our model can capture multiline patterns in the source code. These multiline patterns are often very useful for the model and get the most attention (Figure 1(a)).\nPractically, we will not manage to adapt their model to Java/C# or adapt our model to Python within the discussion period. The closest baseline that we have is the MOSES SMT tool, which our model outperforms by a large gap. We will cite this work and discuss the differences.\n\n| \"Deep Code Comment Generation,\" Hu et al, ICPC'2018\n\nThis would be an interesting comparison; however, neither their code nor dataset are publicly available. We just emailed the author and asked for the code or dataset, and we will add this as a baseline if possible within the discussion period.\nAs you suggest, there is a conceptual difference between our approaches. Hu et al. linearize the AST, and then pass it on to a standard seq2seq model. We present a new model, in which the encoder already assumes that the input is tree-structured. We will cite this work and discuss the differences.\n\n| the \"No AST Nodes (only tokens)\" baseline was highly competitive, better than any of the \n| other tested methods. Do you have any idea why this would be the case? I couldn't think of\n| any reason why a method that only looked at pairs of tokens would do better than a method\n| like the transformer, which has much more expressive power, and can implicitly capture pairs\n| of tokens through self attention.\n\nThis surprised us as well. The transformer is indeed able to capture all pairs of tokens. However, *not all tokens are AST leaves*. By focusing on AST leaves, we are increasing the focus on named tokens, and effectively ignoring functional tokens like brackets, parentheses, semicolons, etc.\nTransformers can (in theory) capture the same signal, but perhaps they require significantly more layers or a different optimization to actually learn to focus on those particular elements. The AST gives us this information for free without having to spend more transformer layers just to learn it. \nAdditionally, for practical reasons, we limited the length of the paths to 9. This leads to pairs of leaves that are close in the AST, but not necessarily close in the sequence. In contrast, the transformer's attention is effectively skewed towards sequential proximity because of the positional embeddings. \nWe will include this discussion in our ablation study.\n", "Thank you for the quick and thorough author response, I trust it has addressed some reviewer concerns. (Reviewers, please take not and respond/change scores as necessary!) 
I just had two other quick clarifications.\n\nFirst, the paper says \"To the best of our knowledge, this is the first work to leverage the syntactic structure of code for end-to-end generation of sequences.\" I'm not sure if this is 100% true, and it should probably be clarified with respect to the following two works:\n* One of the first works on using machine learning methods for code commenting, albeit not in the neural framework, used a model that transformed AST trees into pseudocode: \"Learning to Generate Pseudo-code from Source Code using Statistical Machine Translation,\" Oda et al. ASE 2015.\n* Recently, there has been a paper that used AST structure, although linearized into a sequence, for neural code comment generation: \"Deep Code Comment Generation,\" Hu et al. ICPC 2018.\nThe second is an easy baseline that could be added, and the first has been tested on the publicly available Django code commenting dataset, so an empirical comparison may be feasible.\n\nSecond, I had a question about the ablation:\nI found it quite puzzling that the \"No AST Nodes (only tokens)\" baseline was highly competitive, better than any of the other tested methods. Do you have any idea why this would be the case? I couldn't think of any reason why a method that only looked at pairs of tokens would do better than a method like the transformer, which has much more expressive power, and can implicitly capture pairs of tokens through self attention.", "Thank you for your detailed review. You raise many important points, which we think are all addressable within the discussion phase. Please see our detailed response below.\n\n| Q1: Could the authors compare code2seq with an ablation of a 2-layer BiLSTM where the\n| decoder predicts the output as a single token (similar to the \"no decoder\" ablation of\n| code2vec)?\n| Comparing this result to the \"no decoder\" ablation of code2seq will show the extent to which\n| code2seq's performance is due to its code encoding or if code2vec with an LSTM decoder\n| output would have sufficed.\n\nWe definitely can. The results for this baseline (2-layer BiLSTM encoder with single-token prediction, with the same size of target vocab as in our \"no decoder\") are: \nPrecision: 31.42, Recall: 13.92, F1: 19.29\nOur model with \"no decoder\" on the same dataset (Table 3): \nPrecision: 47.99, Recall: 28.96, F1: 36.12.\n\n| Q2: Using the BiLSTM and the Transformer as baselines seems reasonable but there are\n| other existing models such as Tree LSTMs, Graph Convolutional Neural Networks [a] and\n| TBCNNs [b] that could also be strong baselines which take tree structure into account. \n\nWe did our best to find and re-train *any* baselines that we could find and are relevant for the task of generating sequences from code. We could not compare to the Tree-RNN of Liang and Zhu (AAAI'2018) because of replicability issues in their work (as described in the footnote of page 5); the Graph Convolutional model of [a] was designed for classification only; the TBCNN work of [b] addressed sentiment *classification* in NLP; the model of Mou et al (AAAI'2016) is also a classifier and was applied to code classification tasks and to detection of bubble sort. All of these classification models cannot generate sequences and are thus incomparable to our work. 
\nWe think that deliberately paralyzing our model's decoder and attention mechanism just to compare to older models (from 2015-2016) and tasks goes against the main idea of our work: to generate sequences from code - a task which none of these works solve.\n\n| Q3: code2vec achieves the best performance in Alon et al (2018b) but it seems\n| to be performing badly in this work.\n\nWe agree that this is a little confusing, we will clarify the following point in the paper:\non their dataset, our model gets precision of 70.2 (vs. their 63.1), recall of 63.3 (vs. their 54.4) and F1 of 66.6 (vs. 58.4). We believe that the reason for the lower results of code2vec on our datasets is that their dataset is split to train/dev/test *by-file*, while in our datasets we split always *by project*. In their dataset, a file can be in the training set, while another file from the same project can be in the test set. This makes their dataset significantly easier, because method names often \"leak\" to other files in the same project, and there are often duplicates in different files of the same project. This is consistent with Allamanis et al (ICLR'2018) who found that splitting by-file makes the dataset easier than by-project.\nWe decided to take the stricter approach, and not to use their dataset (even though our model achieves better results on it), in order to make all of our comparisons on split-by-project datasets. We will add the results on their per-file split dataset to the appendix, although we advocate for using the harder per-project split.\n\n| Q4: The strategy of enumerating paths in the tree seems to be problematic for large files of\n| code. It is unclear how the authors (a) do an unbiased sample of the paths. Do they need to\n| first enumerate all of them and pick at uniform? (b) since the authors pick $k$ paths for each\n| sample, this may imply that the larger the tree, the worse the performance of code2seq.\n\nLarge files are not a problem because we worked on method level, but we agree that maybe extremely huge *methods* are. \n(a) We enumerate all of the paths in advance and sample them uniformly at training time (a different subset is sampled on every training iteration). This is a technical detail; in a future implementation we plan to generate random paths on CPU in parallel to training on GPU.\n(b) We did observe that scores on larger Java methods are a little lower than on short methods, but mostly because the names are more diverse in long methods. This was even more apparent for the seq2seq baselines, because large Java methods are sometimes more than 5000 tokens long, and are thus very difficult to digest for the BiLSTM and the Transformer. \nSince we evaluated our model and the baselines on StackOverflow snippets and Java methods, the size of the code was not much of a problem. If a future dataset contained extremely large code snippets, the value of k can be easily increased (maybe at the cost of a smaller batch size). Remember that k does not need to be as large as the number of existing paths, since the information contained in a missing path is often \"covered\" by other paths.", "Thank you for your detailed review. You raise many important points, which we think are all addressable within the discussion phase. Please see our detailed response below.\n\n| The novelty of the code2seq model is somewhat limited compared to the model presented in\n| code2vec (Alon et al. 2018a) paper. 
\n\nThe main difference between code2vec and code2seq is inherent even in the name: our model generates a sequence, whereas code2vec is a classifier. While the two share some ideas on the encoder side, to say that our code2seq approach is not novel because code2vec already exists is akin to saying that seq2seq NMT is not novel because we already had text categorization models.\nAll of these advantages over code2vec give our model 40%-230% better results than code2vec on the code summarization task, while code2vec definitely cannot perform the code captioning task (Section 4.2). Our work sets a new state of the art for code-to-natural-language tasks, with a model that is much more compact in terms of the number of parameters (code2seq has an order-of-magnitude fewer parameters compared to code2vec!)..\n\n| For the code summarization evaluation, would it be possible to evaluate the code2seq model\n| on the dataset used by the code2vec paper? On that dataset, the code2vec approach gets a\n| precision score of 63.1, recall of 54.4, and F1 score of 58.4.\n\nWe agree that this is a little confusing, we will clarify the following point in the paper:\non their dataset, our model gets precision of 70.2 (vs. their 63.1), recall of 63.3 (vs. their 54.4) and F1 of 66.6 (vs. 58.4). We believe that the reason for the lower results of code2vec on our datasets is that their dataset is split to train/dev/test *by-file*, while in our datasets we split always *by project*. In their dataset, a file can be in the training set, while another file from the same project can be in the test set. This makes their dataset significantly easier, because method names often \"leak\" to other files in the same project, and there are often duplicates in different files of the same project. This is consistent with Allamanis et al (ICLR'2018) who found that splitting by-file makes the dataset easier than by-project.\nWe decided to take the stricter approach, and not to use their dataset (even though our model achieves better results on it), in order to make all of our comparisons on split-by-project datasets. We will add the results on their per-file split dataset to the appendix, although we advocate for using the harder per-project split.\n\n\n| One of the key findings of the paper is that syntactic structure of programs is important to\n| encode. Similar observations have been made in other program embedding papers that use\n| for example Tree-RNN [1] or graph neural networks (GNN) [Allamanis et al. 2018]. It would be\n| quite valuable to compare the current results with the Tree-RNN or GNN models \n\nWe did our best to find and re-train *any* baselines that we could find and are relevant for the task of generating sequences from code. We could not compare directly to [1] (ICML'2015) or the GGNN work (Allamanis et al 2018) because they cannot generate sequences and are thus incomparable to our work. \nWe think that deliberately paralyzing our model's decoder and attention mechanism just to compare to older models (from 2015) and tasks goes against the main idea of our work: to generate sequences from code - a task which none of these works solve.\n\n\n| The value of k=200 seems a bit large for the examples presented in the paper. What happens\n| when smaller values of k are used (e.g. k=10, 20?) What are the average number of paths in\n| the java programs in the dataset?\n\nIf an example contains less than 200 paths, we simply take all of them. 
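In pseudocode, the per-iteration sampling is simply the following (a minimal sketch; the function and variable names are hypothetical):

```python
import random

def sample_paths(all_paths, k=200):
    """Draw a fresh uniform subset of up to k AST paths for one example;
    a different subset is drawn on every training iteration."""
    if len(all_paths) <= k:
        return list(all_paths)  # short methods simply keep all their paths
    return random.sample(all_paths, k)
```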
A \"too large\" value of k is not a problem.\nThe average number of paths in our Java-large training set is 220 paths per example. For some large methods, a number as high as 200 is beneficial. We empirically experimented with the value of k but did not include it because of space limitations. Lower values than k=100 show worse results, increasing to k=200 shows a minor improvement, above k=200 there is no improvement. Practically, k=200 was found to be a reasonable sweet spot between capturing enough information while keeping training feasible in the GPU's memory.\nWe will include those and other statistics of the datasets.\n", "Thank you for reviewing our work so kindly.\n\n| Can you provide some intuition on why random paths in the AST encode the \"meaning\" of the code? \n\nWe think that AST paths are a useful decomposition of the AST because they capture long-range relations and interactions in a relatively short sequence of symbols. For example, the green path (3) in Figure 1(a) tells us that a variable named \"set\" is iterated in a \"For\" loop, and then the method \"equalsIgnoreCase\" is called on an object inside the condition of an \"If\" statement, which resides inside the loop body. This path contains a lot of information in a relatively short sequence of symbols (6 nodes + 2 terminal values). Additionally, these paths are robust to mutations - even if we inserted an additional statement between the \"For\" and the \"If\" lines - this path will still be kept (in contrast to the sequential NMT baselines, in which insertions will push \"set\" and \"equalsIgnoreCase\" farther away from each other). \n\n| When you perform the encoding of the function body, one sample in a training step contains all\n| the k (k = 200) paths and all the 2*k terminals (end of Section 2)? Or one path at a time (Section 3.2)?\n\nOne sample in a training step contains all the k=200 sampled paths and 2*k terminals. In Figure 3, the green boxes are all the k=200 paths with their terminals. \nThe attention mechanism is used to dynamically select a distribution over these 200 paths while decoding, just like a NMT model would attend over the source tokens.\nWe will clarify this in Section 3.2.\n\n\n| Can you explain how you came up with k = 200? I think providing some stats on the dataset could be helpful to understand this number.\n\nThe average number of paths in our Java-large training set is 220 paths per example. For some large methods, a number as high as 200 is beneficial. We empirically experimented with the value of k but did not include it because of space limitations. Lower values than k=100 show worse results, increasing to k=200 shows a minor improvement, and for k>200 there is no significant improvement. Practically, k=200 was found to be a reasonable sweet spot between capturing enough information while keeping training feasible in the GPU's memory.\nWe will include those and other statistics of the datasets.\n\n| The results for the baselines - do you train across all projects? (As you point out,\n| ConvAttention trained separately, curious whether it makes a difference for the 2 datasets med\n| and large not present in the original paper).\n\nIn all of our experiments we always train across all projects. ConvAttention trained separately in their original paper, but when we retrained their model we trained it (and our model) across multiple projects and tested on other projects. 
Training and testing within the same project makes the problem significantly easier (as Allamanis et al (ICML'2016) also note). In our benchmarks, we test how well a model can generalize to completely unseen projects.\n\n| I'm not sure I understand parts of the ablation study. In particular for point 1., it seems that\n| instead of the AST, only the terminal nodes are used. Do you still use 200 random pairs of\n| terminal? Is this equivalent to a (randomly shuffled) subset of the tokens in the program? \n\nWe still use 200 random pairs of terminals, yes. Ablation #1 (no AST nodes) is equivalent to a random subset of *pairs* of tokens, which is more informative than just a subset of the tokens. We hypothesize that the fully connected layer learns how each pair of tokens interact with each other, and attending over those interactions it is a little more powerful than attending over each token separately.\n\n| why you do the ablation study on the validation set of the medium dataset? In fact, the caption\n| of Table 3 says it's done on the dev set. \n\nThe choice for the dataset in the ablation study could have been different - we hoped that since Java-med contains the top 1000 projects in Github, it is both large enough and of a high enough code quality to easily observe the effect of the ablations. \nWe performed it on the validation set of Java-med. The development (dev) set refers to the validation set; we will clarify the terminology in Table 3.\n\n\n| I would have liked to see more details on the datasets introduced, in particular wrt metrics that\n| are relevant for training the model you describe (e.g., stats on the ASTs, stats on the number of\n| random paths in ASTs, code length in tokens, etc.)\n\nWe agree. We will include additional stats. \n", "| I'm not convinced that the task of \"extreme code summarization\" is a meaningful task. My\n| main problem with it is that the performance of a human on this task would not be that great.\n| On one hand humans (I'm referring to \"programming humans\" :) ) have no problem in coming\n| up with a name for a function body; however, I'm not convinced they could predict the \"gold\"\n| standard. Or, another way of thinking about this, if you have 3 humans who provided names for\n| the... All this being said, I understand that it's a task for which data can be generated fairly\n| quickly to feed the (beast) NN and that helps pushing the needle in understanding code semantics.\n\nWe agree that code summarization tasks are difficult to measure and evaluate. For this reason, we follow the standard practice of the existing literature and used tasks and metrics that were introduced by previous work (Iyer et al., ACL 2016; Allamanis et al. ICML 2016). Since the improvement of our model over the baselines is substantial and consistent across datasets and tasks, we believe that our model is better at modeling the data and is thus an important contribution.\n\n| It would be nice to see \"exact match\" as one of the metrics (it is probably low, judging by F1 scores, but good to be reported).\n\nExact match accuracy for our model: on Java-large: 35.0%, Java-med: 29.5%, Java-small: 15.4%. 
We did not include these results because they are mostly correlated with the F1 scores, and we feel that precision/recall is more informative.\n\n| Most likely the following paper could be cited in the related work: Neural Code Comprehension: A Learnable Representation of Code Semantics\n\nThank you, we will include a reference.\n\n| Page 5 first phrase at the top, perhaps zi is a typo and it is supposed to be z1?\n\nThank you, this is indeed a typo and should be z1.\n", "This paper presents a new code-to-sequence model called code2seq that leverages the syntactic structure of programming languages to encode source code snippets, which are then decoded to natural language using a sequence decoder. The key idea of the approach is to represent a program using a set of k randomly sampled paths in its abstract syntax tree. For each path, the path is encoded using a recurrent network and concatenated with the embeddings of the two leaf terminal values of the path. The path encodings are then averaged to obtain the program embedding, which is then used to initialize a sequence decoder that also attends over the path embeddings. The code2seq model is evaluated over two tasks: 1) Code summarization: predicting a method's name from its body, and 2) Code captioning: generating a natural language sentence from a method's body depicting its functionality. The code2seq model significantly outperforms the other baseline methods, and the ablation study shows the importance of various design choices.\n\nThis paper presents an elegant way to represent programs using a set of paths in the AST, which are then weighted using an attention mechanism to attend over relevant path components. The code2seq model is extensively evaluated over two domains of code summarization and code captioning, and results show significant improvements.\n\nThe novelty of the code2seq model is somewhat limited compared to the model presented in the code2vec (Alon et al. 2018a) paper. In code2vec, a program is encoded as a set of paths, where each path comes from a fixed vocabulary. The code2seq model instead uses an LSTM to encode individual paths, which allows it to generalize to new paths. This is a more natural choice for embedding paths, but it doesn't appear to be a big conceptual advance in the model architecture. The use of subtoken embeddings for encoding/decoding identifier names is different in code2seq, but it has been proposed earlier in other code embedding models.\n\nFor the code summarization evaluation, would it be possible to evaluate the code2seq model on the dataset used by the code2vec paper? On that dataset, the code2vec approach gets a precision score of 63.1, recall of 54.4, and F1 score of 58.4 [Table 3 on page 18], which are comparable to the overall scores of the code2seq model.\n\nOne of the key findings of the paper is that the syntactic structure of programs is important to encode. Similar observations have been made in other program embedding papers that use for example Tree-RNN [1] or graph neural networks (GNN) [Allamanis et al. 2018]. It would be quite valuable to compare the current results with the Tree-RNN or GNN models (without performing additional dataflow and control-flow post processing) to see how well the paths-based embeddings work in comparison to these models.\n\nThe value of k=200 seems a bit large for the examples presented in the paper. What happens when smaller values of k are used (e.g. k=10, 20)? What is the average number of paths in the Java programs in the dataset?\n\n1. 
Chris Piech, Jonathan Huang, Andy Nguyen, Mike Phulsuksombati, Mehran Sahami, Leonidas Guibas. Learning Program Embeddings to Propagate Feedback on Student Code\nICML 2015\n" ]
[ 6, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2019_H1gKYo09tX", "iclr_2019_H1gKYo09tX", "iclr_2019_H1gKYo09tX", "rylTcFveA7", "S1lPHIbk0m", "H1gSlHAA6m", "BJlBqinKT7", "ByeVmSMGTX", "iclr_2019_H1gKYo09tX", "BygxsiSMpX", "iclr_2019_H1gKYo09tX", "HygBjl9m6Q", "iclr_2019_H1gKYo09tX", "rkgOIg_yaX", "BJgTXkO53Q", "rkxpuJ9qn7", "SJxc-Nzf6m", "iclr_2019_H1gKYo09tX" ]
iclr_2019_H1gL-2A9Ym
Predict then Propagate: Graph Neural Networks meet Personalized PageRank
Neural message passing algorithms for semi-supervised classification on graphs have recently achieved great success. However, for classifying a node these methods only consider nodes that are a few propagation steps away and the size of this utilized neighborhood is hard to extend. In this paper, we use the relationship between graph convolutional networks (GCN) and PageRank to derive an improved propagation scheme based on personalized PageRank. We utilize this propagation procedure to construct a simple model, personalized propagation of neural predictions (PPNP), and its fast approximation, APPNP. Our model's training time is on par or faster and its number of parameters on par or lower than previous models. It leverages a large, adjustable neighborhood for classification and can be easily combined with any neural network. We show that this model outperforms several recently proposed methods for semi-supervised classification in the most thorough study done so far for GCN-like models. Our implementation is available online.
accepted-poster-papers
There were several ambivalent reviews for this submission and one favorable one. Although this is a difficult case, I am recommending accepting the paper. There were two main questions in my mind. 1. Did the authors justify that the limited neighborhood problem they try to fix with their method is a real problem and that they fixed it? If so, accept. Here I believe evidence has been presented, but the case remains undecided. 2. If they have not, are the method and experiments sufficiently useful to be interesting anyway? This question I would lean towards answering in the affirmative. I believe the paper as a whole is sufficiently interesting and well enough executed to be accepted, although I was not convinced of the first point (1) above. One review voting to reject did not find the conceptual contribution very valuable but still thought the paper was not severely flawed. I am partly down-weighting the conceptual criticism they made. I am more concerned with experimental issues. However, I did not see sufficiently severe issues raised by the reviewers to justify rejection. Ultimately, I could go either way on this case, but I think enough members of the community will benefit from reading this work that it should be accepted.
train
[ "rylzwtw3JV", "ryeRGUehk4", "ryl2um9KJV", "SygzJkminQ", "HkxoIiqdk4", "B1xn8s4o6m", "HJg43sI_6Q", "S1lvvM8OTX", "BkxnHzUdaQ", "HkeSQM8Oa7", "S1ga6ZIuT7", "S1e-m_U5nX", "Bkxy1bhPnX" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Dear reviewer,\n\nThank you for clarifying your review and reconsidering and upgrading your score!\n\nWe would like to point out that Laplacian feature propagation is just that very basic PPR-based baseline you wanted to see -- it uses PPR-like feature propagation in combination with logistic regression.\n\nSince we both agree that LASAGNE falls into a different category of methods and that we use PPR in a very different way (to propagate information instead of sampling contexts for a skip-gram model), we are not quite sure what work you are referring to that reduces the novelty value of our method. Our model's simplicity might make it seem like a minor contribution but it also makes the model easy to implement, train, optimize, extend and scale. E.g. note that GNNs with many layers suffer from difficulties in gradient-based training, while our method (thanks to the decoupling of the propagation step) does not, making it more suitable to use in practice.", "** Issue of Limited Range **\n\nEvidence for showing that larger neighborhoods are beneficial is shown e.g. in Figures 4 and 5 of the paper. Figure 4 shows how the accuracy increases dramatically on Cora-ML and PubMed as we increase the number of propagation steps beyond 2. Figure 5 shows that the optimal α lies between 0.05 and 0.2. For these values, between 86% and 51% of the influence comes from neighborhoods using more than 2 propagation steps.\n\nFurthermore, larger neighborhoods are especially important in the sparsely labelled setting, as shown by Li, Han and Wu (AAAI 2018) and in Figure 3 of our paper. This figure shows that our method can handle small training sets best and outperforms GCN by 6 percentage points in this setting.\n\nXu et al. (ICML 2018) have also found the limited range to be an issue, especially for nodes in the periphery. Very little information will reach these nodes with only 2 hops and a higher range is therefore critical for classifying these.\n\n** Oversmoothing and attention-like mechanisms **\n\nAn attention-like mechanism for working with multiple different neighborhood sizes was already investigated in previous work by Xu et al. in the jumping knowledge networks (JK) model. However, for most experiments they still achieved best performance when using only 2-3 layers. In our own experiments we have found JK to perform best with only 3 layers and in the paper we show that our new model significantly outperforms it.\n\nIn earlier experiments we have also tested attention over different neighborhood sizes in combination with our model, but found that learning the attention weights is problematic and mostly overfits on the node itself. Please note that our personalized PageRank uses an *implicit* attention scheme on the different neighborhoods with weights α(1-α)^k (for the k-step neighborhood), which we have found to perform significantly better than any other weighting scheme we have tested. This implicit attention mechanism might be one reason why our model performs so well.\n\nWe have also experimented with increasing the number of layers in GAT (which uses attention for its node aggregation function), but were not able to successfully increase its number of layers beyond the original 2. \n\nFinally, different node aggregation functions were used by e.g. GraphSAGE, which also performs best when using no more than 2 layers and therefore shows the same problem of limited range.", "Dear authors, \n\nI would like to thank you for the detailed response(s) to the review(s you have received). 
I would also like to make a related comment: I agree with your comments overall, modulo any confusion there may have been. Your experimental setup was clear from the first time I read your nice paper; that is why I mentioned in one of my comments that \" While according to the authors’ categorization of the existing methods in the intro, LASAGNE falls under the “random walk” family of methods\". Perhaps I should have made it more clear in my review that personally, as a reviewer, I would have liked to see some basic classification baseline that is related to PPR; that was my main point and why I made two possible suggestions.\n\nI have upgraded my score. I want to clarify that my non-acceptance score, as my review title summarized from early on, was not due to this baseline comparison fact (besides, you compared with other state-of-the-art related methods), but due to the fact that I personally found the contribution to be (on the one hand *interesting* but on the other hand) limited from a novelty perspective. ", "The thrust behind this paper is that graph convolutional networks (GCNs) are constrained by construction\nto focus on small neighborhoods around any given node. Large neighborhoods introduce in principle\na large number of parameters (although, as the authors point out, weight sharing is an option to avoid this issue), \nplus, even worse, oversmoothing may occur. Specifically, Xu et al. (2018) showed that for a k-layer GCN one can \nthink of the influence score of a node x on node y as the probability that a walker that starts at x \nlands on y after k steps of random walk (modulo some details). \n\nTherefore, as k increases the random walk reaches its stationary distribution, forgetting any local information that is useful, \ne.g., for node classification. To avoid this problem, the authors propose the following: use personalized Pagerank\ninstead of the standard Markov chain of Pagerank. In PPR there is a restart probability, which allows \ntheir algorithm to avoid “forgetting” the local information around a walk, thus allowing for an arbitrary \nnumber of steps in their random walk. The authors define two methods, PEP and PEPa, based on PPR. The latter \nmethod is faster in practice since it approximates the PPR. \n\nA key advantage of the proposed method is the separation of the node embedding part from the propagation scheme. In this sense, \nfollowing the categorization of existing methods into three categories, PEP is a hybrid of message passing algorithms\nand random walk based node embeddings. The experimental evaluation tests certain basic properties of the proposed method. One interesting performance feature of \nPEP and PEPa is that they can perform well using few training examples. This is valuable especially when obtaining labeled\nexamples is expensive. Finally, the authors compare their proposed methods against state-of-the-art GCN-based methods. \n\nSome remarks follow. \n\n- The idea of using PPR for node embeddings has been suggested in recent prior work “LASAGNE: Locality and structure aware graph node embeddings” \nby Faerman et al. While according to the authors’ categorization of the existing methods in the intro, LASAGNE \nfalls under the “random walk” family of methods, the authors should compare against it. \n \n- Continuing the previous point, even simpler baselines would be desirable. How inferior is, for instance, \nan approach based on one-vs-all classification using the approximate personalized Pagerank node embedding and \nsupport vector machines? 
\n \n- Also, the authors mention “since our datasets are somewhat similar…”. Please clarify with respect to \nwhich aspects? Also, please use datasets that are different. For instance, see the LASAGNE paper for \nmore datasets that have different numbers of classes. \n\n- In the experiments the authors use two layers for fair comparison. Given that one of the advantages of the \nproposed method is the ability to have more layers without suffering from the GCN shortcomings \nwith large neighborhood exploration, it would be interesting to see an experiment where the number of layers is varied. \n\n", "I believe the reviewer here meant \"substantial and practically meaningful\" and not \"statistically significant.\" \n\nYour point about graph diameter is a good one. However, I am wondering if you can elaborate a bit on your argument in section 2 where you say:\n\n\"There are essentially two reasons why a message passing algorithm like GCN can’t be trivially expanded to use a larger neighborhood. First, aggregation by averaging causes oversmoothing if too many layers are used. It, therefore, loses its focus on the local neighborhood (Li et al., 2018). Second, most common aggregation schemes use learnable weight matrices in each layer. Therefore, using a larger neighborhood necessarily increases the depth and number of learnable parameters of the neural network (the second aspect can be circumvented by using weight sharing, which is typically not the case, though).\"\n\nIt seems fine to use weight sharing to deal with the second issue and I believe it isn't that uncommon. However, the oversmoothing issue could be a larger problem. Couldn't this be dealt with using attention-like mechanisms or different aggregation functions like max instead of sum (or intermediate functions)?\n\nAn average diameter of 10, the largest for datasets you explore, might not be enough to be problematic. Keeping in mind that I have not carefully read the paper, only skimmed it, can you succinctly summarize what evidence you have that limited range is an important issue in practice? I agree with the premise that it could be (because tying network depth or recurrent sequence length to neighborhood size is somewhat arbitrary), but I am wondering how best to demonstrate this is an issue and your approach is a successful solution on an important problem of practical interest.", "Thank you for your quick response!\n\nIf we understand you correctly, all your points above are referring to the study of larger graphs to ensure a large diameter (since, as mentioned in your first comment, a large diameter requires more propagation steps). Note, however, that the graph diameter usually shrinks with graph size (see e.g. Leskovec 2005). Thus, instead of studying even larger graphs one should analyze graphs with sufficiently large diameter. Indeed, the graphs we have already studied in our paper have an average diameter between 5 and 10 (see Table 1 of the revised version). Thus, a few GCN layers cannot cover the entire graph.\n \nOur experiments further show that denser graphs with a smaller diameter (e.g. Microsoft Academic) require a higher alpha (see Figure 5). Your discussion actually prompted us to adjust alpha on this dataset to better reflect the graph’s underlying characteristics (see Section 6 of the revised version).\n\nFurthermore, we are not sure what exactly you mean by ‘significant’ -- and why you have the impression that our results are not significant. 
In our paper and comments we use the term significant in the mathematical sense of statistical significance. The results clearly show that our method's improvements are significant with a p-value of 0.05, as we have shown in our rigorous evaluation (for small and large graphs as well as graphs with different diameters).", "Thanks for your reply!\n\nTo reiterate my questions:\n\n1) The graph with ~10k nodes would be the limit for your exact algorithm, as the results are missing in Table 2. But since you have the approximation with power-iteration-like layers, it would be better if you could target large graphs. \n\n2) And I expect your algorithm would benefit more on large graphs. This is the case where the pagerank could be more effective in propagating information than parameterized message passing operators. So that's why it is important to do large-scale experiments to show the truly 'significant' gains. \n\n3) Here are several good large datasets you may want to take a look at: https://snap.stanford.edu/data/", "Thank you for your review and feedback!\n\nYou are right, nothing prevents the model from using the standard transition matrix. During model development, however, we have found that the added self-loops of the GCN-matrix are beneficial to performance. The symmetrical normalization actually doesn't make any difference in the limit k->infinity. However, we found this style of normalization to be beneficial for the finite-step approximation. 
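For concreteness, this finite-step approximation boils down to a short power iteration of the following form (a sketch with illustrative names, not our released code):

```python
import torch
import torch.nn.functional as F

def propagate(A_hat, H, alpha=0.1, K=10):
    """Approximate personalized-PageRank propagation.

    A_hat: normalized adjacency with self-loops, sparse (n, n)
    H:     local predictions f_theta(X), dense (n, c)
    """
    Z = H
    for _ in range(K):
        # the alpha-weighted teleport back to the local predictions keeps
        # the iteration from collapsing to the global distribution
        Z = (1 - alpha) * torch.sparse.mm(A_hat, Z) + alpha * H
    return F.softmax(Z, dim=1)
```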
", "Thank you for your review and feedback!\n\nThe connection to the GNN-framework is certainly interesting and we’ve added it in the revised version of the paper (in Section 3, after introducing APPNP). However, our main contribution is not the usage of fixed-point iterations for node classification, which has already been used e.g. in label propagation and belief propagation algorithms. Our contribution is the improvement of GCN-like models by solving the limited range problem through the development and thorough evaluation of an end-to-end trained model utilizing one specific fixed-point iteration.\n\nAs you correctly noticed, the exact model is not applicable to larger data -- this is exactly the reason why we have developed its approximation. The discussion can be found under \"efficiency analysis\" in Section 3. We have edited the experimental section to make this more clear. Furthermore, we would like to highlight that we have already performed an analysis on large graphs. As shown in Table 1, our experimental evaluation includes two graphs with 20k nodes, which follows the suggestion you gave (>10k nodes).\n\nPlease note that we have already compared our model to jumping knowledge networks (JK), which is similar to the GNN that uses proper gating/skip connections you suggested. As we show in the experimental section, we significantly outperform this model.\n\nYou state that we show \"some marginal gains\". However, we show that our results are significant. Previous methods have reported “large” gains that actually were not statistically significant and vanish when thoroughly evaluated, as we show in the paper. We paid a lot of attention to performing a fair comparison and a rigorous statistical analysis of our results, which shows that we significantly outperform previous models. The different evaluation may make the improvements seem smaller. But in fact they are larger than those reported in previous, less careful evaluations. We have edited the section to further clarify this. Furthermore, we’ve included a reference to the work by Dai et al.\n", "Thank you for your review and feedback!\n\nWe want to clarify that the principle and task performed by LASAGNE are fundamentally different from ours. The LASAGNE method learns individual node embeddings in an unsupervised setting. Our goal is not to learn individual node embeddings but to learn a transformation from attributes to class labels in the semi-supervised setting, as graph convolutional network (GCN)-like models do. Moreover, LASAGNE only considers structural information. Generally, it has been shown that approaches that consider both structure and attributes outperform methods that only consider the structure (see e.g. Kipf & Welling 2017). Therefore, we only compare with methods that consider both, but we added a reference to LASAGNE in the paper.\n\nWe feel that this confusion was due to a bad framing of our model. To make things clearer we have decided to rename the model and replace the term “embedding” with “prediction” in the revised version (see also our general comment).\n\nWe cannot run the proposed baseline, since as we clarified above we do not learn any personalized pagerank embeddings to begin with. However, we do already include a comparatively simple baseline, which is the bootstrapped Laplacian feature propagation. This method propagates features in a similar way as we do and then uses a one-vs-all classifier. We significantly outperform this baseline.\n\nIn the revised version of the paper we clarified that the datasets are similar in that they contain bag-of-words features and use scientific networks. However, these graphs have very different numbers of nodes, edges, features, and classes, and different topology, as shown in Table 1. The datasets you suggested from the LASAGNE paper are not suitable for the kind of semi-supervised classification we consider since they do not contain node attributes.\n\nThank you for suggesting the interesting experiment of varying neural network depth! The investigated datasets do not benefit from deeper networks. You can find the results in Figure 11 of the updated version of the paper.", "Dear reviewers, dear commenters,\nWe feel that the term \"embedding\" that we used in our work (and paper’s title) might be a source of confusion, which is why we have decided to replace it with “prediction” and rename the model. We want to clarify that we do NOT learn individual node embeddings as done in node embedding methods. We propagate the predictions as part of the end-to-end trained model. Please keep in mind that we did NOT change any part of the model except for the name.", "This paper proposed a variant of graph neural networks, which adds additional pagerank-like propagations (with constant aggregation weights), in addition to the normal message-passing-like propagation layers. Experiments on some benchmark transductive node classification tasks show some empirical gains.\n\nUsing more propagations with constant aggregation weights is an interesting idea to help propagate the information in a graph. However, this idea is not completely new. In the very first graph neural network [1], the propagation is done until convergence. If the operator in each layer is a contraction map, then according to the Banach Fixed Point theorem [2], a unique solution can be guaranteed. The constant operator used in this paper is thus a special case of this contraction map.\n\nAlso, the closed form solution in (3) is not practical. 
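(For reference, assuming the paper follows the standard personalized-PageRank formulation, so the exact notation of (3) may differ, the closed form in question has the shape

```latex
Z \;=\; \operatorname{softmax}\!\left( \alpha \left( I_n - (1-\alpha)\,\hat{A} \right)^{-1} H \right),
\qquad H = f_{\theta}(X),
```

i.e., it involves the inverse of a dense n-by-n matrix, which is exactly what rules it out at scale.)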
It may not be suitable for large graphs (e.g., graphs with >10k nodes). And that’s why this approach is not suitable for the Pubmed and Microsoft datasets. PEP_A is more practical. However, in this case I’m curious how it would compare with a GNN having the same number of layers, but with proper gating/skip connections like ResNet. \n\nThe experiments show some marginal gains on the small graphs. However, I think it would be important to test on large graphs. Since small graphs typically have a small diameter, several GNN layers would already cover the entire graph, and the additional propagation done by pagerank here might not be super helpful. \n\nFinally, I think the authors should properly cite another relevant paper [3], which uses fixed point iteration to help propagate the local information. \n\n[1] Scarselli et al., “The Graph Neural Network Model”, IEEE Transactions on Neural Networks, 2009\n[2] Mohamed A. Khamsi, An Introduction to Metric Spaces and Fixed Point Theory\n[3] Dai et al., Learning Steady-States of Iterative Algorithms over Graphs, ICML 2018", "This paper proposes a GCN variant that addresses a limitation of the original model, where the embedding is propagated in only a few hops. The architectural difference may be explained as follows: GCN interleaves the individual node feature transformation and the single-hop propagation, whereas the proposed architecture first transforms the node features, followed by a propagation with an (in)finite number of hops. The propagation in the proposed method follows personalized PageRank, where in addition to following direct links, there is a nonzero probability of jumping to a target node.\n\nI find the idea interesting. The experiments are comprehensive, covering important points including data split, training set size, number of hops, teleport probability, and ablation study. Two interesting take-home messages are that (1) GCN-like propagation without teleportation leads to degrading performance as the number of hops increases, whereas propagation with teleportation leads to converging performance; and (2) the best-performing teleport probability generally falls within a narrow range.\n\nQuestion: The current propagation approach uses the normalized adjacency matrix proposed by GCN, which is, strictly speaking, not the transition matrix used by PageRank. What prevents the model from using the transition matrix? Note that this matrix naturally handles directed graphs.\n" ]
[ -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, 5, 7 ]
[ -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "ryl2um9KJV", "HkxoIiqdk4", "HkeSQM8Oa7", "iclr_2019_H1gL-2A9Ym", "B1xn8s4o6m", "HJg43sI_6Q", "BkxnHzUdaQ", "Bkxy1bhPnX", "S1e-m_U5nX", "SygzJkminQ", "iclr_2019_H1gL-2A9Ym", "iclr_2019_H1gL-2A9Ym", "iclr_2019_H1gL-2A9Ym" ]
iclr_2019_H1gMCsAqY7
Slimmable Neural Networks
We present a simple and general method to train a single neural network executable at different widths (number of channels in a layer), permitting instant and adaptive accuracy-efficiency trade-offs at runtime. Instead of training individual networks with different width configurations, we train a shared network with switchable batch normalization. At runtime, the network can adjust its width on the fly according to on-device benchmarks and resource constraints, rather than downloading and offloading different models. Our trained networks, named slimmable neural networks, achieve similar (and in many cases better) ImageNet classification accuracy than individually trained models of MobileNet v1, MobileNet v2, ShuffleNet and ResNet-50 at different widths respectively. We also demonstrate better performance of slimmable models compared with individual ones across a wide range of applications including COCO bounding-box object detection, instance segmentation and person keypoint detection without tuning hyper-parameters. Lastly we visualize and discuss the learned features of slimmable networks. Code and models are available at: https://github.com/JiahuiYu/slimmable_networks
accepted-poster-papers
This paper proposes a method that creates neural networks that can run under different resource constraints. The reviewers have a consensus to accept. The pros are that the paper is novel, provides a practical approach to adjust the model for different computational resources, and achieves a performance improvement on object detection. One concern from reviewer 2 and another public reviewer is the inconsistent performance impact on classification/detection (performance improvement on detection, but performance degradation on classification). Besides, the numbers reported in Table 1 should be confirmed: MobileNet v1 on Google Pixel 1 should have less than 120ms latency [1], not 296 ms. [1] Table 4 of https://arxiv.org/pdf/1801.04381.pdf
train
[ "Skxg-ZFjkE", "BJxHanYjJV", "S1g9rv-fJN", "SyeIKWl11N", "Byg13FQjRm", "HJgcCEmjCQ", "rkgxjlo9Am", "rylzSkDA6X", "r1xT_6ICTX", "HJe6pnUA6m", "HyeH_nUAaX", "rkgg7sI0TQ", "rkxP-cICpm", "ryl2OdD8p7", "r1gelBGgTm", "rklrgxdT37", "BJez1UBa27", "H1lytDzo2Q", "Byg-61r4n7" ]
[ "public", "author", "author", "public", "author", "public", "official_reviewer", "author", "author", "author", "author", "author", "author", "public", "public", "official_reviewer", "official_reviewer", "official_reviewer", "public" ]
[ "Dear authors,\n\nThis is a very interesting work. And I think it is closely related to the mutual learning frameworks [1,2], where the core idea is also to jointly train several models for improving the performance of training each model separately. The main difference is with/without weight sharing, which is one of the contributions of the paper. And I recommend you to cite these works in the paper. \n\n1: Zhang et al. \"Deep Mutual Learning\", CVPR2018. http://openaccess.thecvf.com/content_cvpr_2018/papers/Zhang_Deep_Mutual_Learning_CVPR_2018_paper.pdf\n\n2: Zhuang et al. \"Towards Effective Low-bitwidth Convolutional Neural Networks\", CVPR2018\nhttp://openaccess.thecvf.com/content_cvpr_2018/papers/Zhuang_Towards_Effective_Low-Bitwidth_CVPR_2018_paper.pdf\n", "Thanks for your interest in our work! We will add the citation once revision period is re-opened. ", "Thanks for your interest in our work! However, we can not fully agree with your suggestion. Our reasons are summarized below:\n\n1. In your referenced paper [1], the major focus is to compress (Section 3) and sparsify/pruning filters, channels and layers with scheduling (Section 4), and get a \"nested sparse networks\". The resulted network can be used for model compression, knowledge distillation and hierarchical classification (Section 5).\nIn our work, the focus is not to compress, sparsify or pruning, but to simply train a single neural network executable at different width, with the spotlight on the accuracy/performance of standard image recognition benchmarks (ImageNet classification, COCO object detection, instance segmentation, keypoints detection). While the motivation is similar, our focus, methodology, analysis and experimental results are completely different.\n\n2. Moreover, the only related experiment, hierarchical classification, is also different to our experiments and standard benchmarks. In your referenced paper [1] in Section 5.3:\n\n\"We also provide experimental results on the ImageNet (ILSVRC 2012) dataset. From the dataset, we collected a subset, which consists of 100 diverse classes including natural objects, plants, animals, and artifacts.\"\n\nIn efficient deep learning, none of MobileNet v1 [2], MobileNet v2 [3], ShuffleNet [4] evaluate proposed methods on Cifar-10, Cifar-100 or sub-sampled \"100-class ImageNet\". Many methods that work on toy dataset can not generalize to real scenarios in the topic of efficient models, thus we think challenging settings like standard 1000-class ImageNet is essential to make the work solid and to ensure fair comparisons. Since the motivation is similar, we will be happy to add a citation in related work. We will always be happy to highlight and add comparison to any work that is related and has standard benchmark results.\n\n\n[1] Kim, Eunwoo, Chanho Ahn, and Songhwai Oh. \"Learning Nested Sparse Structures in Deep Neural Networks.\" arXiv preprint arXiv:1712.03781 (2017).\n[2] Howard, Andrew G., et al. \"Mobilenets: Efficient convolutional neural networks for mobile vision applications.\" arXiv preprint arXiv:1704.04861 (2017).\n[3] Sandler, Mark, et al. “MobileNet v2: Inverted residuals and linear bottlenecks: Mobile networks for classification, detection and segmentation.\" arXiv preprint arXiv:1801.04381 (2018).\n[4] Zhang et al. 
Shufflenet: An extremely efficient convolutional neural network for mobile devices. arXiv preprint arXiv:1707.01083, 2017.", "This paper introduces a deep neural network that provides different inference paths with respect to different widths for accuracy-efficiency trade-off at test time, but the concept has already been introduced in prior work \n[Kim et al., NestedNet: Learning Nested Sparse Structures in Deep Neural Networks, CVPR, 2018]\nwhich suggests a nested network to produce multiple different inference paths with different widths (they call this \"channel scheduling\", which is one of their strategies to allow multiple different sparse networks).\n\nAside from the missing related work, your paper still has value in terms of its different methodology as well as promising experimental results including detection and semantic segmentation.\n\nIt would be good to not only introduce additional related work but also to make the contribution/positioning clear.", "Thanks for your interest in our work! We will add the citation once the revision period is re-opened. Code will be released soon and we warmly welcome the community to work together on related topics!", "Thanks for your review efforts! We have addressed all questions below:\n\n1. We aim to train a single neural network executable at different widths. We find slimmable networks achieve better results especially for small models (e.g., 0.25x) on detection tasks. We have mentioned that it is probably due to implicit distillation, richer supervision and better learned representations (since detection results are based on pre-trained ImageNet representations). We try to avoid strong claims of any deep reason because none of them is strictly proved by us yet. Explaining the deep reasons for the improvements is not the motivation or the focus of this paper, but we are actively exploring these questions!\n\n2. In fact, on average the image classification results are also improved (0.5 better top-1 accuracy in total), especially for small models. After submission, we have improved the accuracy of S-ShuffleNet due to an additional ReLU layer (our implementation bug) between the depthwise convolution and the group convolution (Figure 2 of ShuffleNet [3]). Our models will be released.\n\n3. Thanks for the good suggestion! Currently we conduct detection experiments mainly on the Detectron [1] and MMDetection [2] frameworks, where ResNet-50 is among the most efficient models. We do value this suggestion and will try to implement mobilenet-based detectors.
Besides, all code (including classification and detection) and pre-trained models will be released soon, and we warmly welcome the community to work on this together.\n\nThanks!\n\n\n[1] https://github.com/facebookresearch/Detectron\n[2] https://github.com/open-mmlab/mmdetection\n[3] Zhang et al. Shufflenet: An extremely efficient convolutional neural network for mobile devices. arXiv preprint arXiv:1707.01083, 2017.", "Thanks for your review efforts! We have addressed all three questions below:\n\n1. As mentioned in Section 3.3, the only modification is to accumulate all gradients from different switches. It means that the optimizer (SGD for image recognition tasks) is exactly the same as for training individual models (same momentum, etc.). The only difference is the value of the gradient for each parameter. In Algorithm 1, we follow a pytorch-style API and use optimizer.step() to indicate applying gradients. We have not observed any difficulty in the optimization of slimmable networks using the default optimizer in Algorithm 1.\n\n2. There is no \"unbalanced gradient\" problem in training slimmable networks (though it may seem so). The parameters of 0.25x seem to have \"more gradients\", but in the forward view, these parameters of 0.25x are also used four times, in Net 0.25x, 0.5x, 0.75x and 1.0x. It means the parameters in 0.25x are more important for the overall performance of slimmable networks. In fact, back-propagation is strictly based on forward feature propagation. In the forward view, as mentioned in Section 3.3, our primary objective in training a slimmable network is to optimize its accuracy averaged over all switches.\n\n3. Our reported ResNet-50 accuracy is correct (23.9 top-1 error). We evaluate single-crop testing accuracy instead of 10-crop, following all our baselines. The ResNet-50 single-crop testing accuracy is publicly reported in the ResNeXt paper (Table 3, 1st row) [1], released code [2] and many other publications. Our ResNet-50 has the same implementation as the official PyTorch pre-trained model zoo [3], where the top-1 error is also 23.9 instead of <21% (in fact, ResNet-152 still has a >21% single-crop top-1 error rate).\n\nWe sincerely hope the rating can be reconsidered if it was affected by the above questions. Thanks for your time and review efforts!\n\n\n[1] Xie, Saining, et al. \"Aggregated residual transformations for deep neural networks.\" Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on. IEEE, 2017.\n[2] https://github.com/facebookresearch/ResNeXt\n[3] https://pytorch.org/docs/stable/torchvision/models.html", "Thanks for your positive review and encouragement! We also believe the discovery of slimmable networks opens up possibilities in many related fields including model distillation, network compression and better representation learning. We are actively exploring these topics and hope this submission may contribute to the ICLR community.", "Thanks for your interest in our work! However, we cannot agree with your comments. We have addressed your questions and concerns below:\n\n1. As introduced in Sec. 1 and concluded by all reviewers, this work aims to \"train a single neural network executable at different widths for different devices\". \nWe never claim \"training runtime is the key problem\", and our focus is not on \"training a single network\" but on \"a single network executable at different widths\". The testing runtime and flexible accuracy-efficiency trade-offs are what we care about. \n2.
In Table 3 for ImageNet classification, the top-1 accuracy is actually improved by 0.5 in total.\n\n3. Although all experiments are conducted with the same settings for both individual and slimmable models, we also noticed that the reproduced performance of individual models was lower than in the original papers. A potential reason is included in Appendix B of the first submitted version (the original *-RCNN papers use ResNet-50 with strides on the 1x1 convolution, while we follow the officially implemented PyTorch ResNet-50 with strides on the 3x3). After submission, we found a recently released detection framework, MMDetection [1], that has settings for the pytorch-style ResNet-50. Thus we have conducted another set of detection experiments and included the results in Appendix C (the same mAP is reproduced, for example, Faster-R-50-FPN-1x with 36.4 mAP).\nAnd our conclusion still holds: on detection tasks, slimmable models have better performance than individually trained models, especially for small models. Specifically, for 0.25x models, the slimmable network gains 2.0+ mAP, which is indeed significant. For 1.0x models, slimmable models also gain 0.4+ and 0.7+ mAP for Faster-RCNN and Mask-RCNN, respectively. We will fully release our code (both training and testing) and pre-trained models on both the ImageNet classification and COCO detection sets. \n\n4. Image classification trains models from scratch, while COCO detection fine-tunes pre-trained ImageNet models. The improvement on detection may be due to the better learned representations of slimmable models on ImageNet when transferred to COCO tasks. We have also mentioned in our submission that it is probably due to implicit distillation and richer supervision. The reason behind the improvements is beyond the motivation of this submission and requires future investigation. We try to avoid strong claims of any deep reason because none of them is strictly proved by us yet.\n\nWe sincerely thank you for posting these concerns and we will always try our best to address them. Please let us know if you have further questions or concerns. Thanks!\n\n\n[1] https://github.com/open-mmlab/mmdetection", "Thanks for your interest in our work. Our claim is correct: at runtime, reducing depth cannot reduce the memory footprint.\n\nFor a simple example, consider a layer-by-layer network stacking the same convolution layers: the output of layer N can always be placed into the memory of its input after computation, and fed into the next layer (N+1), because at runtime there is generally no need to store the features of previous layers (in training, they are required for gradient computation).\n\nA good reference is the MobileNet v2 paper [1], Section 5.1, memory efficient inference. It shows that the memory footprint can be simplified to: M = max_{layer_i \\in all layers} (memory_input of layer_i + memory_output of layer_i).\n\nThe memory footprint M is a MAX operation over all layers, instead of a SUM, during inference.\n\n\n[1] Sandler, Mark, et al. “MobileNet v2: Inverted residuals and linear bottlenecks: Mobile networks for classification, detection and segmentation.\" arXiv preprint arXiv:1801.04381 (2018).", "Thanks for your interest in our work! We have added the citation.", "It is claimed in the 3rd paragraph of the introduction that \n\n \"Nevertheless, in contrast to width (number of channels), reducing depth cannot reduce memory footprint which is commonly constrained during runtime.\"\n\nHowever, in my understanding, the memory reduces linearly when reducing depth for a deep neural network.
Could you please explain more on this?\n\n", "The motivation to train one model and deploy it on multiple devices is quite interesting. However, the experimental results are not convincing. \n\nIn Table 3, most of the S-networks reduce performance compared to their individual counterparts. It's not cumbersome to train an individual slimmed model that has higher accuracy on a portable device and the same FLOPs as the S-model, since training runtime is not the key problem given the increasing amount of computational power.\n\nIn Table 5, the baselines of R-50-FPN-1× are much lower than those reported in the original papers of Faster R-CNN and Mask R-CNN. In previous work, the box and mask AP of Mask+R-50-FPN-1× are 37.3 and 33.7, while the box AP for Faster+R-50-FPN-1× is 36.4. These results are already comparable and even better than the S-networks. The same problem applies to the keypoints. Therefore, it is unclear whether the S-model would still bring a performance gain when the standard baselines are employed.\n\nAnother concern is that the S-model seems to degrade performance on ImageNet, as the paper mentioned \"a slimmable network is expected to have lower performance than individually trained ones intuitively\". But it turns out that the S-model pretrained on ImageNet shows a large improvement when fine-tuned on detection and segmentation. This is counterintuitive.", "The idea is really interesting. One only needs to train and maintain a single model, but can use it on different platforms with different computational power.\n\nAccording to the experimental results on COCO detection, the S-version models are much better than the original versions (e.g., Faster-0.25x, from 24.6 to 30.0). The improvement is huge to me. However, the authors do not explain any deep reasons.\n\nFor classification, there is a slight performance drop instead of a large improvement, which is also hard to understand. \n\nFor detection, experiments on depthwise-convolution-based models (such as MobileNet and ShuffleNet) are suggested to make this work more solid and meaningful.\n\n", "This paper presents a straightforward-looking approach for creating neural networks that can run under different resource constraints, e.g. a cheaper computation with a lower-quality solution versus an expensive high-quality solution, while all the networks share the same filters. The idea is to share the filters of the cheapest network with those of the larger, more expensive networks and train all those networks jointly with weight sharing. One important practical observation is that the batch-normalization parameters should not be shared between those filters in order to get good results. However, the most interesting and surprising observation, which is the main novelty of the work, is that even the highest-quality vision network gets substantially better with this training methodology compared to being trained alone without any weight sharing with the smaller networks, when trained for object detection and segmentation purposes (but not for recognition). This is a highly unexpected result and provides a new, unanticipated way of training better segmentation models. It is especially nice that the paper does not pretend that this phenomenon is well understood but leaves its proper explanation for future work. I think a lot of interesting work is to be expected along these lines.", "This paper trains a single network executable at different widths. This is implemented by maintaining separate BN parameters and statistics for different widths.
The problem is well-motivated and the proposed method can be very helpful for the deployment of deep models to devices with varying capacity and computational ability.\n \nThis paper is well-written and the experiments are performed on various structures. Still, I have several concerns regarding the algorithm.\n1. In Algo 1, while gradients for convolutional and fully-connected layers are accumulated for all switches before the update, how are the parameters for different switches updated?\n2. In Algo 1, the gradients of all switches are accumulated before the update. This may result in implicitly unbalanced gradient information, e.g. the connections in the 0.25x model in Figure 1 have gradient flows from all four different switches, while the right-most 0.25x connections in the 1.0x model have only one gradient flow from the 1.0x switch. Will this unbalanced gradient information increase optimization difficulty, and how is it solved?\n3. In the original ResNet paper, https://arxiv.org/pdf/1512.03385.pdf, the top-1 error of ResNet-50 is <21% in Table 4. The number reported in this paper (Table 3) is 23.9. Where does the difference come from? ", "Nice work! I had a paper published at CVPR 2018 on training convolutional networks that support instant and adaptive accuracy-efficiency trade-offs at runtime, via early downsampling rather than network slimming. My paper also includes a similar technique of using independent BatchNorm parameters (just means and stds in my paper, whereas you \"unshare\" all of the BatchNorm parameters) for different trade-off configurations. \n\nI'd appreciate it if you would include a reference to it - \"Stochastic Downsampling for Cost-Adjustable Inference and Improved Regularization in Convolutional Networks\". Thanks.\n" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 9, 7, -1 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4, -1 ]
[ "iclr_2019_H1gMCsAqY7", "Skxg-ZFjkE", "SyeIKWl11N", "iclr_2019_H1gMCsAqY7", "HJgcCEmjCQ", "iclr_2019_H1gMCsAqY7", "r1xT_6ICTX", "rklrgxdT37", "H1lytDzo2Q", "BJez1UBa27", "r1gelBGgTm", "ryl2OdD8p7", "Byg-61r4n7", "iclr_2019_H1gMCsAqY7", "iclr_2019_H1gMCsAqY7", "iclr_2019_H1gMCsAqY7", "iclr_2019_H1gMCsAqY7", "iclr_2019_H1gMCsAqY7", "iclr_2019_H1gMCsAqY7" ]
iclr_2019_H1gR5iR5FX
Analysing Mathematical Reasoning Abilities of Neural Models
Mathematical reasoning---a core ability within human intelligence---presents some unique challenges as a domain: we do not come to understand and solve mathematical problems primarily on the back of experience and evidence, but on the basis of inferring, learning, and exploiting laws, axioms, and symbol manipulation rules. In this paper, we present a new challenge for the evaluation (and eventually the design) of neural architectures and similar systems, developing a task suite of mathematics problems involving sequential questions and answers in a free-form textual input/output format. The structured nature of the mathematics domain, covering arithmetic, algebra, probability and calculus, enables the construction of training and test splits designed to clearly illuminate the capabilities and failure-modes of different architectures, as well as evaluate their ability to compose and relate knowledge and learned processes. Having described the data generation process and its potential future expansions, we conduct a comprehensive analysis of models from two broad classes of the most powerful sequence-to-sequence architectures and find notable differences in their ability to resolve mathematical problems and generalize their knowledge.
accepted-poster-papers
Pros: - A useful and well-structured dataset which will be of use to the community - Well-written and clear (though see Reviewer 2's comment concerning the clarity of the model description section) - Good methodology Cons: - There is a question about why a new dataset is needed rather than a combination of previous datasets and also why these datasets couldn't be harvested from school texts directly. Presumably it would've been a lot more work but please address the issue in your rebuttal. - Evaluation: Reviewer 3 is concerned that the evaluation should perhaps have included more mathematics-specific models (a couple of which are mentioned in the text). On the other hand, Reviewer 2 is concerned that the specific choices (e.g. "thinking steps") made for the general models are non-standard in seq-2-seq models. I haven't heard about the thinking step approach but perhaps it's out there somewhere. It would be helpful generally to have more discussion about the reasoning involved in these decisions. I think this is a useful contribution to the community, well written and thoughtfully constructed. I am tentatively accepting this paper with the understanding that you will engage directly with the reviewers to address their concerns about the evaluation section. Please in particular use the rebuttal period to focus on the clarity of the model description and the motivation for the particular models chosen. Also consider adding additional experiments to allay the concerns of the reviewers.
train
[ "r1eF1-xjh7", "SylO9VV507", "S1gN_2Yl0m", "HkxKS2FxR7", "BJeVJhFxRX", "H1goRfkKnm", "SJeDlWsjj7" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper presents a new synthetic dataset to evaluate the mathematical reasoning ability of sequence-to-sequence models. It consists of math problems in various categories such as algebra, arithmetic, calculus, etc. The dataset is designed carefully so that it is very unlikely there will be any duplicate between train/test split and the difficulty can be controlled. Several models including LSTM, LSTM + Attention, Transformer are evaluated on the proposed dataset. The result showed some interesting insights about the evaluated models. The evaluation of mathematical reasoning ability is an interesting perspective. However, the un-standard design of the LSTM models makes it unclear whether the comparisons are solid enough. \n\nThe paper is relatively well-written, although the description of the neural models can be improved. \n\nThe generation process of the dataset is well thought out. The insights from the analysis of the failure cases are intriguing, but it also points out that the neural networks models are not really performing mathematical reasoning since the generalization is very limited. \n\nOne suggestion is that it might be useful to also release the structured (parsed) form besides the freeform inputs and outputs, for analysis and for evaluating structured neural network models like the graph networks. \n\nMy main concerns are about the evaluation and comparison of standard neural models. The use of “blank inputs (referred to as “thinking steps”)” in “Simple LSTM” and “Attentional LSTM\" doesn’t seem to be a standard approach. In the attentional LSTM, the use of “parse LSTM” is also not a standard approach in seq2seq models and doesn’t seem to work well in the experiment (similar result to “Simple LSTM\"). I think these issues are against the goal of evaluating standard neural models on the benchmark and will raise doubts about the comparison between different models. \n\nWith some improvements in the evaluation and comparison, I believe this paper will be more complete and much stronger. \n\ntypo:\npage 3: “freefrom inputs and outputs” -> “freeform inputs and outputs”\n", "Thanks for the response. The description of the models is indeed improved. I have updated my ratings accordingly. I think a structured form (no matter which exact form is used) will be generally easier to use than tailoring the code, but I leave this decision to the authors. ", "Thank you for your suggestion of increasing the discussion of the results. We’ve expanded the discussion of the results as much as possible. For now we would prefer to keep the actual bar plot of individual module performance in the appendix in the interest of space, and keep the dataset description in the main part, as this was appreciated by the other two reviewers.\n\nAs you say, the ability to generalize is very important in mathematics. The paper contains an extrapolation test set to do exactly this - these include generalization tests on larger numbers, longer sequences, more function compositions (which is similar to having more variables), etc (see Appendix B for more details). We haven’t attempted to be exhaustive in types of generalization, but the extrapolation test set can be extended in the future to allow for this.\n\nNone of the modules currently include “unsolvable” as an answer, but this is something that would definitely fit within the framework. 
(As an aside: there would be no need to have a special character; we could simply select some consistent word like “Unsolvable”; neural models trained so far seem to have no problem outputting “True” or “False”.) More generally, there are many further types of problems that could be included in the dataset - but we hope for now that the current range is comprehensive in the types of reasoning required for school-level mathematics. We always welcome contributions to the dataset that extend the range of questions in a consistent manner.", "Thank you for pointing out the other datasets in algebraic word reasoning. We’ve included these in an expanded discussion of related work, with discussion of how they relate to the current dataset. Please let us know if we have missed other papers.\n\nYour proposal of combining multiple extant problem sets is a good idea. We’d want to ensure the combined datasets have a common format (e.g., the same unambiguous freeform text format for reasons of transferability, etc., as argued in the paper), and there are interesting problem types occurring in other datasets (such as logical entailment or boolean satisfiability) that we haven’t yet included. We may in the future extend the dataset to include these other problem types if the current ones become solved, and of course we solicit contributions (in the form of generation code) to the dataset.\n\nWe likely could not use workbooks, etc., as a source for problems without significant investment, since obtaining legal permission to redistribute copyrighted problems found in these books would probably be hard and/or expensive. Having said that, it is definitely important to ensure the problems remain grounded in real-life problems (thus our small list of real-life exam questions). This was the motivation for testing trained models against “real life” questions occurring in school-level examinations; these questions are not intended to be a primary benchmark (with more questions and detailed grades), but rather simply a rough indication of whether training models to answer school-level questions could be achievable.\n\nOn the distribution of the sampled answer (and the related question of how difficulty levels are determined), these are great questions. For some modules with two output choices (e.g., True, False), we can simply split the answers 50-50. But in general, the answer distribution depends on the module, with hand-tuning to ensure the (question, answer) pair is of a reasonable difficulty level as judged by humans. In more detail: as mentioned in the paper, we want to achieve upper bounds on the maximum probability that any single (question, answer) pair is sampled; thus if we sample the answer from a set of N possible answers, then to achieve a maximum probability p of a given question, the remaining choices made in generating the question must be from a set of size p/N. We roughly aim to pick N (depending on p) so that, conditioned on this, the question is as easy as possible; there is typically a hand-tuned sweet spot.\n\nOn evaluating general-purpose models only, we may have phrased this badly in the paper, and have updated it. We are definitely interested in any model that learns to do mathematics and symbolic reasoning, which would include more sophisticated models tailored towards doing mathematics (one could imagine models with working memory, etc.).
However, we discount models that already have their mathematics knowledge inbuilt rather than learnt (for example, this includes many of the models that occur in algebraic reasoning tasks, where the model learns to map the input text to an existing equation template, which is then solved by a fixed calculator). We test DNC (differentiable neural computer) and RMC (relational memory core) models, which arguably are more specialized for doing mathematics, since they have a slot-based memory that may be appropriate for storing intermediate results. However, these models obtained worse performance than the more general architectures, and we are not yet aware of models that are more tailored for doing mathematics that do not simply have their mathematics knowledge built-in and unlearnable; we hope the dataset will spur the development of new models along these lines.\n\nOn the number of thinking steps, in our earlier analysis we trained up to 150k steps (compared with 500k for the final performance reported in the paper), and observed the following interpolation test performances by number of steps: 39% (0 steps), 46% (1 step), 48% (2), 49% (4), 50% (8), 51% (16). We are re-running experiments now to confirm the final performances, which we can include in the final paper.\n", "Thank you for your detailed review.\n\nOn releasing a structured (parsed) form of the dataset: we agree that examining performance on structured input is a very useful exploration direction that can give insight into what effect parsing has on ease of training. We feel, however, that there’s no single canonical choice for the structure that may be suitable for all types of networks (e.g., tree networks, graph networks, etc.), or different levels of structure that aid the network to different amounts, from completely unstructured to tree-like structures that essentially determine the required order of calculation. For example, in the question type of “multiple function composition”, one could have a structure that lists the functions, and also the desired composition order; or one could actually have a tree structure with the functions already embedded in the correct composition order (which we suspect would be quite easy to learn models on). In lieu of this, we hope the released dataset source code will allow researchers to easily tailor the dataset to their specific problems and models.\n\nWe have rewritten the section describing the neural models, with clearer terminology, and the differences between the different models made much more explicit. Thank you for pointing this out, and please let us know if any parts are still unclear. The “attentional LSTM” model is just the standard encoder/decoder+attention architecture prevalent in neural machine translation, as introduced in “Neural machine translation by jointly learning to align and translate” (Bahdanau et al). However, we confusingly used the term “parser” instead of “encoder”, and we have fixed the description.\n\nOn running the decoding LSTM for a few steps before outputting the answer: we found that it was one of the few (relatively simple) architectural changes to the standard recurrent encoder/decoder setup that significantly helped performance (thus the performance of the standard architecture can be taken to be slightly worse than the numbers reported in the paper for the architecture with “thinking steps”), but we also realize that it is not a widespread architectural change. (Possibly the need for this is less in standard machine translation tasks.)
Since your review, we have also run experiments using the published architecture introduced in “Adaptive Computation Time for Recurrent Neural Networks” (Graves). This architecture has an adaptive number of “thinking” steps at every timestep dependent on the input, learnt via gradient descent. More specifically, we investigated the use of this for both the recurrent encoder and decoder (replacing the single fixed number of “thinking” steps at the start of the decoder). After some tuning, its test performance was still around 3% worse than the same architecture without adaptive computation time. We’ve updated the paper to mention this.\n\nPlease refer to the updated PDF of the paper to see these changes. We hope that you will agree that, with your kind feedback, the changes above strengthen the paper's claims and clarity, and that you are willing to reconsider your assessment on these grounds.", "Summary: This paper is about models for solving basic math problems. The main contribution is a synthetically generated dataset that includes a variety of types and difficulties of math problems; it is both larger and more varied than previous datasets of this type. The dataset is then used to evaluate a number of recurrent models (LSTM, LSTM+attention, transformer); these are very powerful models for general sequence-to-sequence tasks, but they are not explicitly tailored to math problems. The results are then analyzed and insights are derived explaining where neural models seemingly cope well with math tasks, and where they fall down. \n\nStrengths: I am happy to see the proposal of a very large dataset with a lot of different axes for measuring and examining the performance of models. There are challenging desiderata involved in building the training and test sets, and the authors have an interesting and involved methodology to accomplish these. The paper is very clearly written. I'm not aware of a comparable work, so the novelty here seems good.\n\nWeaknesses: The dataset created here is entirely synthetic, and the paper only includes one single small real-world case; it seems like it would be easy to generate a larger and more varied real-world dataset as well (possibly from the large literature of extant solved problems in workbooks). It would have been useful to compare the general models here with some specific math-problem-focused ones as well. Some details weren't clear to me. More in the comments below.\n\nVerdict: I thought this was generally an interesting paper that has some very nice benefits, but also has some weaknesses that could be resolved. I view it as borderline, but I'm willing to change my mind based on the discussion.\n \n \nComments:\n\n- One area that could stand to be improved is prior work. I'd like to see more of a discussion of *prior data sets* rather than papers proposing models for problems. Since this is the core contribution, this should also be the main comparison. For example, the EMNLP 2017 paper \"Deep Neural Solver for Math Word Problems\" mentions a 60K-problem dataset. A more extensive discussion will help convince the readers that the proposed dataset is indeed the largest and most diverse.\n\n- The authors note that previous datasets are often specific to one type of problem (e.g., single-variable equation solving). Why not then combine multiple types of extant problem sets? \n\n- The authors divide dataset construction into crowdsourcing and synthetic.
This seems incomplete to me: there are tens of thousands (probably more) of exercises and problems available in workbooks for elementary, middle, and high school students. These are solved, and only require very limited validation. They are also categorized by difficulty and area. Presumably the cost here would be to physically scan some of these workbooks, but this seems like a very limited investment. Why not build datasets based on workbooks, problem solving books, etc? \n\n- How are the difficulty levels synthetically determined?\n\n- When generating the questions, the authors \"first sample the answer\". What's the distribution you use on the answer? This seems like it dramatically affects the resulting questions, so I'm curious how it's selected.\n\n- The general methodology of generating questions and ensuring that no question is too rare or too frequent and the test set is sufficiently different---these are important questions and I commend the authors for providing a strong methodology.\n\n- I didn't understand the motivation for testing only very general-purpose models (this is described in Section 3). This is certainly a scientific decision, i.e., the authors are determining which models to use in order to determine the possible insights they will derive. But it's not clear to me why testing more sophisticated models that are tailored for math questions would *not* be useful. In fact, assuming that such methods outperform general-purpose models, we could investigate why and where this is the case (in fact the proposed dataset is very useful for this). On the other hand, if these specialized approaches largely fail to outperform general-purpose models, we would have the opposite insights---that these models' benefits are dataset-specific and thus limited. \n\n- It really would be good to do real-world tests in a more extensive way. A 40-question exam for 16 year olds is probably far too challenging for the current state of general recurrent models. Can you add some additional grades here, and more questions?\n\n- For the number of thinking steps, how does it scale up as you increase it from 0 to 16? Is there a clear relationship here?\n\n- The 1+1+...+1 example is pretty intriguing, and could be a nice \"default\" question!\n\n- Minor typo: in the abstract: \"test spits\" should be \"test splits\"\n", "This paper develops a framework for evaluating the ability of neural models at answering free-form mathematical problems. The contributions are i) a publicly available dataset, and ii) an evaluation of two existing model families, recurrent networks and the Transformer. \n\nI think that this paper makes a good contribution by establishing a benchmark and providing some preliminary results. I am biased because I once did exactly the same thing as this paper, although at a much smaller scale; I am thus happy to see such a public dataset. The paper is a reasonable dataset/analysis paper. Whether to accept it or not depends on what standard ICLR has towards such papers (ones that do not propose a new model/new theory).\n\nI think that the dataset generation process is well-thought-out. There are a large variety of modules, and trying not to generate either trivial or impossible problems is a plus in my opinion. The results and discussions in the main part of the paper are too light in my opinion; the average model accuracy across modules is not an interesting metric at all, although it does show that the Transformer performs better than recurrent networks.
I think the authors should move a portion of the big bar plot (too low resolution, btw) into the main text and discuss it thoroughly. Details on how to generate the dataset, however, can be moved into the appendix. I am also not entirely satisfied by using accuracy as the only metric; how about using something like beam search to build a \"soft\", secondary metric?\n\nOne other thing I want to see is a test set with multiple different difficulty levels. The authors try to do this with composition, which is good, but I am not sure whether that captures the real important thing - the ability to generalize, say learning to factorise single-variable polynomials and test it on factorising polynomials with multiple variables? And what about the transfer between these tasks (e.g., if a network learns to solve equations with both x and y and also factorise a polynomial with x, can it generalize to the unseen case of factorising a polynomial with both x and y)? Also, is there an option for \"unsolvable\"? For example, the answer being a special \"this is impossible\" character for \"factorise x^2 - 5\" (if your training set does not use \\sqrt, of course)." ]
[ 7, -1, -1, -1, -1, 6, 6 ]
[ 3, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2019_H1gR5iR5FX", "BJeVJhFxRX", "SJeDlWsjj7", "H1goRfkKnm", "r1eF1-xjh7", "iclr_2019_H1gR5iR5FX", "iclr_2019_H1gR5iR5FX" ]
iclr_2019_H1gTEj09FX
RotDCF: Decomposition of Convolutional Filters for Rotation-Equivariant Deep Networks
Explicit encoding of group actions in deep features makes it possible for convolutional neural networks (CNNs) to handle global deformations of images, which is critical to success in many vision tasks. This paper proposes to decompose the convolutional filters over joint steerable bases across the space and the group geometry simultaneously, namely a rotation-equivariant CNN with decomposed convolutional filters (RotDCF). This decomposition facilitates computing the joint convolution, which is proved to be necessary for the group equivariance. It significantly reduces the model size and computational complexity while preserving performance, and truncation of the bases expansion serves implicitly to regularize the filters. On datasets involving in-plane and out-of-plane object rotations, RotDCF deep features demonstrate greater robustness and interpretability than regular CNNs. The stability of the equivariant representation to input variations is also proved theoretically. The RotDCF framework can be extended to groups other than rotations, providing a general approach which achieves both group equivariance and representation stability at a reduced model size.
accepted-poster-papers
This paper builds on the recent DCFNet (Decomposed Convolutional Filters) architecture to incorporate rotation equivariance while preserving stability. The core idea is to decompose the trainable filters into a steerable representation and learn over a subset of the coefficients of that representation. Reviewers all agreed that this is a solid contribution that advances research into group-equivariant CNNs, bringing efficiency gains and stability guarantees, although these appear to be incremental with respect to the techniques developed in the DCFNet work. In summary, the AC believes this to be a valuable contribution and therefore recommends acceptance.
train
[ "BklM-VfchQ", "Skxq9HUa0X", "HkxDLKH5R7", "HyxEmhb7Am", "ByejiBuhT7", "ByxMFYg8aX", "SkeOE1Hc37" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer" ]
[ "This work extends on [1] by constructing CNN filters using Fourier-Bessel (FB) bases for rotation equivariant networks. Additionally to [1] it extends the process with using SO(2) bases which allow to learn combination of rotated FB bases and ultimately achieve good performance with less parameters than standard CNN networks thanks to filter truncation.\n\nIn general, this work is well written and shows interesting results. However it lacks context with regards to other existing works. For example [2] also uses steerable filters for achieving rotation equivariance, however with different steerable bases (rotation harmonics instead of FB). It would be useful to clarify why FB bases are more appropriate for truncation, eventually providing empirical evidence (even though rotation harmonics would probably need more parameters). Authors mention [2], however disregard it due to computational complexity, which would be the same if the rotation harmonics bases were truncated as well.\n\nSimilarly, this work is not strong in evaluating against existing methods. It provides evaluation of the vanilla group equivariant networks in a similar configuration, but due to design choices in the training and test set, it is not possible to compare it against other algorithms and other steerable bases such as those from [2]. This degrades the results slightly as it does not allow to verify the baseline results from other works.\n\nAdditionally, it would be useful to provide an ablation study which would show how important the bases in SO(2) are important for the model accuracy. This would allow to compare the results against the [1] as the FB filters are steerable as well (Equation 4).\n\nIt is hard to reach a final rating for this submission. On one hand, it can be seen as an incremental improvement of [1] for a new domain of tasks, without a thorough comparison against existing methods. On the other hand, the paper is well written and the results look promising - evaluation verifies that the algorithm performs well in multiple tasks with a fraction of parameters.\n\nConsidering that authors plan to release the source code and that this conference aims for publishing novel ideas (and the goal of this work is to achieve rotation equivariance with less parameters, which hasn't been tackled before), I am inclined towards acceptance of this paper, even though the experiments can be significantly improved.\n\nUnfortunately, I was not able to verify correctness of the provided proofs.\n\nAdditional minor issues:\n* The paper does not specify what FB bases exactly are being used (such as in [table 1;1]), mainly it does not seem to specify the SO(2) bases.\n* It would be useful to visualise K and K_\\alpha in Figure 1.\n* Citations, if not part of the sentence, should be in parentheses to improve readability (\\citep for natbib).\n* On page 8, end of first paragraph - wrong reference (see S.M.)\n* L, in section 2.3 is not defined.\n\n[1] Qiu, Qiang, et al. \"DCFNet: Deep Neural Network with Decomposed Convolutional Filters.\", ICML 2018\n[2] Weiler, Maurice, et al. “Learning Steerable Filters for Rotation Equivariant CNNs.” CVPR 2018\n", "I would like to thank the authors for a thorough response and for clarification of a few points which I have missed. 
Additionally, I believe the added experiments (mainly Table A.3 and the addition of the SFCNN results) further verify the generality of the proposed method.\n\nEven though the SFCNN provides slightly better results, the authors correctly note that it does so with increased model complexity. With this, however, I would like to encourage the authors, in line with R2's points #2 and #3, to provide more details regarding the wall-clock speed of the current implementation of the algorithm for the selected hyper-parameter settings. I believe this should not influence the decision of acceptance, as the complexity is proven theoretically, but it might shed light on the limitations of the current implementation (as mentioned in the conclusions).\n\nA few minor niggles:\n- The last sentence in paragraph 1.1 (added in later revisions) sounds relatively vague - how special?\n- When citing as part of a sentence, the citation should not be in parentheses (e.g. As shown in Darwin, 1859 (\\citet); vs Evolution (Darwin, 1859) is a ... (\\citep)). This is purely nit-picking but I believe it helps readability.", "I thank the authors for diligently addressing the points raised by the reviewers. I am satisfied with the authors' revisions, and they have made the paper more favorable for acceptance.", "We thank the reviewer for the reading, the supportive comments as well as the valuable suggestions.\n\nR2 comments on the combination of the two previously published methods: DCF and SFCNN (Qiu et al. 2018 & Weiler et al. 2017). In fact, one of our main focuses is to design a principled and elegant way to improve steerable-filter CNNs like SFCNN, by exploiting the joint filter decomposition over the product geometry of R^2 x S^1 simultaneously, adopting the product steerable bases FB bases x Fourier bases. This design is highly non-trivial, and it achieves multiple desirable properties at the same time, including rotation-equivariance, lower computational complexity, fewer parameters, robustness and provable representation stability. The theoretical result is also new and differs from that in DCF - as now we need stability not just against input deformation but \"modulo\" the group action - and the theory is supported by experiments which involve image rotations. As suggested, we have added an explanation of the technical challenges in the related work section.\n\nR2 also raises the question of \"How to determine truncation and set parameters in practice, and how it affects performance, computational complexity and number of parameters?\". At the moment, the choices of those hyper-parameters are mostly empirical; example values are given in Section 2.3 and in Table 3 & Table A.3. In the two tables, it is shown that the performance is not very sensitive to the choice of these parameters over a range, e.g., for rotMNIST, the accuracy remained within 98.40~98.60 for various choices of K and K_alpha down to 3 and 5. The parameter choice of K and K_alpha is interpreted as the \"frequency\" truncation (both in R^2 and in S^1), thus it trades off between filter regularization and filter expressiveness. The choice of N_theta does not affect the number of parameters but affects storage. To better clarify all this, an explanation is added in Appendix A. The reduction factor from the non-bases equivariant CNN to RotDCF is added in Section 2.3.\n\nWe have also corrected the suggested notation issues.
Thanks again for reading!", "Summary:\nThis paper combines the benefits of using joint steerable filters (using the SO(2) group) for designing rotation-equivariant CNNs with those of decomposing the filters (using Fourier-Bessel bases) for reducing the computational complexity. In addition, this leads to a compressed model and filter regularization. The authors give theoretical guarantees on the rotation equivariance and representation stability with respect to in and out of plane rotation. Empirical results show that the model attains better accuracy compared to CNNs and non-rotation-equivariant deep networks while using fewer parameters and also performs similarly to a rotation-equivariant model with much bigger capacity.\n\nPros:\n- Theoretical guarantees, elegant approach\n- Good empirical results compared to other models\n- Desirable properties: rotation-equivariance, lower computational complexity, fewer parameters, robustness and guaranteed stability to deformations\n\nCons:\n- Somewhat incremental technical novelty: combination of two previously published methods (Qiu et al. 2018 & Weiler et al. 2017)\n\nComments:\n1. I believe the related work section can be improved by explaining more clearly the connection between your work and the cited ones and emphasizing the advantages and limitations of RotDCF compared to other methods In particular, a reader should be able to precisely understand what is the novelty of this work is and what were the technical challenges in combining previously published ideas (such as DCF and SFCNN) \n2. How do you determine the truncation in practice? How robust is the method to this choice? What are the trade-offs between using a value that is too low or too high? It would be interesting to show how performance and complexity vary with this parameter\n3. It would also be helpful to have a discussion on choosing the parameters K_{alpha} and N_{theta} and how this affects the performance, computational complexity and number of parameters. This would provide more intuition on the limits of this method and the types of data it can be used for\n4. In section 2.3, it would be helpful to specify an estimated range for the parameter reduction from the non-bases rotation-equivariant CNN to RotDCF (similar to the ½ factor from RotDCF to regular CNN) \n5. Eq. (4) seems to be missing the definition of R_{m,q}\n6. The notation for the supplementary material was confusing at times. I would suggest using the more standard notation for the appendix which can also be a more specific reference (e.g. A.1, A.2, etc.)\n\n\n\n\n\n", "We would like to thank the reviewers for reading our paper and giving valuable feedback. Please see below for our response, and the manuscript is also updated.\n\nOne common question raised by both reviewers is \"to compare to the latest group-equivariant deep networks\". In the revised version, we update the experiment on rotMNIST in Table 3, Page 6, including a comparison to the result in SFCNN [2] as suggested by R1. The performance of RotDCF is comparable to the more computationally expensive group-equivariant networks, even with bases truncation which reduces the model complexity.\n\nTo answer the other questions of R1:\n\n- About Fourier Bessel (FB) bases: \"what bases are being used\" and \"compare to rotation harmonics\":\n\nThe FB bases used in the paper is the standard one (Abramowitz & Stegun 1964), and a formula is added in Sec. 2.2, new Eqn (4). 
These are the same FB bases used in [1]; however, [1] considers usual CNNs rather than rotation-equivariant ones and does not exploit the steerable property of FB. Also note that the “joint steerable bases” of psi (FB for space) and phi (Fourier for orientation) are used together here to decompose the “joint convolution” Eqn. (1) (2) - the scheme is proved to be necessary for the group-equivariant property.\n\nCompared to rotation harmonics [2], FB bases are orthonormal, and the truncation of FB bases has a frequency interpretation - it preserves the low-frequency components and discards the high-frequency ends. This is crucial for the regularization effect (Figure 3 and Figure A.1) of using truncated FB bases. Theoretically, the properties of FB bases are also key elements upon which the representation stability can be proved.\n\n- \"An ablation study and compare with [1]\":\n\nThe rotMNIST experiment in Table 3 provides a comparison to DCF [1], which uses FB bases to decompose filters but is not designed to be rotation-equivariant: DCF gives similar performance to a regular CNN, but inferior to RotDCF and [2]. This shows the importance of rotation-equivariant design in the network when handling input rotations.\n\nThe other minor points:\n* K and K_alpha are added in the plot of Figure 1.\n* Citations are in parentheses now.\n* Page 8: \"S.M.\" refers to the Supplementary Material, defined on Page 3.\n* A definition of L is added in Section 2.3. L is the width/height of the convolutional filter, as visualized in Figure 1.\n\nThanks again for the reading!", "Group-equivariant deep networks are used as a solution for rotation-equivariance in CNNs. However, they are computationally expensive, as the number of filters increases by a factor proportional to the number of groups. Inspired by ideas of filter decomposition used in CNN model compression, the authors of this work instead propose to use steerable filters across space and rotation as basis filters for achieving rotation-equivariance, which leads to computational efficiency. \n\nThe authors show improved accuracy and model compression with their proposed approach versus regular CNNs for several different tasks (MNIST, CIFAR, autoencoders and face recognition) for rotated and upright images.\n\nFurthermore, the authors theoretically prove and demonstrate empirically (via multiple experiments) the group-equivariance property and the representational stability under input variations of their proposed architecture. \n\nThe work is novel and it solves an open research problem.\n\nHowever, the one major criticism of the work is that in the experimental section, especially for the rotated MNIST and rotated face recognition tasks, the authors should compare the accuracy of their method with the latest state-of-the-art group-equivariant deep networks instead of just regular CNNs. This will help to truly understand whether or not their method is superior or comparable in accuracy to the more computationally expensive group-equivariant networks that are specifically designed to handle rotations. The regular CNNs, which are not designed to handle rotations, are obviously bound to be inferior to their approach.\n\n" ]
[ 7, -1, -1, -1, 7, -1, 7 ]
[ 3, -1, -1, -1, 2, -1, 4 ]
[ "iclr_2019_H1gTEj09FX", "BklM-VfchQ", "SkeOE1Hc37", "iclr_2019_H1gTEj09FX", "iclr_2019_H1gTEj09FX", "iclr_2019_H1gTEj09FX", "iclr_2019_H1gTEj09FX" ]
iclr_2019_H1gfOiAqYm
Execution-Guided Neural Program Synthesis
Neural program synthesis from input-output examples has attracted increasing interest from both the machine learning and the programming language communities. Most existing neural program synthesis approaches employ an encoder-decoder architecture, which uses an encoder to compute the embedding of the given input-output examples, as well as a decoder to generate the program from the embedding following a given syntax. Although such approaches achieve a reasonable performance on simple tasks such as FlashFill, on more complex tasks such as Karel, the state-of-the-art approach can only achieve an accuracy of around 77%. We observe that the main drawback of existing approaches is that the semantic information is greatly under-utilized. In this work, we propose two simple yet principled techniques to better leverage the semantic information, which are execution-guided synthesis and synthesizer ensemble. These techniques are general enough to be combined with any existing encoder-decoder-style neural program synthesizer. Applying our techniques to the Karel dataset, we can boost the accuracy from around 77% to more than 90%.
accepted-poster-papers
This paper presents a system which exploits semantic information of partial programs during program synthesis, together with ensembling of synthesisers. The idea is general, and admirably simple. The explanation is clear, and the results are impressive. The reviewers, some after significant discussion, agree that this paper makes an important contribution and is one of the stronger papers in the conference. While some possible improvements to the method and experiments were discussed with the reviewers, it seems these are more suitable for future research, and that the paper is clearly publishable in its current form.
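As context for the ensembling debate in the reviews below: the selection step the authors describe (verify each candidate against the input-output examples, then break ties by majority vote or shortest length) is simple enough to sketch in a few lines. This is an illustrative reconstruction, not the authors' code; run_program stands in for a hypothetical Karel-style interpreter.

from collections import Counter

def select_program(candidates, io_pairs, run_program, principle="majority"):
    # Step 1: keep only programs consistent with every specification example.
    verified = [p for p in candidates
                if all(run_program(p, inp) == out for inp, out in io_pairs)]
    if not verified:
        return None  # no synthesizer in the ensemble produced a consistent program
    # Step 2: pick a survivor among the verified programs.
    if principle == "majority":
        # The program proposed most often across the ensemble wins.
        return list(Counter(map(tuple, verified)).most_common(1)[0][0])
    # "shortest" principle: an Occam's-razor tie-breaker.
    return min(verified, key=len)

The first step is what distinguishes this from generic ensembling: unlike translation or image classification, a synthesized program can be checked against its specification before voting.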
train
[ "HkeaqiiCCX", "HJxHhAJt0Q", "SJxSoQpu07", "Bkea3ghuRQ", "SkxQ9icORQ", "Byg6or5chQ", "SyxNtmfzRX", "HylIAzMf07", "SyxTLzMzAQ", "B1e46-fMRX", "HkegUbzfCX", "B1lJT1zzRQ", "SyldTAA6nm", "Byg4unw53X", "SJlrMs41p7" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "public" ]
[ "This is an important part of ICLR's review system, and of the scientific process as a whole, so your engagement is noted and appreciated.", "Thank you for your explanation! Unfortunately, we do not have enough time to implement these ideas and report the results before the end of the rebuttal period. But we believe these techniques are orthogonal to be applied to further improve our main techniques. We will try gradient boosting and include the results in our camera-ready version. Our ensemble approach is not differentiable by itself, but we will consider extending the current ensemble approach for training as future work.", "Thanks for responding how beam search is an efficient and effective way of searching for/generating interesting points in the space of programs.\n\nAs for training an ensemble, there are at least two ways of doing it in your setup.\n1. The \"gradient boosting\" way (https://en.wikipedia.org/wiki/Gradient_boosting) where you iteratively train a new model to fill-in the gaps of the current ensemble.\n2. If you had a differentiable ensemble voting mechanism (e.g. average all the networks predictions), then the whole ensemble model would behave like your current base model (as you would be able to compute its log-likelihood).\n", "Thanks a lot for your update! We are very glad to see that our revision addressed your concern, and thank you again for your constructive suggestions on improving our paper!", "Seeing your reply and the revisions made in the paper, I am more than happy to increase my score.\n\nEdited: Just to clarify it, the largest weight of my score was on the issue of the ensemble contribution. Seeing how you clearly outlined the added bonus of being able to verify the correctness of the synthesised program to improve ensembling, you made the issue go away, hence the score increase.", "This paper proposes guiding program synthesis with information from partial/incomplete program execution. The idea is that by executing partial programs, synthesizers can obtain the information of the state the (partial) program ended in and can, therefore, condition the next step on that (intermediate) state. The paper also mentions ensembling synthesizers to achieve a higher score, and by doing that it outperforms the current state-of-the-art on the Karel dataset program synthesis task.\n\nIn general, I like the idea of guiding synthesis with intermediate executions, and the evaluation in the paper shows this does make sense, and it outperforms the SOTA. The idea is original and the evaluation shows it is significant (enough). However, I have two major concerns with the paper, its presented contribution, and the clarity.\n\nFirst, I cannot accept ensembling as a contribution to this paper. There is nothing novel about the ensemble proposed, and ensembling, as a standard method that pushes models that extra few percentage points, is present in a lot of other research. I have nothing against achieving SOTA results with it, while at the same time showing that the best performing model outperforms previous SOTA, which this paper orderly does. However, I cannot accept non-novel ensembling as a contribution of the paper.\n\nSecond, the clarity of the paper should be substantially improved:\n- my main issue is that it is not clear how the Exec algorithm (see next point too) is trained. From what I understand Exec is trained on supervised data via MLE. What is the supervised data here? 
Given the generality claims and the formulation in Algorithm 1/2, and possible ways one could use the execution information, as well as the fact that the model should be end-to-end trainable via MLE, it seems to me that the model is trained on prefixes (defined by Algorithm 1/2) of programs. Whether this is correct or not, please provide full details on how one can train Exec without using RL.\n- By looking at Table 3, it seems that the generalization boost coming from Exec (I’m ignoring ensembling) is high enough, and that’s great. However, it’s obvious that the exact match gain by Exec is minute, implying that the proposed algorithm, albeit great on the generalization metric, does not improve the exact match at all. Do you have any idea why that is? Is it because Exec is trained via MLE and the Exec algorithm doesn’t add anything new to the training procedure?\n- how do algorithms 1 and 2 exactly relate? I guess there is a meaning to the ellipses in Lines 1 and 13; however, that is not mentioned anywhere. Is the mixture of algorithms 1 and 2 (and a non-presented algorithm for while loops) the Exec algorithm? How exactly are these algorithms joined, i.e., what is the final algorithm?\n- while on one side, I find some formalizations (problem definitions, definition 1, semantic rules in table 2) nicely done, I do not see their necessity nor big gains from them. In my opinion, the understanding of the rest of the paper does not depend on them, and they are well-described in the text.\n- the paper says that the algorithm “helps boost the performance of different existing training algorithms”; however, it does so only on the Bunel et al. model (and the MLE baseline in it), and albeit there’s mention of generality, it has not been shown on anything other than those two models and the Karel dataset.\n- do lines 6-7 in Algorithm 2 recurse? Does the model support arbitrarily nested loops/if statements?\n- The claim that the shortest principle is most effective is supported by 2 data points, without any information on the variance of the prediction/dependence on the seed. Did you observe this for #models > 10 too? Up to what number?\n- In table 3, is Exec on MLE? Could you please, for completeness, present the results of Exec + RL + ensemble in the table too?\n- summarization, point 3 - what are the different modules mentioned here? Exec/RL/ensemble?\n\nMinor issues, remarks, typos:\n- table 1 position is very unfortunate\n- figure 1 is not self-explanatory - it takes quite a lot of space to explain the network architecture, yet it fails to deliver meaning to parts of it (e.g. what is h_t^x, why is it max-pooled, what is g_t, etc.)\n- abstract & introduction - “Reducing error rate around 60%”: absolute percentage points seem like a better evaluation measure (which the paper does use). Why is the error rate reduction necessary here?\n- figure 2 - why is the marker in one of the corners, and not in the cell itself?\n- Algorithm 1, step 4: is this here just as initialization, so S is non-empty to start with?\n- Table 2 rule names are unclear (e.g. S-Seq-Bot ?)\n- Table 3 mentions what Exec indicates twice", "We thank all reviewers for the constructive feedback! 
We have revised the paper with the following major changes to incorporate the comments:\n\n- We revise the introduction to better explain why we emphasize our new ensemble technique.\n\n- We add a description in the caption of Figure 1 to better explain existing input-output neural program synthesis architectures.\n\n- We rewrite Algorithms 1 and 2 to make them more formal and eliminate confusion.\n\n- In Section 5.1, we add explanations about training set construction and training algorithms to make our evaluation setup more precise.\n\n- We reorganize Table 3 to more clearly separate the different components of each approach. Meanwhile, we also add more results and explanations in Section 5.2 for completeness.\n\nThe paper is inevitably longer after adding more content. It is now 9 pages, which exceeds the recommended length of 8 pages. We think this is helpful for readers to better understand our paper, but if the reviewers have concerns about its length, we are happy to defer certain material to the appendix to fit the main body into 8 pages.\n", "Thank you for pointing out the related paper! This is very interesting work, and we have discussed it in our revision (see Section 6).", "Thank you for the encouraging comments, and new ideas to evaluate!\n\nFirst, for MCTS, we consider it as yet another training approach in addition to supervised learning (SL) and reinforcement learning (RL), which is orthogonal to our main contribution of the Exec algorithm. In fact, the Exec algorithm is designed to be combined with any training algorithm that can effectively train the underlying synthesizer. By evaluating SL and RL, we think we have demonstrated this point. In addition, we have reorganized Table 3 to list existing training algorithms as a separate column to make it clearer that our technique can be applied in different training setups.\n\nOn the other hand, MCTS is especially effective when the ground truth label (or score) is hard to compute for a state (like in the game of Go). In our problem, however, we can easily verify whether a generated program satisfies the input-output specification or not. Therefore, a beam search is sufficient to achieve a high accuracy at test time. Also, MCTS typically requires more computation than the beam search approach for inference. Thus, we prefer a beam search algorithm for program synthesis.\n\nSecond, about your comment on ensembles: yes, we only use the ensemble at test time. We are unclear about a good way of applying ensembling during training in our setting, and we would appreciate it if the reviewer could provide more details.\n\nLast but not least, we can easily adapt our model to be stochastic. In particular, the synthesizer could randomly sample program tokens from the softmax output probability distribution, rather than always picking the top-scored tokens as in the beam search. However, in doing so, we found that most generated programs are incorrect. Using our single Exec + RL model with the best performance, we repeat the following experiment 64 times, where 64 is the beam size in our evaluation: for each sample in the test set, we run the stochastic synthesizer described above, and evaluate the overall accuracy. The mean accuracies among all runs are 19.86% (exact match) and 45.15% (generalization), with standard deviations of 0.48% and 0.96% respectively.\n\nWe further evaluate the top-64 accuracy of this stochastic approach in the following way. 
For each test sample, we keep sampling until either (1) a program that matches the input-output specification is generated; or (2) 64 invalid programs have been synthesized. In doing so, the accuracies are 39.32% (exact match) and 85.84% (generalization) respectively, which are not better than using the beam search. Therefore, beam search is more effective for our problem.\n\nWe found that these additional experimental results are not crucial to support our main contributions, but if the reviewer thinks they deserve to appear in the paper, we are happy to incorporate them as well.", "Second, we have revised our paper to address your clarity concerns. Here is a detailed list of responses:\n\n- For how to incorporate our Exec algorithm into existing training techniques, we have included the training set construction approach in Section 5.1 with details in Appendix B. After the dataset is constructed, the neural synthesizer can be trained with both supervised learning and reinforcement learning.\n\n- About the minor improvement of the exact match, we have added a detailed explanation of why the Exec algorithm is not designed to optimize the exact-match accuracy, as well as why exact match accuracy is not as important as generalization accuracy for real-world applications. This can be found on page 8, in the paragraph starting with “Note that the improvement of Exec on the exact match accuracy is relatively minor”.\n\n- To describe our Exec algorithm more precisely, we rewrite Algorithms 1 and 2 in a more formal way, rather than providing illustrative pseudo-code as in the previous version. Now, Algorithm 1 (Exec) includes the condition under which ExecIf and ExecWhile are called, and Algorithm 2 (ExecIf) illustrates how it calls Exec to generate the branches. The ExecWhile algorithm is deferred to the appendix.\n\n- About the necessity of our formalizations, we provide the formalization to make the discussion precise and remove as much confusion as possible. For example, in the previous version, the illustrative-style presentation of Algorithms 1 and 2 made their precise design confusing. This is also the reason why we turned them into more formal ones. We believe the formalization helps clarify potential confusion, so we leave them as is. We are also happy to move part of them to the appendix if that is preferred.\n\n- By “helps boost the performance of different existing training algorithms”, we mainly indicate the supervised learning and reinforcement learning algorithms that we have evaluated in our paper, rather than different models. We have revised our claim in Section 3.2 to make the statement more precise. Also, we reorganize Table 3 to make clearer what we mean by training algorithms.\n\n- For your question “Do lines 6-7 in Algorithm 2 recurse?”, yes, they do. See lines 7 and 10 in Algorithm 1 as well as lines 5-6 in Algorithm 2 for the recursive calls. In our evaluated dataset, the programs have a recursion level of up to 5. \n\n- For your question about ensembling more than 10 models, we extend our evaluation to include up to 15 models in each ensemble, which improves the best performance a little bit. As demonstrated in Figure 3, the majority vote principle always achieves a slightly better generalization accuracy than the shortest principle when at least 9 Exec + RL models are included in the ensemble. 
Meanwhile, the single-model accuracy in the ensemble does not vary much; for example, for Exec + RL models, the mean and standard deviation of a single model's accuracy are 85.70% and 0.36% for generalization, and 39.32% and 0.25% for exact match. For random seed selection, we are using the standard pseudo-random number generator (PRNG) in PyTorch, and we didn’t see a clear correlation between the model performance and the random seed selection.\n\n- For your questions about Table 3, we have reorganized Table 3 and included more results for completeness.\n\n- For your questions about “different modules”, we indicate our two proposed approaches: Exec and Ensemble. We have revised the bullet to be “The different modules of our proposed approaches, i.e., execution-guided synthesis and ensemble techniques, ...” to make this point more precise.\n\n- We have revised our paper to address your minor comments.\n", "We highly appreciate your comments and suggestions on improving the presentation of this work! We have incorporated them in our revision, and we respond to your concerns and questions below.\n\nOne of your major concerns is about our claim on the novelty of our ensemble approaches. It is true that ensembling is a well-accepted approach in machine learning. However, we find that one neglected piece is the use of the available input to justify a model’s output, which is not possible for many machine learning tasks such as machine translation and image recognition. For input-output program synthesis, once a program is generated, we can easily verify whether the prediction could be correct by executing it with the given input-output pairs. In this way, we can easily remove invalid programs from the ensemble, which improves the performance. Despite its simplicity and effectiveness, to the best of our knowledge, this idea has not been applied in any previous work. Thus, we think this idea deserves to be popularized to a broader audience in the neural program synthesis community. In particular, we propose: (1) verifying the predictions and filtering out those that are inconsistent with the input-output specification before ensembling; and (2) the Shortest ensemble principle, which we observe achieves a better result than the Majority Vote principle in many cases, especially when the number of models in the ensemble is small. Both ideas leverage unique properties of the program synthesis task, and are not explored in existing work. We also revise our introduction section to further explain why we emphasize our new ensemble approaches.", "Thanks a lot for your encouraging comments! We respond to your questions below:\n\n1. For training set construction, we built a new training set with partial execution information obtained from the original training set; to make it clearer, we have included the training set construction approach in Section 5.1, with details in Appendix B.\n\n2. For your question about handling else and fi tokens, we do not need any special handling of the else and fi tokens in Algorithm 2, except that the dataset is constructed differently. The details are explained in Appendix B. In particular, the true branch (ending with else) and the false branch (ending with fi) use different IO pairs. The synthesizer trained with such a dataset can learn to generate the correct tokens respectively.\n\n3. For your question about the change to the beam search, no, we are using the same beam search proposed in (Bunel et al. 2018). 
We agree that a more sophisticated beam search has the potential to further improve the performance, but our main point is that the Exec algorithm can improve over any existing training technique; thus, we did not modify the beam search, in order to highlight the improvement obtained using the Exec algorithm. We will leave the exploration of different beam search algorithms as an interesting future direction.", "The authors introduce two techniques:\n\nOne is (old school) forward search planning https://en.wikipedia.org/wiki/State_space_planning#Forward_Search\nfor input/output-provided sequential neural program synthesis on imperative Domain Specific Languages with an available partial program interpreter (aka transition function) (from which intermediate internal states can be extracted, e.g. assembly, Python). \nPrevious work did:\n which_instruction, next_neural_state = neural_network(encoding(input_output_pairs), neural_state)\nThis technique:\n which_instruction = neural_network(encoding(current_execution_state_output_pairs))\n next_execution_state = vectorized_transition_function(current_execution_state, which_instruction)\n\nThe second one is ensembles of program synthesizers (only ensembled at test time). \n\n\nGuiding program synthesis by intermediate execution states is novel, gets good results and can be applied to popular human programming languages like Python.\n\nPros\n+ Using intermediate execution states\nCons\n- State space planning could be done in a learnt tree search fashion, e.g. Monte Carlo Tree Search\n- Ensembling synthesizers at test time only\n- why not have stochastic program synthesizers, see them as a generative model, and evaluate top-k generalization?\n\nPage 7\nTable 3 line 3: \"exeuction\" -> \"execution\"", "This paper presents two new ideas on leveraging program semantics to improve current neural program synthesis approaches. The first idea uses execution-based semantic information of a partial program to guide the future decoding of the remaining program. The second idea proposes using an ensembling approach to train multiple synthesizers and then select a program based on a majority vote or shortest-length criterion. The ideas are evaluated in the context of the Karel synthesis domain, and the evaluation shows a significant improvement of over 13% (from 77% to 90%).\n\nThe idea of using program execution information to guide the program decoding process is quite natural and useful. There has been some recent work on using dynamic program execution to improve neural program repair approaches, but using such information for synthesis is highly non-trivial because the program is unknown and the DSL has complex control-flow constructs such as if conditionals and while loops. This paper presents an elegant approach to handle conditionals and loops by building up custom decoding algorithms for first partially synthesizing the conditionals and then synthesizing appropriate statement bodies.\n\nThe idea of using ensembles looks relatively straightforward, but it hasn’t been used much in synthesis approaches. The evaluation shows some interesting characteristics: using different selection criteria, such as shortest program or majority choice, can have some impact on the final synthesized program.\n\nThe evaluation results are quite impressive on the challenging Karel domain. It’s great to see that execution and ensembling ideas lead to practical gains.\n\nThere were a few points that weren’t clear in the paper:\n\n1. 
Are the synthesis models still trained on the original input-output examples like Bunel et al. 2018? Or are the models now trained on a new dataset consisting of (partial-inputs-->final-output) pairs obtained from the partial execution algorithm?\n\n2. In Algorithm 2, the algorithm generates bodies for the if and else branches until generating the else and fi tokens respectively. It seems the two bodies are being generated independently of each other using the standard synthesizer \\Tau. Is there some additional context information provided to the two synthesis calls in lines 8 and 9 so that they know to produce else and fi tokens?\n\n3. Is there any change to the beam search? One can imagine a more sophisticated beam search with semantic information could help as well (e.g. all partial programs that lead to the same intermediate state can be grouped into one).\n", "Dear Authors,\n\nCongrats on the really positive reviews. As AnonReviewer3 pointed out (\"the recent work on using dynamic program execution in improving neural program repair\"), please consider citing the paper [1] to acknowledge the prior work. Anyway, very nice work! Congrats again!\n\n[1] Dynamic Neural Program Embedding for Program Repair" ]
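To make the execution-guided idea debated above concrete, here is a hedged sketch of the straight-line case (no control flow, which is where the paper's Algorithms 1-2 add extra machinery for if/while). The point, as the short review above also sketches, is that each prediction is conditioned on the examples' current execution states rather than only on the original inputs. The names model and execute_token are hypothetical stand-ins for a trained synthesizer and a Karel-style interpreter step, not the authors' implementation.

def execution_guided_decode(model, execute_token, io_pairs, max_len=50):
    # io_pairs: list of (input_state, target_output_state) specification examples.
    states = [inp for inp, _ in io_pairs]   # where each example currently is
    targets = [out for _, out in io_pairs]
    program = []
    for _ in range(max_len):
        # Condition on (current state, target) pairs, not the original inputs.
        token = model(list(zip(states, targets)))
        if token == "<end>":
            break
        program.append(token)
        # Advance every example's state by actually running the new token.
        states = [execute_token(s, token) for s in states]
    return program

A beam-search variant keeps several partial programs in parallel and, as reviewer question 3 suggests, could even merge beams whose execution states coincide.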
[ -1, -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, 7, 7, -1 ]
[ -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, 2, 5, -1 ]
[ "SkxQ9icORQ", "SJxSoQpu07", "SyxTLzMzAQ", "SkxQ9icORQ", "B1e46-fMRX", "iclr_2019_H1gfOiAqYm", "iclr_2019_H1gfOiAqYm", "SJlrMs41p7", "SyldTAA6nm", "HkegUbzfCX", "Byg6or5chQ", "Byg4unw53X", "iclr_2019_H1gfOiAqYm", "iclr_2019_H1gfOiAqYm", "iclr_2019_H1gfOiAqYm" ]
iclr_2019_H1goBoR9F7
Dynamic Sparse Graph for Efficient Deep Learning
We propose to execute deep neural networks (DNNs) with a dynamic and sparse graph (DSG) structure for compressed memory and accelerated execution during both training and inference. The great success of DNNs motivates the pursuit of lightweight models for deployment onto embedded devices. However, most of the previous studies optimize for inference while neglecting training or even complicating it. Training is far more intractable, since (i) the neurons dominate the memory cost, rather than the weights as in inference; (ii) the dynamic activations make previous sparse acceleration via one-off optimization on fixed weights invalid; (iii) batch normalization (BN) is critical for maintaining accuracy while its activation reorganization damages the sparsity. To address these issues, DSG activates only a small number of neurons with high selectivity at each iteration via a dimension-reduction search and obtains BN compatibility via a double-mask selection. Experiments show significant memory saving (1.7-4.5x) and operation reduction (2.3-4.4x) with little accuracy loss on various benchmarks.
accepted-poster-papers
This paper proposes a novel approach for network pruning in both training and inference, and received a consensus of acceptance. Compared with previous work that focuses model compression on inference, this paper saves memory and accelerates both training and inference; it is the activations, rather than the weights, that dominate the training memory. Reviewer 1 posed a valid concern about efficient implementation on GPUs, and the authors agreed that practical speedup on GPUs is difficult. It would be great if the authors could give practical insights on how to achieve real speedups in the final draft.
train
[ "ByllM5Etn7", "B1ggCsTnpX", "BJWZDiT2TX", "B1gbAKa267", "B1l_FKTha7", "rJegBoSsnX", "Syeqkc-Dhm" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "REVISED: I am fine accepting. The authors did make it a bit easier to read (although it is still very dense). I am also satisfied with related work and comparisons\nSummary: \nThis paper proposes to activate only a small number of neurons during both training and inference time, in order to speed up training and decrease the memory footprint. This works by constructing dynamic sparse graph for each input, which in turn decides which neurons would be used. This happens at each iteration and it does not permanently remove the neurons or weights. To construct this dynamic sparse graph, authors use dimensionality reduction search which estimates the importance of neurons\n\nClarity:\nOverall I found it very hard to follow. Lots of accronyms, the important parts are skipped (the algorithm is in appendix) and it is very dense and a lot of things are covered very shallowly. It would have been better for clarity to describe the algorithm in more details, instead of just one paragraph, and save space by removing other parts. I would not be able to implement the proposed solution by just reading the paper\n\nDetailed comments.\nThis reminds me a lot of a some sort of supervised dropout. \n\nMy main concern, apart from clarity, is that there is no experimental comparison with any other method. How does it compare with other methods of dnn compression or acceleration?\n\nAlso i found the literature review is somewhat lacking. What about methods that induce sparsity via the regularization, or those that use saliency criterion, hessian based approaches like Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. NIPS, 2015. , pruning filters Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, and Hans Peter Graf. Pruning filters for, efficient convnets. ICLR, 2017. etc. \nBasically i don't understand how it compares to alternative methods at all.\n\nQuestions:\nHow does it run during inference? does inference stay deterministic (there is a random projection step there)\n", "Q3: This is an interesting question that can we achieve a joint sparsity pattern across a mini-batch. We think it is possible for simple tasks with only FC-layer based architectures, e.g., through sharing pruning index across different samples [1]. However, the index sharing will lead to more accuracy loss, which forms an issue of accuracy-efficiency trade-off. Furthermore, the CONV layer itself is a GEMM even if without batching samples since each sliding window corresponding to a VMM has independent sparsity pattern. \n\n Sharing sparsity index across sliding windows and samples is equivalent to the weight filter pruning because we can consistently skip the access of some weight columns in Fig. 3(b). Whereas, current filter pruning can only work for inference after complicated training with filter selection and expensive retraining with fixed sparsity pattern. To our best knowledge, we did not see this kind of pruning methods being able to optimize the training phase. \n\n We did additional experiments for training MLP on FASHION dataset and VGG8 on CIFAR10 dataset by sharing the same sparsity pattern across samples (also including sliding windows in CNNs) within each mini-batch according to their selection rate (the weight filter being selected less times across sliding windows and samples will be pruned at current training iteration). 
We found that the MLP accuracy on FASHION with joint sparsity decreases by only 1.2% compared to the vanilla DSG under 50% sparsity, whereas the CNN with a joint sparsity pattern compromises 16.23% accuracy. This supports our prediction that joint sparsity can work on simple tasks, and, moreover, that CNNs have more difficulty utilizing this strategy.\n\nReferences:\n[1] Sun, Xu, Xuancheng Ren, Shuming Ma, and Houfeng Wang. \"meProp: Sparsified back propagation for accelerated deep learning with reduced overfitting.\" arXiv preprint arXiv:1706.06197 (2017).\n\n Q6: The up to 2.3x speedup data can be found in Fig. 7(a) (the last bar). Yes, it is based on VMM implementations, and the number of GMACs is for one iteration. Because DSG does not slow down the convergence speed (see the responses to Q5), the operation reduction in each iteration matters for the entire training. \n\n(2)\tQ4: More Networks in Fig. 8b: \n In the revised manuscript, we have added more architectures for Fig. 8(b), including ResNet8 on CIFAR10 and AlexNet on ImageNet. Due to the limited space, we visualized the new results in Fig. 12 (Appendix D). The previous conclusion for Fig. 8(b) still holds for Fig. 12.\n\n(3)\tQ5: Regarding Convergence:\n Usually, sparse training will slow down the overall convergence, since the different sparsity patterns at different iterations create variance. However, we experimentally found that the convergence speed is comparable with the original dense training under our DSG framework. The training curves of both dense and sparse models are shown in Fig. 10(a) and (b) (newly added in Appendix C) for VGG8 on CIFAR10 and ResNet18 on ImageNet, respectively. \n\n To explore the underlying reason, we visualize the inner product distribution before and after dimension reduction through random projection. The newly added Fig. 10(c) presents the pairwise difference between the original high-dimensional inner products and the low-dimensional ones. The data are from the CONV5 layer of VGG8. It can be seen that most of the inner product differences are around zero, which implies an accurate approximation capability of the dimension-reduction search. This helps reduce the training variance and avoid training deceleration.\n\n(4)\tQ7: Compared to Dropout: \n In dropout, the retained neurons are selected randomly. In contrast, DSG first reduces the data dimension via random projection and then selects important neurons according to the approximated activations in a low-dimensional space, with much less computational cost than in the original high-dimensional space. The selection of important neurons at each iteration is approximately equivalent to selecting the neurons with higher activation values, i.e., not fully random.\n\n In the case of low sparsity, both DSG and fully random dropout can achieve good accuracy, and even better accuracy because of the regularization effect. However, in the case of high sparsity, the fully random selection will significantly compromise the accuracy due to the ablation of too much useful information. In this case, the dimension-reduction search can maintain the accuracy much better, since it approximately selects neurons according to their importance. Fig. 5(c) in our original submission shows this phenomenon, wherein the ‘Random’ curve denotes random dropout. \n\n(5)\tFewer Acronyms:\n In the revised manuscript, we have reduced the number of acronyms, including DRS (dimension-reduction search) and DMS (double-mask selection). 
Note that we still keep DRS in some figures due to the space limitation, but with the full name in the figure captions.", "Thanks so much for your positive feedback on our methods and constructive comments on the implementation. We provide the answers below.\n\n(1)\tQ1-3, 6: GEMM Implementation on GPU: \n First, all the authors agree that your comments on the GEMM implementation are quite pertinent, which evidences your correct understanding of our DSG method and your expertise in GPU implementation. In fact, in the text description of Fig. 8(a) in the previous submission, we mentioned that “DSG generates dynamic vector-wise sparsity, which is not well supported by GEMM,” which is consistent with your comment that “it does not lead to the same sparsity pattern in a full minibatch and hence cannot be implemented using GEMM.” \n\n Second, we should note that achieving a practical speedup on GPUs is difficult. GPU implementation is based on coarse-grain matrix-matrix multiplications. The problem for our current method is that the vector-wise regular sparsity for VMM degrades to irregular sparsity if we aggregate the VMMs of different samples to call GEMM, due to the inconsistent sparsity pattern of each VMM. Accelerating irregular sparse GEMM on GPUs is a well-known hard problem in the NN acceleration community. \n\n This is the reason why much prior work cannot achieve practical GPU speedups even when claiming high compression ratios. This is also the reason why many researchers study structured sparsity. However, more structured sparsity will compromise more accuracy. Therefore, there is a trade-off between regularity and accuracy. Furthermore, most of these methods for structured sparsity are for inference compression, which usually makes the training more complicated due to various regularization constraints and iterative fine-tuning/retraining. In contrast, we aim at compressing and accelerating both training and inference.\n\n As the first step, our major goal in this paper is to validate the functionality of the proposed DSG method based on dimension reduction. Although our sparsity pattern is not well structured for GEMM, it is still much more regular (vector-wise for each VMM) than fully irregular sparsity. At this stage, we implement DSG based on multi-thread VMMs. It can do much better than dense VMM, while it can only outperform GEMM in the case of quite high sparsity (see Fig. 8(a)). Your question of how to further optimize GEMM under the DSG framework is truly interesting and deserves our future investigation. \n\n Third, although current DSG is not very compatible with GEMM on GPUs, we still have great potential for implementation on specialized hardware. The vector-wise sparsity pattern of DSG is quite regular for VMM. Moreover, many specialized NN accelerators are based on VMM operations, which are suitable for our DSG method. With a customized hardware design, it is easy to achieve a high speedup (since even if the sparsity is fully irregular, the speedup is still significant; see S. Han et al. 2016). Compared with prior accelerators, we have much higher potential for accelerating each VMM (i.e., Y=WX+b), since in DSG both the X from the previous layer and the Y in the current layer are sparse, and the skip of W access is structured (skip whole rows or columns; see Fig. 3(b) and Fig. 
4).\n\n Specifically, the answers to Q1-3 are listed as follows: \n\n Q1: The primary target of this work is to validate the functionality of DSG, and we only implement it using multi-thread VMMs on CPU at this stage. Detailed reasons can be found in the above explanations. We have clarified this point in the Experiment Setup section of the previous submission. \n\n Q2: The dense baseline in Fig. 8(b) uses multi-thread VMMs on CPU.\n\n\nTo be continued in Part II...", "Thanks for your valuable suggestions and comments. Details can be found below.\n\n(1)\tPresentation Clarity: \nAccording to your kind suggestions, we revised the manuscript as follows:\na)\tReduced the number of acronyms, including DRS (dimension-reduction search) and DMS (double-mask selection). Note that we still keep DRS in some figures due to the space limitation, but with the full name in the figure captions.\nb)\tPolished the paper to make it clearer and more in-depth; in particular, we moved Algorithm 1 to the main text.\nc)\tAdjusted the spacing to make it less dense.\n\n(2)\tComparison with Other Methods:\nAs mentioned by Reviewer #1, our work targets both training and inference, while most of the previous work focused on inference. In prior methods, training usually becomes more complicated with various regularization constraints or iterative fine-tuning/retraining. Therefore, it is not very fair to compare with them, which is the reason we did not include the comparison in the original submission.\n\nThis time, after considering your advice, we added additional comparisons with several existing compression methods. We only compared inference pruning, but we should note that our major contribution is on the training side. The results are shown in Table 2 in the newly added section of Appendix D. Because the focus is on comparing with inference compression approaches, we perform DSG based on pre-trained models rather than training from scratch as in most experiments in the first submission. Table 2 demonstrates that DSG can achieve a good balance between the operation amount and model accuracy.\n\n(3)\tLiterature Review:\nThanks for pointing out the importance of the literature review. However, we are sure that the literature you mentioned in the comments has been cited in our first submission (Introduction and Related Work sections). For example, S. Han et al. “Learning both weights and connections for efficient neural network” (NIPS 2015), H. Li et al. “Pruning filters for efficient convnets” (ICLR 2017), regularization-based pruning (e.g. W. Wen et al. 2016, Y. He et al. 2017, Z. Liu et al. 2017), saliency-criterion-based pruning (e.g. S. Han et al. 2015a/b, P. Molchanov et al. 2016, H. Li et al. 2016, Y. Lin et al. 2017a/b, X. Sun et al. 2017, Y. He et al. 2018a), and other optimization-based methods (e.g. J. H. Luo et al. 2017, L. Liang et al. 2018) were already cited in the previous manuscript. \n\nAfter considering your feedback, we have cited more emerging references (e.g., T. W. Chin et al. 2018, J. Ye et al. 2018, J. H. Luo et al. 2018, Hu et al. 2018, Y. He et al. 2018b) in the revised manuscript.\n\n(4)\tDoes Inference Stay Deterministic?\nThe projection matrices are fixed after a random initialization at the beginning of training. Therefore, inference stays deterministic. 
We have made this point clear in the revised manuscript (Experiment Setup section).", "We appreciate your positive feedback and your recognition of our contribution on compressing and accelerating both training and inference via dimension-reduction search. Your questions are truly insightful and merit further investigation. Our answers are listed as follows:\n\n(1)\tQ1: Layer-wise Sparsity Configuration:\nYes, we set a uniform sparsity across all layers for simplicity in our experiments, because training DNNs is costly. After reading your comments, we find it fascinating to explore further. Tuning the sparsity configuration strategy for different layers could produce better accuracy at the same compression level. However, entirely exploring the configuration space is very time consuming and would be enough material for a separate study. Considering the limited time budget for the rebuttal, we did a supplementary experiment with a heuristic strategy, i.e., higher sparsity for compute-intensive layers and lower sparsity for the remaining layers. \n\nTaking the CONV2-CONV6 layers of VGG8 on CIFAR10 as a case study (since other layers occupy fewer operations), the sparsity configuration of 0.5-0.9-0.9-0.9-0.9 achieves 92.09% accuracy, while the configuration of 0.9-0.5-0.9-0.5-0.9 using the mentioned heuristic strategy could reach 92.89%. Both configurations have nearly the same number of operations but different accuracy results, which shows that the sparsity configuration across layers indeed matters.\nIn fact, recent works [1, 2] touched on this problem by using a fast sensitivity test or reinforcement learning, respectively, to automate the layer-wise sparsity configuration. Their results support your guess that a smarter pruning strategy probably gives better accuracy. From Fig. 2 in [1] and Fig. 2-3 in [2], we can see that the optimized sparsity configuration presents a non-uniform distribution across layers. The interesting point is that the distribution is not monotonic but fluctuates as the layer changes. Although it seems quite hard to reveal the underlying reason, we believe your suggestion is a right direction for future work. We have added reference [1] in the revised Related Work section.\n\n(2)\tQ2-3: Selection Pattern Evolvement:\nOur previous submission focused more on validating the method of using the dimension-reduction search to select critical neurons for dynamically achieving a sparse computational graph. Your comments indeed pose another interesting question: how does the activation selection evolve, and can it converge well? We are also curious about the answer. \n\nTo explore this question, we did an additional experiment, as shown in Appendix C Fig. 11 in the revised manuscript. We select a mini-batch of training samples as a case study for data recording. Each curve presents the results of one layer (CONV2-CONV6). For each sample at each layer, we recorded the change of the binary selection mask between two adjacent training epochs. Here the change is obtained by calculating the L1-norm value of the difference tensor of two mask tensors at two adjacent epochs, i.e., change=batch_avg_L1norm(mask[i+1] – mask[i]). Here “batch_avg_L1norm” indicates the average L1-norm value across all samples in one mini-batch. As shown in Fig. 11(a), the selection mask for each sample converges as training goes on.\n\nActually, in our implementation, we inherit the random projection matrix from training and do the same dimension-reduction search in inference. 
We didn’t try to suspend the selection masks directly. Our concern is that the selection mask varies across samples even though we observed convergence for each sample. As we can see from Fig. 11(b), the mask tensors of adjacent samples in one mini-batch present significant differences (large L1-norm values) after training. Therefore, it would consume a lot of memory space to save these trained masks for all samples, which is less efficient than conducting an on-the-fly search during inference. Index sharing across different samples [3] might be helpful, at the cost of more accuracy degradation. We agree that your feedback on the selection convergence points to an exciting problem, and we plan to study it further in the future.\n \nReferences:\n[1] He, Yihui, Ji Lin, Zhijian Liu, Hanrui Wang, Li-Jia Li, and Song Han. \"AMC: AutoML for model compression and acceleration on mobile devices.\" In Proceedings of the European Conference on Computer Vision (ECCV), pp. 784-800. 2018.\n[2] Xu, Xiaofan, Mi Sun Park, and Cormac Brick. \"Hybrid Pruning: Thinner Sparse Networks for Fast Inference on Edge Devices.\" arXiv preprint arXiv:1811.00482 (2018).\n[3] Sun, Xu, Xuancheng Ren, Shuming Ma, and Houfeng Wang. \"meProp: Sparsified back propagation for accelerated deep learning with reduced overfitting.\" arXiv preprint arXiv:1706.06197 (2017).", "[Overview]\n\nIn this paper, the authors proposed to use a dynamic sparse computation graph to reduce the memory and time cost of computation in deep neural networks (DNNs). This method is applicable in both DNN training and inference. Unlike most previous work, which focuses on reducing computation at inference time, this new method proposes a dynamic computation graph obtained by pruning the activations on the fly during training or inference, which is an interesting and novel exploration. In the experiments, the authors performed extensive experiments to demonstrate the effectiveness of the proposed method compared with several baseline methods and original models. It is clear to me that this method helps to reduce the memory cost and computation cost for both DNN training and inference.\n\n\n[Strengths]\n\n1. This paper addresses the computational burden in both memory and time from a more novel angle than previous network pruning methods. It can be applied to reduce the computation in both network training and inference, while also preserving the representation ability of the network.\n\n2. To enable network compression in training and inference, the authors proposed to mute the low-activated neurons so that the computations happen only on the selected neurons. \n\n3. For the selection, the authors proposed a simple but efficient dimension-reduction method, random sparse projection, to project the original activations and weights into a lower-dimensional space and compute the approximate response map in that lower-dimensional space, on which the selection is based.\n\n4. The authors performed comprehensive experiments to demonstrate the effectiveness of the proposed method for network compression. Those results are insightful and solid.\n\n[Questions]\n\n1. Is the sparsity of each layer the same across the whole network? It would be nice if the authors could perform some ablation studies on varied sparsity in different layers, maybe just with some heuristic methods, e.g., decreasing the sparsity from lower layers to upper layers. As the authors mentioned, higher sparsity causes a larger degradation on deeper networks. 
I am curious whether there is a better way to set the sparsity.\n\n2. During the training of the network, how do the activations evolve? It would be interesting to show how the selected activations change over the course of training for the same training sample. This might provide some insights on when the activations begin to converge to a stable state, and how this varies layer by layer. \n\n3. Following the above questions, is there any stage at which the sparsity can be fixed without further computation for selection? In general, training proceeds for a number of epochs. It would be nice if we could observe some convergence of the selected activations and then suspend the selection to save computation.\n\n[Conclusion]\n\nThis paper presents an interesting and novel approach for network pruning in both training and inference. Unlike most of the previous work, it prunes the activations in each layer through a dimension-reduction strategy. From the experiments, this method achieved an obvious improvement in reducing the memory and time cost of computation in the training and inference stages. I think this paper has opened a new direction for efficient deep neural networks.\n", "This manuscript introduces a computational method to speed up training and inference in deep neural networks: the method is based on dynamic pruning of the compute graph at each iteration of SGD to approximate computations with a sparse graph. To select which neurons can be zeroed and ignored at a given iteration, the approach computes approximate activations using random projections. The approach gives an overall decrease in run-time of 0.8 to 0.6. I believe that its largest drawback is that it does not lead to the same sparsity pattern in a full minibatch, and hence cannot be implemented using matrix-matrix multiplications (GEMM). As a result, the compute-time speed-ups are not huge, though the decrease in memory is important. In my eyes, this is the largest drawback of the manuscript: the total computational speed-up demonstrated is not fully convincing.\n\nThe manuscript is overall well written and easy to understand, though I wish that the authors employed fewer acronyms, which forced me to scan back as I kept forgetting what they mean.\n\nThe strength of the paper is that the solution proposed (dynamic approximation) is original and sensible. The limitation is that I am not sure that it can give significant speedups, because it is probably hard to implement in a way that uses the hardware well.\n\nQuestions and comments:\n\n1. Can the strategy contributed be implemented efficiently on GPUs? It would have been nice to have access to some code.\n\n2. Fig 8(b) is the most important figure, as it gives the overall convergence time. Is the \"dense baseline\" using matrix-vector operations (VMM) or mini-batched matrix-matrix operations (GEMM)?\n\n3. Can the method be adapted to choose a joint sparsity across a mini-batch? This would probably mean worse approximation properties but would enable the use of matrix-matrix operations.\n\n4. It is disappointing that figure 8 is only on VGG8, rather than across multiple architectures.\n\n5. The strategy of zeroing inputs of layers can easily create variance that slows down overall convergence (see Mensh TSP 2018 for an analysis of such a scenario). In stochastic optimization, there are various techniques to recover fast convergence. Do the authors think that such a scenario is at play here, and that similar variance-reduction methods could bring benefits?\n\n6. 
I could not find what results backed the numbers in the conclusion: the 2.3x speedup for training. Is this compared to VMM implementations? If so, it is not a good baseline. Is this for one iteration? If so, it is not what matters in the end.\n\n7. Is there a link between dropout and the contributed method, for instance if the sparsity were chosen fully at random? Can the contributed method have a regularizing effect?\n\n" ]
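A hedged sketch of the dimension-reduction search these reviews and responses revolve around, written for a single fully-connected layer (the responses note that a CONV layer reduces to many such vector-matrix products, one per sliding window). A shared random projection compresses both the weight rows and the input, approximate pre-activations are ranked in the low-dimensional space, and only the winning neurons are computed exactly. The names and the fixed Gaussian projection are illustrative assumptions, not the authors' code.

import numpy as np

def dsg_layer(x, W, b, R, sparsity=0.5):
    # x: (n_in,) input; W: (n_out, n_in); b: (n_out,);
    # R: (k, n_in) random projection with k << n_in, fixed across iterations.
    # By Johnson-Lindenstrauss, <R w, R x> approximates <w, x> for scaled R.
    x_low = R @ x                 # (k,) compressed input
    W_low = W @ R.T               # (n_out, k); in practice cached, not rebuilt per call
    approx = W_low @ x_low        # cheap estimates of the pre-activations
    n_keep = max(1, int(round(W.shape[0] * (1.0 - sparsity))))
    idx = np.argsort(approx)[-n_keep:]   # neurons with the largest estimates
    y = np.zeros(W.shape[0])
    y[idx] = W[idx] @ x + b[idx]  # exact compute only for the selected neurons
    return np.maximum(y, 0.0)     # ReLU; unselected neurons stay at zero

# Example setup (assumed shapes): R = np.random.randn(64, 1024) / np.sqrt(64) for n_in = 1024.

Keeping R fixed is also what makes inference deterministic, as the authors confirm above, and skipping whole unselected rows of W is the vector-wise regular sparsity that the GEMM discussion is about.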
[ 7, -1, -1, -1, -1, 8, 7 ]
[ 2, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2019_H1goBoR9F7", "BJWZDiT2TX", "Syeqkc-Dhm", "ByllM5Etn7", "rJegBoSsnX", "iclr_2019_H1goBoR9F7", "iclr_2019_H1goBoR9F7" ]
iclr_2019_H1gsz30cKX
Fixup Initialization: Residual Learning Without Normalization
Normalization layers are a staple in state-of-the-art deep neural network architectures. They are widely believed to stabilize training, enable higher learning rate, accelerate convergence and improve generalization, though the reason for their effectiveness is still an active research topic. In this work, we challenge the commonly-held beliefs by showing that none of the perceived benefits is unique to normalization. Specifically, we propose fixed-update initialization (Fixup), an initialization motivated by solving the exploding and vanishing gradient problem at the beginning of training via properly rescaling a standard initialization. We find training residual networks with Fixup to be as stable as training with normalization -- even for networks with 10,000 layers. Furthermore, with proper regularization, Fixup enables residual networks without normalization to achieve state-of-the-art performance in image classification and machine translation.
accepted-poster-papers
The paper explores the effect of normalization and initialization in residual networks, motivated by the need to avoid exploding and vanishing activations and gradients. Based on some theoretical analysis of stepsizes in SGD, the authors propose a sensible and effective way of initializing a network that greatly increases training stability. In a nutshell, the method comes down to initializing the residual layers such that a single step of SGD results in a change in activations that is invariant to the depth of the network. The experiments in the paper provide supporting evidence for the benefits; the authors were able to train networks up to 10,000 layers deep. The experiments have sufficient depth to support the claims. Overall, the method seems to be a simple but effective technique for learning very deep residual networks. While some aspects of the method have been used in earlier work, such as initializing residual branches to output zeros, these earlier methods lacked the rescaling aspect, which seems crucial to the performance of this network. The reviewers agree that the paper provides interesting ideas and significant theoretical and empirical contributions. The reviewers' main concerns were addressed by the author responses. The AC finds that the remaining concerns raised by the reviewers are minor and insufficient for rejection of the paper.
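The depth-invariance intuition in this meta-review can be made concrete with the variance bookkeeping that also appears in the review further below; this is a back-of-the-envelope reading of the argument, not a quote from the paper. For a residual block x_{l+1} = x_l + F_l(x_l) with an (approximately) independent branch, Var[x_{l+1}] = Var[x_l] + Var[F_l(x_l)]. If each branch is rescaled so that Var[F_l(x_l)] = Var[x_l] / L, then after L blocks

Var[x_L] = (1 + 1/L)^L * Var[x_0] <= e * Var[x_0],

so the forward variance stays O(1) at any depth, whereas under standard initialization Var[F_l(x_l)] is on the order of Var[x_l] itself and the product grows exponentially in L.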
val
[ "HJlQe0YjA7", "rJlrkyKsRm", "Skgn80Is0X", "r1gl9tKq07", "BJl7K-FcCm", "r1gTIYutnX", "rJxL-c1V0m", "BJe83JIMC7", "SygoSH1b0m", "HJxywPsqaQ", "BJxPwIScaX", "B1lgruiDa7", "HJgkrvsDpm", "rkgrdHowTm", "BJeS4HjvpX", "r1xnufswaX", "B1xtAbjwaX", "SJlbdxjva7", "H1e_BZjmpQ", "H1eV6OUGpQ", "SyeHla6K37", "Skl714KOnX" ]
[ "author", "author", "public", "author", "public", "official_reviewer", "official_reviewer", "author", "public", "author", "public", "author", "author", "author", "author", "author", "author", "author", "public", "public", "official_reviewer", "official_reviewer" ]
[ "Thanks for your comments!\n\nOne of the authors here. I think you raised interesting questions in the first part, but am not sure what you mean exactly there. Am I correct that you would like to:\n(1) see the result of a standard ResNet (i.e. with batch normalization layers) if we initialize the last gamma in each residual branch as 0;\nand (2) know if we can (or why we cannot) train a residual network with standard initialization and no normalization by setting eta as 1/L?\n\nRegarding the second part, indeed Yang & Schoenholz (2017) provide a more detailed characterization of the gradient norms and other quantities, which we very much appreciate. By \"generality\" we mean our analysis in Section 2 applies to different weight initialization schemes (e.g. not necessarily i.i.d.; can even be data-dependent) except for the i.i.d. assumption on the last fully-connected layer, whereas previous work typically assumes some particular initialization scheme (e.g. Yang & Schoenholz (2017) studied i.i.d. Gaussian weight initialization).\n\nOn the other hand, our result in Section 2 does have limitations compared with Yang & Schoenholz (2017), in that it is a lower bound of gradient norm for certain layers. While it explains why gradient explosion happens in standard initialization, it does not tell us when gradient explosion is guaranteed to NOT happen, which is addressed in Yang & Schoenholz (2017) (though with additional assumptions).\n\nThat said, the main message we hope to convey (in Section 3 and Appendix B) is that when studying multi-layer neural networks, it may be more important to think about the scale of function update than the scale of gradients (though of course they are related). Similar analysis for multi-layer linear networks is present in e.g. (Arora et al., 2018); and the study of maximal stable learning rate in (Saxe et al., 2013) may be another related finding. We believe this is a good way to study the optimization of deep neural networks.\n\nArora, S., Cohen, N., & Hazan, E. (2018). On the optimization of deep networks: Implicit acceleration by overparameterization. arXiv preprint arXiv:1802.06509.\nSaxe, A. M., McClelland, J. L., & Ganguli, S. (2013). Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv preprint arXiv:1312.6120.", "We agree. We will place this bias module inside the residual branch in the next revision.\n\nAlso thank you for noting this detail -- should definitely be corrected :)", "Agreed, the correspondence is clearer when the bias is drawn in the residual branch instead of after the +. I saw that you just revised the manuscript, but you could consider making this change as well (since there is no real reason to draw it after the + instead of in the residual branch).\n\nAlso, as a minor comment, the \"√L\" in the diagram (\"scaled down by √L\") is a different color and font than the \"scaled down by.\"", "Dear AC and anonymous reviewers,\n\nThanks for your helpful comments and suggestions! We have significantly revised the justification text of our method based on your feedback. While our method, experiments and existing analysis remain valid, we have added new results that we believe are worth noting:\n\n(1) We provide a top-down analysis for motivating the proposed method (see Section 3). 
We have rewritten Section 3 and believe it now offers convincing justifications for our empirical success.\n(2) To support (1), we derive two new theorems (see Appendix B) which we believe shed new light on the understanding of neural network training.\n(3) We add an ablation study section (see Appendix C.1) to show that each part of the proposed method plays a role in the overall performance.\n(4) We rewrite the related work section based on the feedback we have received since the original submission. In particular, we (i) explain the difference between ZeroInit and normalization methods, (ii) compare our analysis in Section 2 with previous theoretical work, and (iii) compare our proposed method with previous ResNet initializations in practice.\n(5) Empirical results on Transformer are slightly improved (see Table 3). We also include ResNet-101 results on ImageNet (see Table 2).\n\nThanks again for your attention! We are happy to take any questions.", "Hi, thanks for your response.\n\nRegarding gamma=0 BN networks, I agree there is some theoretical motivation for your method compared to the Goyal et al. method. However, I would still be very curious to see the result of comparing to gamma=0 BN networks empirically, i.e. repeat your suite of tests with the standard resnet but just initialize BN gamma = 0. Also, if your analysis is correct, that there can be problems if eta and L are both large, then why can't one just scale eta as 1/L, at least initially in training?\n\nRegarding your comments on Yang & Schoenholz (2017): Correct me if I'm wrong, but the \"Axiom 3.1\" of that paper seems only assumed for nice presentation. \"Axiom 3.2\" (gradient independence) indeed seems unreasonable a priori, but as demonstrated in many papers by now (Schoenholz et al. 2017, Xiao et al. 2018, Karakida et al. 2018, Amari et al. 2018, and so on), this assumption leads to highly accurate predictions of gradient norms and other quantities. So while I agree you do not assume certain things in your paper, you also do not get predictions of the mean gradient norms and other quantities that can be verified. Thus claiming \"generality\" in this scenario seems misleading. In terms of measuring and correcting for gradient explosion, for example, I would think it's much better to get mean predictions of gradient norms rather than bounds which could be vacuous.\n\nSchoenholz, Gilmer, Ganguli, Sohl-Dickstein 2017. Deep Information Propagation\nXiao, Bahri, Sohl-Dickstein, Schoenholz, Pennington 2018. Dynamical Isometry and a Mean Field Theory of CNNs: How to Train 10,000-Layer Vanilla Convolutional Neural Networks\nKarakida, Akaho, Amari 2018. Universal Statistics of Fisher Information in Deep Neural Networks: Mean Field Approach\nAmari, Karakida, Oizumi 2018. Fisher Information and Natural Gradient Learning of Random Deep Networks", "Summary: \nA method is presented for initialization and normalization of deep residual networks. The method is based on interesting observations regarding forward and backward explosion in such networks with the standard Xavier or (He, 2015) initializations. 
Experiments with the new method show that it is able to learn with very deep networks, and that its performance is on a par with the best results obtained by other networks with more explicit normalization.\nAdvantages:\n-\tThe paper includes interesting observations, resulting in two theorems, which show the sensitivity of traditional initializations in residual networks.\n-\tThe method presented seems to work comparably to other state-of-the-art initialization + normalization methods, providing overall strong empirical results. \nDisadvantages:\n-\tThe authors claim to suggest a method without normalization, but the claim is misleading: the network has additive and multiplicative normalization nodes, and their function and placement are at least as ‘mysterious’ as the role of normalization in methods like batch and layer normalization.\no\tThis significantly limits the novelty of the method: it is not ‘an initialization’ method, but a combination of initialization and normalization, which differs from previous ones in some details. \n-\tThe method includes 3 components, of which only one is justified in a principled manner. The other components are justified neither by an argument nor by experiments. Without such experiments, it is not clear what actually works in this method, and what is not important.\n-\tThe argument for the ‘justified’ component is not entirely clear to me. The main gist is fine, but important details are not explained, so I could not get the entire argument step-by-step. This may be a clarity problem, or may indicate a deeper problem of arbitrary decisions made without justification – I am not entirely sure. Such lack of clear argumentation occurs in several places.\n-\tExperiments isolating the contribution of the method with respect to traditional initializations are missing (for example: experiments on Cifar10 and SVHN showing the results of traditional initializations with all the bells and whistles (cutout, mixup) that ZeroInit gets).\n\nMore detailed comments:\nPage 3:\n-\tWhile I could follow the general argument before eq. 2, leading to the conclusion that the initial variance in a resnet explodes exponentially, I could not understand eq. 2. What is its justification and how is it related to the discussion before it? I think it requires some argumentation.\nPage 4:\n-\tI did not understand example 2) for a p.h. set. I think an argument, a reminder of the details of resnet, or a figure is required.\n-\tI could not follow the details of the argument leading to the zeroInit method:\no\tHow is the second design principle “Var[F_l(x_l)] = O(1/L)” justified?\nAs far as I can see, having Var[F_l(x_l)] = 1/L will lead to output variance of (1+1/L)^L =~ e, which is indeed O(1). Is this the argument? If yes, why wasn’t it stated? Also: why not smaller than O(1/L)?\no\tFollowing this design principle several unclear sentences are stated:\n\tWe strive to make Var[F_l(x_l)] = 1/L, yet we set the last convolutional layer in the branch to 0 weights. Doesn’t that set Var[F_l(x_l)] = 0, in contradiction to the 1/L requirement?\n\t“Assuming the error signal passing to the branch is O(1),” – what does the term “error signal” refer to? How is it defined? Do you refer to the branch’s input?\n\tI understand why the input to the m-th layer in the branch is O(\\Lambda^{m-1}) if the branch input is O(1), but why is it claimed that “the overall scaling of the residual branch after update is O(\\lambda^(2m-2))”? 
what is ‘the overall scaling after update’ (definition) and why is it the square of the forward scaling?\n-\tThe ZeroInit procedure step 3 is not justified by any argument in the preceding discussion. Is there any reason for this policy? Or was it found by trial and error, so that it is currently unjustified theoretically (and justified empirically instead)? This issue should be clearly elaborated in the text. Note that the addition of trainable additive and multiplicative elements inserts the normalization back, while it was claimed to be eliminated. If I understand correctly, the ‘zeroInit’ method is hence not based on initialization (or at least: not only on initialization), but on another form of normalization, which is not more justified than its competitors (in fact it is even more mysterious: why should we need an additive bias before every element in the network?)\nPage 5:\n-\tWhat is \\sqrt(1/2) scaling? It should be defined or given a reference.\nPage 6:\n-\tIt is not stated on what data set figure 2 was generated.\n-\tIn table 2, for Cifar-10 the comparison between Xavier init and zeroInit shows only a small advantage for the latter. For SVHN such an experiment is completely missing, and should be added.\no\tIt raises the suspicion that the good results obtained with zeroInit in this table are only due to the CutOut and mixup used; that is: maybe such results could be obtained with CutOut+Mixup without zero init, using plain Xavier init? Experiments clarifying this point are also missing.\nAdditional missing experiments:\n-\tIt seems that ZeroInit includes 3 ingredients (according to the box on page 4), among which only one (number 2) is roughly justified from the discussion. Step 1) of zeroing the last layer in each branch is not justified – why are we zeroing the last layer and not the first, for example? Step 3 is not even discussed in the text – it appears without any argumentation. For such steps, empirical evidence should be brought, and experiments doing this are missing. Specifically, experiments of interest are:\no\tUsing zero init without its step 3: does it work? The theory says it should.\no\tUsing only step 3 without steps 1,2. Maybe only the normalization is doing the magic?\nThe paper is longer than 8 pages.\n\nI have read the rebuttal.\nRegarding normalization: I think that there are at least two reasonable meanings of the word 'normalization': in the wider sense it just means a mechanism for subtracting a global constant (additive normalization) and dividing by a global constant (multiplicative normalization). In this sense the constant parameters can be learnt in any way. In the narrow sense the constants have to be statistics of the data. I agree with the authors that their method is not normalization in sense 2, only in sense 1. Note that keeping the normalization in sense 1 is not trivial (why do we need these normalization operations? at least for the multiplicative ones, the network has the same expressive power without them). I think the meaning of normalization should be clearly explained in the claim of 'no normalization'.\nRegarding additional mathematical and empirical justifications required: I think such justifications are missing in the current paper version and are not minor or easy to add. I believe the work should be re-judged after re-submission of a version addressing the problems.", "Many thanks for the rebuttal. After reading this and the other reviews, I'd be inclined to keep my score at \"accept\". ", "Thanks for asking! 
Yes, we will release the code after the review period.", "Will you release the code for this paper? This would be helpful for reproducibility.", "Thanks for asking!\n\nIt may appear as if we are doing a reordering, but in fact the right of Figure 1 makes two changes to the middle of Figure 1:\n\n(1) Deleting extra multiplier(s) so that there is only one multiplier per residual branch. This is because the effect of two (or more) multipliers is similar to that of one multiplier, which is to influence the effective learning rate of the conv layers in the same branch.\n\n(2) Adding a bias before each conv layer (i.e. changing ReLU-Conv to ReLU-Bias-Conv). The intuitive justification is that the preferred input mean of the conv layer may be different from the preferred output mean of the ReLU; hence a bias parameter allows for more representation power to satisfy both preferences. This is similar to why a bias term is added before ReLU (e.g. in standard feed-forward networks, the Conv-BN-ReLU module, as well as our Conv-Bias-ReLU module).\n\nFor additional justification of (2), also note that there are debates about whether Conv-BN-ReLU or Conv-ReLU-BN is better in practice [1]; on the other hand, in [2, Figure 6 (d)] the authors find the best-performing residual branch to be \"BN-Conv-BN-ReLU-Conv-BN\". It may appear that the conclusion to draw from [2] is that one should use \"more batchnorm and less relu\" [3]. However, if we remove the normalization layers in \"BN-Conv-BN-ReLU-Conv-BN\" and delete extra multipliers as per (1), we are left with:\n\"Bias-Conv-Bias-ReLU-Conv-Multiplier-Bias\", \nwhich is indeed very similar to what we proposed in the right of Figure 1:\n\"Bias-Conv-Bias-ReLU-Bias-Conv-Multiplier-Bias\".\n\n------------------\nA side remark: when comparing the middle and right of Figure 1, it may be helpful to switch the \"bias\" after the \"+\" into the residual branch, i.e. after the \"multiplier\", as the correspondence is easier to see this way and the two computation graphs are mathematically equivalent.\n\n------------------\nReferences:\n[1] Batch Normalization before or after ReLU? https://www.reddit.com/r/MachineLearning/comments/67gonq/d_batch_normalization_before_or_after_relu/\n[2] Han, D., Kim, J., & Kim, J. (2017). Deep pyramidal residual networks. CVPR.\n[3] Andrej Karpathy. https://twitter.com/karpathy/status/827644920143818753?lang=en", "Why were the biases and multipliers re-ordered, and one multiplier replaced with a bias (as in Figure 1)? The use of the architecture on the right of Figure 1 still has not been justified over the (seemingly more natural) architecture in the middle of Figure 1.", "Hi, thanks for your interest and pointer to related work! We believe that both our method and the theoretical analysis contain substantial novelty. \n\nA comparison with the gamma=0 alternative:\n\nFor the batchnorm implementation, as the other comment pointed out, the suggestion of setting gamma=0 in the last batchnorm dates back at least to (Goyal et al., 2017). We agree that it is a great observation. However, setting gamma=0 for the last batchnorm is not sufficient for training without using a normalization method. As we explain in the paper, setting only the residuals to zero (Step 1 of our method) will still result in explosion after a few steps. 
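To make this failure mode concrete, here is a toy scalar simulation (our own illustration for this discussion, not code from the paper; the depth, learning rate, and regression target are arbitrary choices):

```python
# Scalar ResNet x_{l+1} = x_l + b_l * x_l with every b_l initialized to 0
# (the analogue of zeroing only the residuals / setting gamma=0). One SGD
# step gives all L blocks the same O(eta) update, after which the output
# scale is (1 + eta*c)^L, which is enormous when L is large and eta is not small.
import numpy as np

L_blocks, eta = 1000, 0.1
x0, y = 1.0, 3.0                      # input and regression target
b = np.zeros(L_blocks)

# With b = 0 every block is the identity, so x_L = x0 and, for the loss
# 0.5 * (x_L - y)^2, d loss / d b_l = (x_L - y) * x_l = (x0 - y) * x0.
grad = (x0 - y) * x0                  # identical for every block
b -= eta * grad                       # correlated O(eta) update: b_l = 0.2

print(x0 * np.prod(1.0 + b))          # ~1.2**1000, i.e. ~1e79: exploded
```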
This is why our method requires Step 2 to lead to reliable convergence in all cases we tested.\n\nWe summarize some key differences in the following, and also provide a detailed account of why the alternative method of setting gamma=0 would not work. For further information, please also refer to our reply to AnonReviewer1.\n\nThe critical insight for our design is that we would like to ensure the norm of the update to each residual branch function to be O(eta/L) per step, where eta is the maximal learning rate and L is the number of residual branches, hence ensuring the logits do not blow up after O(1/eta) steps. As we show in the updated version, a scalar ResNet model may help understand the argument.\n\nStep 2, combined with Step 1, ensures each SGD step updates the residual branch function by O(eta/L) so that the whole network is updated by O(eta). This is the most important component of our method and also distinguishes it from all previous work.\n\nFor example, suppose the affine layers in batchnorm are preserved while the normalization layers are removed, and suppose we set gamma=0 in the last affine layer of each residual branch. What will happen in the first SGD update? By the chain rule and Kaiming initialization, one can show that the gamma(s) in the last affine layer of each residual branch will get an update of O(eta), whereas the other layers in the residual branch get no updates. It then follows that each residual branch is a function of scale O(eta) after the first SGD update. Furthermore, we can show that all the residual branches are highly correlated after one update, resulting in output logits of O(1 + eta*L) scale, which leads to gradient explosion if L is large and eta is not small, as shown in our analysis.\n\nA comparison with related theoretical work:\n\nFirst, we thank you for bringing (Yang & Schoenholz 2017) to our attention. We appreciate the depth and mathematical skill demonstrated in both works, and agree that our analysis does not apply to arbitrary activation functions. That said, we would like to emphasize that our analysis excels in three aspects when compared with related work: it is general, realistic, and simple. We explain below:\n\nGenerality:\n\nWe only make two assumptions: (1) positive homogeneity and (2) the weight distribution of the fully-connected layer. No other assumptions about the network structure are made (in particular, our analysis applies to (i) both the basic residual block and the bottleneck residual block; (ii) both the original version and the pre-activation version). No assumption about the distribution of other weights is made (in particular, our analysis applies to orthogonal initialization as well as data-dependent initialization).\n\nIn contrast, Yang & Schoenholz (2017) only analyzed what they called the \"reduced residual network\" and the \"full residual network\", both of which contain only one activation function per residual branch and hence do not apply to the usual 2-layer block or the bottleneck structure. Their analysis also requires both (Axiom 3.1) symmetry of activations and gradients and (Axiom 3.2) gradient independence. 
Finally, their analysis does not include convolutional layers, which are a crucial element of practical networks.\n\nRealism:\n\nWith our general and mild assumptions, our analysis directly applies to the models and algorithms people implement.\n\nIn contrast, the gradient independence assumption (Axiom 3.2) in (Yang & Schoenholz 2017) requires the forward and backward processes to be fully decoupled, which is not the case for the networks that are used in practice.\n\nSimplicity:\n\nIn addition to applying to real-world networks, our proof technique is simple and only involves basic probability and calculus. Our proof is less than one page long. In contrast, the proofs in (Yang & Schoenholz 2017) involve intricate algebraic manipulations and advanced math topics such as mean field theory, and often span multiple pages.\n\nWe would like to note that by all means we sincerely respect the works of (Yang & Schoenholz 2017) and (Hanin & Rolnick 2018), and will discuss their contributions in the revised paper. On the other hand, we also believe that simple and general theories such as our analysis are good things to have and to build upon.", "Hi, thanks for your interest and pointer to related work! Goyal et al. (2017) made a great observation; however, setting gamma=0 for the last batchnorm is not sufficient for training without using a normalization method. As we explain in the paper, setting only the residuals to zero (Step 1 of our method) will still result in explosion after a few steps. This is why our method requires Step 2 to lead to reliable convergence in all cases we tested.\n\nWe summarize some key differences in the following, and also provide a detailed account of why the alternative method of setting gamma=0 would not work. For detailed justifications about Steps 1 & 2, please refer to our \"general reply (2)\" to AnonReviewer1.\n\nThe critical insight for our design is that we would like to ensure the norm of the update to each residual branch function to be O(eta/L) per step, where eta is the maximal learning rate and L is the number of residual branches, hence ensuring the logits do not blow up after O(1/eta) steps. As we show in the updated version, a scalar ResNet model may help understand the argument.\n\nStep 2, combined with Step 1, ensures each SGD step updates the residual branch function by O(eta/L) so that the whole network is updated by O(eta). This is the most important component of our method and also distinguishes it from all previous work.\n\nWhy simply setting gamma=0 does not work:\n\nSuppose the affine layers in batchnorm are preserved while the normalization layers are removed, and suppose we set gamma=0 in the last affine layer of each residual branch. What will happen in the first SGD update? By the chain rule and Kaiming initialization, one can show that the gamma(s) in the last affine layer of each residual branch will get an update of O(eta), whereas the other layers in the residual branch get no updates. It then follows that each residual branch is a function of scale O(eta) after the first SGD update. Furthermore, we can show that all the residual branches are highly correlated after one update, resulting in output logits of O(1 + eta*L) scale, which leads to gradient explosion if L is large and eta is not small, as shown in our analysis.", "Dear AnonReviewer3, thank you for your encouraging review. 
We totally agree with your comments.\n\nA side note to your question: our experiments show that with standard data augmentation, the regularization effect of batch normalization can bring about a 0.5% improvement in test accuracy on CIFAR-10, but we hypothesize that some advanced regularization methods (such as ShakeDrop or DropBlock) could also make up for this gap.\n\n- References:\n[1] Yamada, Y., Iwamura, M., & Kise, K. (2018). ShakeDrop regularization. arXiv preprint arXiv:1802.02375.\n[2] Ghiasi, G., Lin, T. Y., & Le, Q. V. (2018). DropBlock: A regularization method for convolutional networks. arXiv preprint arXiv:1810.12890.", "Dear AnonReviewer2, we appreciate your encouraging review and valuable suggestions. We hope to address your questions below:\n\n1. The reviewer hopes to know if \"previous contributions from the literature\" have similar concepts. \n\nWe listed the related work we knew of by the time of paper submission. After submission, we did find more related work. Indeed, some previous works propose to initialize the residual branches in a way such that the network output variance is independent of depth, which is a necessary but not sufficient condition for training very deep residual networks, as we show in the updated version.\n\nHowever, none of the related work observes that the residual branches should be initialized in a way such that their update is O(eta/L) per SGD step, where eta is the maximal global learning rate and L is the total number of residual branches. This ensures the network has an update of O(eta) per SGD step, which we find is a sufficient condition for training to proceed as fast as batch normalization.\n\n2. The reviewer has not found a \"convincing argument against the use of batch normalization\".\n\nEven if a practitioner continues to use batch normalization, we argue that this work helps understand how BatchNorm improves training.\n\nMoreover, for several tasks batch normalization is not applicable, or at least not preferable. Our method holds promise in many of these different tasks. For example, batch normalization is not used in many natural language tasks, where the state-of-the-art models use layer normalization (Vaswani et al., 2017), whereas we show our method can match or supersede its performance. In image super-resolution, it has recently been shown that training without batch normalization improves performance (Lim et al., 2017); our method could possibly help achieve further improvement. In image style transfer, instance normalization is currently the standard technique (Ulyanov et al., 2016; Zhu et al., 2017); our method could possibly help as well. In the semantic segmentation task, although batch normalization is found useful, its batch-size requirement puts a severe constraint on the model size and the parallelizability of training, resulting in a heavy burden of cross-GPU communication (Peng et al., 2017); hence using ZeroInit in combination with other regularization may be preferable. In image classification problems, current evidence is still in favor of batch normalization; however, as our method removes the necessity of using batch normalization in training and exposes the severe overfitting problem, future exploration of regularization methods that supersede batch normalization is possible. \n\nReferences:\n[1] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł. and Polosukhin, I., (2017). Attention is all you need. In Advances in Neural Information Processing Systems (pp. 
5998-6008).\n[2] Lim, B., Son, S., Kim, H., Nah, S., & Lee, K. M. (2017, July). Enhanced deep residual networks for single image super-resolution. In The IEEE conference on computer vision and pattern recognition (CVPR) workshops (Vol. 1, No. 2, p. 4).\n[3] Dmitry Ulyanov, Andrea Vedaldi, Victor Lempitsky. (2016). Instance Normalization: The Missing Ingredient for Fast Stylization\n[4] Zhu, J. Y., Park, T., Isola, P., & Efros, A. A. (2017). Unpaired image-to-image translation using cycle-consistent adversarial networks. arXiv preprint.\n[5] Peng, C., Xiao, T., Li, Z., Jiang, Y., Zhang, X., Jia, K., ... & Sun, J. (2017). Megdet: A large mini-batch object detector. arXiv preprint arXiv:1711.07240, 7.", "Page 3:\nEq. 2 essentially restates the reasoning and conclusion before it in a mathematical way. It can be derived by calculating the variance of both the LHS and RHS of Eq. 1 and applying the independence assumption. The second equality can be shown by mathematical induction. We will clarify this in the updated version.\n\nPage 4:\n- Thanks, we will add a figure to clarify each p.h. set example.\n- Yes, the fact \"(1+1/L)^L =~ e\" is exactly why we would like the update of each residual branch, rather than Var[F_l(x_l)], to be O(eta/L). Thanks for asking; we will correct this in the updated version.\n- By \"error signal\" we mean the partial derivative of the loss function w.r.t. a layer. This term is used in e.g. (Schraudolph, 1998), but we now realize it is not clear. We will clarify its meaning.\n- Thanks for asking -- this is central to understanding our method. Please refer to our new analysis justifying Steps 1 & 2 above.\n- Please see the above general reply for justifications of step 3. Once again, we emphasize that our method is an initialization with minimal network components for achieving state-of-the-art performance, and contains no normalization operation.\n\nPage 5:\n- \"\\sqrt(1/2) scaling\" is rescaling the activations by \\sqrt(1/2) after each block. It is proposed as a possible remedy for ResNet without batch normalization in (Balduzzi et al., 2017).\n\nPage 6:\n- The dataset is CIFAR-10, as stated in the figure caption.\n- While the difference in end performance between the two initializations is not huge (7% relative improvement for the median of 5 runs), we note that there is a substantial difference in the difficulty of training. The network with ZeroInit is trained with the same learning rate and converges as fast as a network trained with batch normalization, while we fail to train a Xavier-initialized ResNet-110 with 0.1x the maximal learning rate. Personal communication with the authors of (Shang et al., 2017) confirms our observation, and reveals that the Xavier-initialized network needs more epochs to converge.\n- Cutout and Mixup both contribute to the final performance in the CIFAR and SVHN experiments, as they likely supersede the regularization benefits of batch normalization. However, training with Xavier initialization cannot generalize as well, mainly because a substantially smaller learning rate has to be used to stabilize training, which in turn hurts generalization. We empirically validate this claim in the updated version.\n- We answered these questions in the general reply. In short, which layer we zero does not matter; training without step 3 works (though a bit worse). Using step 3 alone will not work due to incorrect scaling of the updates. We will add these experiments in the appendix.\n\n- References:\n[1] Schraudolph, N. N. (1998). 
Centering neural network gradient factors. In Neural Networks: Tricks of the Trade (pp. 207-226). Springer, Berlin, Heidelberg.\n[2] Balduzzi, D., Frean, M., Leary, L., Lewis, J. P., Ma, K. W. D., & McWilliams, B. (2017). The Shattered Gradients Problem: If resnets are the answer, then what is the question? arXiv preprint arXiv:1702.08591.", " -- The reviewer thinks that among the 3 components of ZeroInit, only Step 2 is justified in a principled manner. Step 1 and Step 3 are not justified by an argument or experiments.\n\nWe will clarify the justification for each step in the paper. We hope you will find the following explanation helpful in understanding the effects and importance of each component. These improvements and new ablation experiments will appear in the revised paper.\n\nSummary: Step 2, combined with Step 1, ensures each SGD step updates the residual branch function by O(eta/L) so that the whole network is updated by O(eta). This is the most important component of our method and also distinguishes it from all previous work. Step 3 is indeed not essential for training, but the bias parameters (empirically) create a better loss landscape, and the multipliers help us avoid tuning the global learning rate schedule.\n\nWe now provide further in-depth justifications for each of the above arguments.\n\nSteps 1 & 2:\n\nOn one hand, as explained in the paper, initializing the residual branches to 0 prevents them from exploding and minimizes the lower bound of the gradient in Theorem 2. On the other hand, 0 initialization helps Step 2 limit the norm of the update of the residual branches to O(eta/L), as we now explain:\n\nConsider a residual branch with m layers; our goal is to derive the correct scaling for these layers, so that the residual branch is updated by O(eta/L) per gradient step. For simplicity, we assume the network is a composition of scalar functions (i.e. the input, output and hidden layers are all scalars), and there is no activation function. The residual branch can therefore be written as:\n\nF(x) = a_1 * ... * a_m * x\n\nwhere x is the input to this residual branch, and a_1, ..., a_m are nonnegative scalars (think of them as rescalings of the default initialization). Furthermore, we denote the gradient of the objective function w.r.t. F(x) as g. It is then easy to show that the gradient w.r.t. a_i is g * F(x) / a_i. Now if we perform a gradient descent update with step size eta, and calculate the update to F(x) using a first-order approximation w.r.t. eta, we get:\n\n\\Delta F(x) =~ - eta * g * (F(x))^2 * ((1/a_1)^2 + ... + (1/a_m)^2)\n\nNote that we would like the scale of \\Delta F(x) to be O(eta/L). Assuming g is O(1), it then follows that the scale of M = (1/a_1)^2 + ... + (1/a_m)^2 should be O(1/(L * (F(x))^2)). Let A = min_i {a_i}; we have (1/A)^2 <= M <= m * (1/A)^2. Put together, we arrive at A = O(sqrt{L} * F(x)). We hence finally get the desired design constraints:\n\n(I.) A = min_i {a_i},\n(II.) F(x) / A = O(1/sqrt{L})\n\nIn sum, with (I.) and (II.) satisfied and assuming g is O(1), we can ensure the update of F(x) is O(eta/L), and hence the update of the overall network is O(eta).\n\nA simple and natural design to satisfy these constraints is our Step 1 and Step 2. 
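As a quick sanity check of the first-order formula above, the following sketch (our own illustration; the constants m, x, g, and eta are arbitrary choices) compares the exact change of F(x) after one SGD step against \Delta F(x) =~ - eta * g * (F(x))^2 * sum_i (1/a_i)^2:

```python
# Numerical check of the first-order update formula for the scalar
# residual branch F(x) = a_1 * ... * a_m * x discussed above.
import numpy as np

rng = np.random.default_rng(0)
m, x, g, eta = 3, 1.5, 0.7, 1e-4     # g = dLoss/dF(x), assumed O(1)
a = rng.uniform(0.5, 2.0, size=m)    # branch weights (all nonzero here)

F = np.prod(a) * x
a_new = a - eta * g * F / a          # SGD step: dLoss/da_i = g * F / a_i
dF_exact = np.prod(a_new) * x - F
dF_approx = -eta * g * F**2 * np.sum(1.0 / a**2)
print(dF_exact, dF_approx)           # agree up to O(eta^2) terms
```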
Furthermore, setting A to 0 (Step 1) has the additional benefit that each residual branch doesn't need to \"unlearn\" its random initial state, so that training proceeds faster in the first few epochs.\n\nStep 3: \n\nUsing biases in the linear and convolution layers has been common practice throughout the history of neural networks. In normalization methods, bias and scale parameters are typically used to restore the representation power after normalization. For example, in batch normalization the gamma and beta parameters are used to affine-transform the normalized activations per channel.\n\nStep 3 is the simplest design that provides representation power similar to affine layers. Our design is a substantial simplification of the common practice, in that we only introduce O(K) parameters beyond conv and linear weights (note that our conv and linear layers do not have biases), whereas the common practice includes O(KC) (e.g. batch normalization and weight normalization) or O(KCWH) (e.g. layer normalization) additional parameters, where K is the number of layers, C is the max number of channels per layer, and W, H are the spatial dimensions of the largest feature maps.\n\nFinally, it is important to note that the bias and multiplier parameters are not essential for training to proceed -- without them the training still works, even with 10,000 layers, albeit with suboptimal performance.\n", "Dear AnonReviewer1, we thank you for the very detailed review, and find it valuable for improving the writing of our paper in an updated version. We are happy to hear that you find our observations interesting, and our empirical results strong. \n\nRegarding your concerns:\n------------------------------\n\n -- The reviewer seems to think our method is \"a combination of initialization and normalization\".\n\nThe proposed method does not use any normalization, so we believe there is a misunderstanding, either about the method, or about what is commonly regarded as normalization.\n\nWe do not divide any neural network component by its statistics, nor do we subtract the mean from any activations. In fact, with our method there is **no computation of statistics (mean, variance or norm) at initialization or during any phase of training**.\n\nIn sharp contrast, all normalization methods for training neural networks explicitly normalize (i.e. standardize) some component (activations or weights) by dividing activations or weights by a number computed from their statistics and/or subtracting an activation statistic (typically the mean) from the activations.\n\nTo elaborate, we provide a brief historical background on normalization techniques. The first use of such ideas and terminology in modeling the visual system dates back at least to Heeger (1992) in neuroscience and to Pinto et al. (2008) and Lyu & Simoncelli (2008) in computer vision, where each neuron output is divided by the sum (or norm) of all of the outputs, a module called divisive normalization. Recent popular normalization methods, such as local response normalization (Krizhevsky et al., 2012), batch normalization (Ioffe & Szegedy, 2015) and layer normalization (Ba et al., 2016), mostly follow this tradition of dividing the neuron activations by certain of their summary statistics, often also with the activation mean subtracted. 
An exception is weight normalization (Salimans & Kingma, 2016), which instead divides the weight parameters by their statistics, specifically the weight norm; weight normalization also adopts the idea of activation normalization for weight initialization. The recently proposed actnorm (Kingma & Dhariwal, 2018) removes the normalization of weight parameters, but still uses activation normalization to initialize the affine transformation layers.\n\nTherefore, our method is substantially different from all aforementioned techniques, and should not be regarded as being close to a normalization method.\n\n- References:\n[1] Heeger, D. J. (1992). Normalization of cell responses in cat striate cortex. Visual neuroscience, 9(2), 181-197.\n[2] Pinto, N., Cox, D. D., & DiCarlo, J. J. (2008). Why is real-world visual object recognition hard? PLoS computational biology, 4(1), e27.\n[3] Lyu, S., & Simoncelli, E. P. (2008). Nonlinear image representation using divisive normalization. In IEEE Conference on Computer Vision and Pattern Recognition, 2008.\n[4] Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems (pp. 1097-1105).\n[5] Ioffe, S., & Szegedy, C. (2015). Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167.\n[6] Ba, J. L., Kiros, J. R., & Hinton, G. E. (2016). Layer normalization. arXiv preprint arXiv:1607.06450.\n[7] Salimans, T., & Kingma, D. P. (2016). Weight normalization: A simple reparameterization to accelerate training of deep neural networks. In Advances in Neural Information Processing Systems (pp. 901-909).\n[8] Kingma, D. P., & Dhariwal, P. (2018). Glow: Generative flow with invertible 1x1 convolutions. arXiv preprint arXiv:1807.03039.", "Hi,\n\nThis is an interesting paper. How would you compare your method to the method in [1] of setting gamma=0 for every batchnorm going back to the main branch? On the surface the techniques look very similar, and the authors in [1] also noted that such initialization improves optimization at the beginning of training.\n\n[1] Goyal et al. Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour. ", "Dear authors.\n\nThanks for an interesting paper. Incidentally, in the current resnet implementation (at least in TPU) in Tensorflow, the last batchnorm going back into the main branch has \\gamma initialized to 0, which I believe achieves a similar effect to what you are doing here, at least from an initialization perspective. This has been around since February of this year.\n\nhttps://github.com/tensorflow/tpu/blob/master/models/official/resnet/resnet_model.py#L219\n\nIs the resnet in your experiments initialized like so? If not, how does such initialization compare to your initialization (without BN)?\n\nIn addition, please correct me if I'm mistaken, but the theoretical analysis of variances in this paper seems to have been done (quite thoroughly) in Yang & Schoenholz 2017 and Hanin & Rolnick 2018, where the theory in the former works for any nonlinearity and predicts the empirical results (for tanh and relu) highly accurately, while the latter mathematically characterizes the activation dynamics. The former paper is missing from the citations, while the latter only gets a passing mention. 
Could you comment on the novelty of the derivation in the current work and why it's not enough to use results from these two papers?\n\nYang & Schoenholz 2017 https://arxiv.org/abs/1712.08969\nHanin & Rolnick 2018 http://arxiv.org/abs/1803.01719\n\nThanks, and looking forward to your reply.", "This paper proposes an exploration of the effect of normalization and initialization in residual networks. In particular, the Authors propose a novel way to initialize residual networks, which is motivated by the need to avoid exploding/vanishing gradients. The paper proposes some theoretical analysis of the benefits of the proposed initialization. \n\nI find the paper well written and the idea well executed overall. The proposed analysis is clear and motivates well the proposed initialization. Overall, I think this adds something to the literature on residual networks, helping the reader to get a better understanding of the effect of normalization and initialization. I have to admit I am not an expert on residual networks, so it is possible that I have overlooked previous contributions from the literature that illustrate some of these concepts already. Having said that, the proposal seems novel enough to me. \n\nOverall, I think that the experiments have a satisfactory degree of depth. The only question mark is on the performance of the proposed method, which is comparable to batch normalization. If I understand correctly, this is something remarkable given that it is achieved without the common practice of introducing normalization. However, I have not found a convincing argument against the use of batch normalization in favor of ZeroInit. I believe this is something to elaborate on in the revised version of this paper, as it could increase the impact of this work and attract a wider readership. ", "\nThis paper shows that with a clever initialization method ResNets can be trained without using batch-norm (and other normalization techniques). The network can still reach state-of-the-art performance.\n\n\nThe authors propose a new initialization method called \"ZeroInit\" and use it to train very deep ResNets (up to 10000 layers). They also show that the test performance of their method matches the performance of state-of-the-art results on many tasks with the help of strong data augmentation. This paper also indicates that the role of normalization in training deep resnets might not be as important as people thought. In sum, this is a very interesting paper that makes a novel contribution to the practical side of neural networks and offers new insights on the theoretical side. \n\nPros:\n1. The analysis is not complicated and the algorithm for ZeroInit is not complicated. \n2. Many people believe normalization (batch-norm, layer-norm, etc.) not only improves the trainability of deep NNs but also improves their generalization. This paper provides empirical support that NNs can still generalize well without using normalization. It might be the case that the benefits from the data augmentation (i.e., Mixup + Cutout) strictly contain those from normalization. Thus it is interesting to see if the network can still generalize well (achieving >=95% test accuracy on Cifar10) without using strong data augmentation like mixup or cutout. \n3. Theoretical analysis of BatchNorm (and other normalization methods) is quite challenging and often very technical. The empirical results of this paper indicate that such analysis, although very interesting, might not be necessary for the theoretical understanding of ResNets. 
\n\n\nCons:\n1. The analysis works for positively homogeneous activation functions (e.g., ReLU), but not for tanh or Swish. \n2. The method works for residual architectures, but may not apply to non-residual networks (e.g., VGG, Inception). " ]
[ -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7 ]
[ -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "BJl7K-FcCm", "Skgn80Is0X", "HJxywPsqaQ", "iclr_2019_H1gsz30cKX", "B1lgruiDa7", "iclr_2019_H1gsz30cKX", "BJeS4HjvpX", "SygoSH1b0m", "iclr_2019_H1gsz30cKX", "BJxPwIScaX", "B1xtAbjwaX", "H1eV6OUGpQ", "H1e_BZjmpQ", "Skl714KOnX", "SyeHla6K37", "r1gTIYutnX", "r1gTIYutnX", "r1gTIYutnX", "iclr_2019_H1gsz30cKX", "iclr_2019_H1gsz30cKX", "iclr_2019_H1gsz30cKX", "iclr_2019_H1gsz30cKX" ]
iclr_2019_H1l7bnR5Ym
ProbGAN: Towards Probabilistic GAN with Theoretical Guarantees
Probabilistic modelling is a principled framework to perform model aggregation, which has been a primary mechanism to combat mode collapse in the context of Generative Adversarial Networks (GAN). In this paper, we propose a novel probabilistic framework for GANs, ProbGAN, which iteratively learns a distribution over generators with a carefully crafted prior. Learning is efficiently triggered by a tailored stochastic gradient Hamiltonian Monte Carlo with a novel gradient approximation to perform Bayesian inference. Our theoretical analysis further reveals that our treatment is the first probabilistic framework that yields an equilibrium where generator distributions are faithful to the data distribution. Empirical evidence on synthetic high-dimensional multi-modal data and image databases (CIFAR-10, STL-10, and ImageNet) demonstrates the superiority of our method over both state-of-the-art multi-generator GANs and other probabilistic treatments of GANs.
accepted-poster-papers
The paper proposes a new method that builds on the Bayesian modelling framework for GANs and is supported by a theoretical analysis and an empirical evaluation that shows very promising results. All reviewers agree that the method is interesting and the results are convincing, but that the model does not really fit in the standard Bayesian setting due to a data dependency of the priors. I would therefore encourage the authors to reflect this by adapting the title and making the differences clearer in the camera-ready version.
train
[ "rJeRLZfI14", "ryxAXLZYhm", "rJg-57eh07", "Hkxok8co27", "Skg9rHqqC7", "Hyx9hN59A7", "B1g1aeEapX", "Hkgyn9ICTm", "B1lq-8vRa7", "Byg6_RX6Tm", "SJxci3qu3m" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "Following is our response to your updated comments.\n\n=== Bayesian GAN prior in toy experiment ===\n\nWe agree that using a broad normal prior would be a better choice. We have rerun our experiment with the normal prior (with mean 0, std 1) and got the similar results as in the uniform prior case. We will certainly include the new result in our next version of the paper.\n\nFollowing is a remark on the modification we make on the toy model to use normal prior. We reparameterize the generator and discriminator. Generator with parameter theta^g produces a data distribution p(x_i; theta^g) = exp(theta^g_i) / sum_j exp(theta^d_j). Under a normal prior N(0,1), its prior probability is p(theta^g) = \\prod_j exp(- theta^g * theta^g / 2). Discriminator with parameter theta^d has a score function D(x_i; theta^d) = sigmoid(theta^d_i). Under a normal prior N(0,1), its prior probability is p(theta^d) = \\prod_j exp(- theta^d * theta^d / 2).\n\n=== why to strip the normal prior ===\n\nIt is a very good point. Thank you for your insightful comment on it. \n\nIt is true that the prior is crucial for a Bayesian model and should encode domain knowledge. What our empirical analysis shows is that a Gaussian prior is not helpful in the task of Bayesian GANs. At least, the normal prior does not show an advantage over the non-informative prior. Intuitively, putting a normal prior is very similar to have an L2 regularization when training a neural network. It looks helpful in the sense of robust to overfitting. However, we remark that, unlike typical supervised learning where model fitting is connected to generalization performance, an \"overfitting\" model is desirable for GANs that matches the data distribution perfectly.\n\nSince the normal prior does not work, we need a more involved prior that makes the Bayesian modeling work. Our solution is the “unorthodox” generator prior as you mentioned. Although it looks rather “unorthodox” at first blush, this generator prior is standard in the following senses: (1) As we previously explained to R3, our Bayesian model actually includes two separate models, one for the generator and one for the discriminator. Hence from the generator’s perspective, the generator is the ‘model’ and the discriminator is the ‘data’ and the other way around from the discriminator's perspective. Note that the real data distribution we want to learn is actually a third-party component; it is, therefore, proper to involve the real data in the prior. (2) Our generator prior encodes our prior belief in the sense that the generator distribution should be stable if the discriminator cannot distinguish the synthetic data and the real data well.\n\n=== robustness to overfitting ===\n\nWe want to emphasize that the setting we are handling with is different from the traditional prediction settings where Bayesian methods are applied commonly. When dealing with classification or regression, robustness to overfitting is quite important. However, in the GAN computation setting, the overfitting issue is not the main concern since the goal is to produce a distribution that matches the real data distribution perfectly, rather than generalizing to unseen data.", "Summary\n=========\nThe paper extends Bayesian GANs by altering the generator and discriminator parameter likelihood distributions and their respective priors. 
"Summary\n=========\nThe paper extends Bayesian GANs by altering the generator and discriminator parameter likelihood distributions and their respective priors. \nThe authors further propose an SGHMC algorithm to collect samples of the resulting posterior distributions on each parameter set and evaluate their approach on both a synthetic and the CIFAR-10 data set. \nThey claim superiority of their method, reporting a higher distance to mode centers of generated data points and better generator space coverage for the synthetic data set and better inception scores for the real world data for their method.\n\nReview\n=========\nAs an overall comment, I found the language poor, at times misleading.\nThe authors should have their manuscript proof-read for grammar and vocabulary.\nExamples: \n- amazing superiority (page 1, 3rd paragraph)\n- Accutally... (page 1, end of 3rd paragraph)\n- the total mixture of generated data distribution (page 3, mid of 3.1)\n- Similarity we define (page 3, end of 3.1)\n- etc.\nOver the whole manuscript, determiners are missing.\n\nThe authors start out with a general introduction to GANs and Bayesian GANs in particular, \narguing that it is an open research question whether the generator converges to the true data generating distribution in Bayesian GANs.\nI do not agree here. The Bayesian GAN defines a posterior distribution for the generator that\nis proportional to the likelihood that the discriminator assigns to generated samples.\nThe better the generator, the higher the likelihood that the discriminator assigns to these samples.\nIn the case of a perfect generator, the discriminator is equally unable to distinguish real and generated samples and consequently degenerates to a constant function.\nUsing the same symmetry argument as the authors, one can show that this is the case for Bayesian GANs.\n\nWhile defining the likelihood functions, the iterator variable t is used without introduction.\n\nFurther, I am confused by their argument of incompatibility.\nFirst, they derive a Gibbs-style update scheme based on single samples for generator and discriminator parameters using\nposteriors in which the noise has been explicitly marginalized out by utilizing a Monte Carlo estimate.\nSecond, the used posteriors are conditional distributions with non-identical conditioning sets.\nI doubt that the argument still holds under this setting.\n\nWith respect to the remaining difference between the proposed approach and Bayesian GAN,\nI'd like the authors to elaborate on where exactly the difference between the expectation of the objective value\nand the objective value of the expectation lies.\nSince the original GAN objectives used for crafting the likelihoods are deterministic functions,\nrandomness is introduced by the distributions over the generator and discriminator parameters.\nI would have guessed that expectations propagate into the objective functions.\n\nIt is, however, interesting to analyze the proposed inference algorithm, especially the introduced posterior distributions.\nFor the discriminator, this corresponds simply to the likelihood function.\nFor the generator, the likelihood is combined with some prior for which no closed-form solution exists.\nIn fact, this prior changes between iterations of the inference algorithm.\nThe resulting gradient of the posterior decomposes into the gradient of the current objective and the sum over all previous gradients.\nWhile this is not a prior in the Bayesian sense (i.e. 
in the sense of an actual prior belief), it would be interesting to have a closer look at the effect this has on the sampling method.\nMy educated guess is that this conceptually adds up to the momentum term in SGHMC and thus slows down the exploration of the parameter space and results in better coverage.\n\nThe experiments are inspired by the ones done in the original Bayesian GAN publication.\nI liked the developed method to measure coverage of the generator space, although I find the\nterm hit error misleading.\nGiven that the probabilistic methods all achieve a hit rate of 1, a lower hit error actually points to worse coverage.\nI was surprised to see that hit error and coverage are not consistently negatively correlated.\nAdding statistics over several runs of the models (e.g. 10) would strengthen the claim of superior performance.", "Thanks for your responses!\n\nI really like the way you clarify the differences in expectation over objective vs. objective of expected values. However, if you compare your method to the BGAN, you should use the priors as defined in their paper (i.e. broad normals in theirs vs. uniform in yours) when characterizing the BGAN.\n\nAs I mentioned in my initial review, I think you effectively strip all priors from the model formulation that are independent of data. In my understanding, this is the basis of any Bayesian model: a data-independent prior that encodes prior belief about the parameters, which is then updated by the data via the likelihood.\nThe implicit uniform prior on your discriminator distribution might be interpreted as non-informative (one could argue it to be a Jeffreys prior), but the definition of the prior on the generator distribution as the posterior state of the last update step is rather unorthodox and, more importantly, relies on data. \nThis might strip the inherent property of robustness to overfitting from the model, which should be one of the main reasons to formulate a Bayesian model in the first place, and I think that more elaboration on why this is still a Bayesian model (as they claim in the paper title) is needed here.\n\nI totally agree with Rev3 that this is a nice model with impressive results; I'm just not convinced by the explanation of why this is. As is argued in the original Bayesian GAN paper, having uniform priors is effectively the same as using a classical GAN. One could see your approach as a clever (probabilistic) extension of the optimization procedure of the classical GAN.\n\nGiven that you have clarified several of my other concerns, I updated my rating.", "PRIOR COMMENT: This paper should be rejected based on the experimental work.\nExperiments need to be reported for larger datasets. Note the MGAN\npaper reports results on STL-10 and ImageNet as well.\n\nNOTE: this was addressed by the 27/11 revision, which included good\n results for these other data sets, thus I now withdraw the comment\n\nNote, your results on CIFAR-10 are quite different to those in the\nMGAN paper. Your inception scores are worse and FIDs are better!! I\nexpect you have different configurations from their paper, but it would\nbe good for this to be explained. NOTE: explained in response!\n\nNOTE: this was addressed by the 27/11 revision\n\nI thought the related work section was fabulous, and as an extension\nto BGAN, the paper is a very nice idea. So I benefited a lot from reading\nthe paper.\n\nI have some comments on the Bayesian treatment. 
In Bayesian theory, the\ntrue distribution $p_{data}$ cannot appear in any evaluated formulas,\nas you have it there in Eqn (1) which is subsequently used in your\nlikelihood Eqn (2). Likelihoods are models and cannot involve \"truth\".\n\nLemma 1: Very nice observation!! I was trying to work that out,\nonce I got to Eqn (3), and you thought of it. \n\nAlso, you do need to explain 3.2 better. The BGAN paper, actually, is\na bit confusing from a strict Bayesian perspective, though for\ndifferent reasons. The problem you are looking at is not a\ntime-series problem, so it is a bit confusing to be defining it as\nsuch. You talk about an iterative Bayesian model with priors and\nlikelihoods. Well, maybe that can be *defined* as a probabilistic\nmodel, but it is not in any sense a Bayesian model for the estimation\nof $p_{model}$.\n\nNOTE: anonreviewer2 expands more on this\n\nWhat you do with Equation (3) is define a distribution on\n$q_g(\\theta_g)$ and $q_d(\\theta_d)$ (which, confusingly, involves the\n\"true\" data distribution ... impossible for a Bayesian formulation).\nYou are doing a natural extension of the BGAN paper's formulation in\ntheir Eqs (1) and (2). This is alluded to in Lemma 1. Your\nformulation is in terms of two conditional distributions, so\nconditions should be given that there is an underlying joint\ndistribution that agrees with these. Lemma 1 gives a negative result.\nYou have defined it as a time-series problem, and apparently one wants\nthis to converge, in Gibbs sampling style. Like BGAN, you have\njust arbitrarily defined a \"likelihood\".\n\nTo me, this isn't a Bayesian model of the unsupervised learning task,\nit's a probabilistic-style optimisation for it, in the sense that you are defining a probability\ndistribution (over $q_g(\\theta_g)$ and $q_d(\\theta_d)$) and sampling\nfrom it, but it's not really a \"likelihood\" in the formal sense. A\nlikelihood defines how data is generated. Your \"likelihood\" is over\nmodel parameters, and you seem to have ignored the data likelihood,\nwhich you define in Sec. 3.1 as $p_{model}()$.\n\nAnyway, I'm happy to go with this sort of formulation, but I think you\nneed to call it what it is, and it is not Bayesian in the standard sense. The theoretical\ntreatment needs a lot of cleaning up. What you have defined is a\nprobabilistic time-series on $q_g(\\theta_g)$ and $q_d(\\theta_d)$.\nFair enough, that's OK. But you need to show that it actually works in\nthe estimation of $p_{model}$. Because one never has $p_{data}$, all\nyour Theorem 1 does is show that asymptotically, your method works.\nUnfortunately, I can say the same for many crude algorithms, and most\nof the existing published work. Thus, we're left with requiring a\nsubstantial empirical validation to demonstrate the method is useful.\n\nNow my apologies to you: I could make somewhat related statements\nabout the theory of the BGAN paper, and they got to publish theirs at\nICLR! But they did do more experimentation.\n\nOh, and some smaller but noticeable grammar/word usage issues.\n\nNOTE: thanks for your good explanation of the Bayesian aspects of the model ...\nyes I agree, you have a good Bayesian model of the GAN computation, but it\nis still not a Bayesian model of the unsupervised inference task. This is a somewhat\nminor point, and should not in any way influence the worth of the paper ... 
but clarification\nin the paper would be nice.", "#1 You have a good Bayesian model of the GAN computation, but it is still not a Bayesian model of the unsupervised inference task. \n\nYes, you are right. In this work, we aim to develop a better Bayesian model of the GAN computation. Generally, Bayesian models for unsupervised inference tasks could be a larger topic.\n\n#2 I want to see results on the big data sets.\n\nThanks for being positive about our work. We have included the results on STL-10 and ImageNet in our revision of the paper (e.g., Table 4 and Figure 4 of Section 5.2). As mentioned in the general response above, our model does provide better performance on both datasets, with significant improvements in FID scores. We hope that with the additional results, the experimental work in the current version is more conclusive.\n", "We thank all the reviewers for the insightful comments and helpful suggestions. Here we summarize the major changes we made in the revision of our paper.\n\n1. Adding experiment results on STL-10 and ImageNet.\n\nWe follow R3's suggestion to compare our models and baselines (MGAN, BGAN) on the larger datasets. Our model does provide better performance on both datasets. In particular, the improvement in FID scores is significant. We include the new experiment results (Table 4 and Figure 4) in Section 5.2.\n\n2. Updating Inception score and FID results on CIFAR-10.\n\nThanks to R3's help, we found the discrepancy between the FIDs given by the PyTorch model and the Tensorflow model. We have switched to the official Tensorflow model for evaluation and updated all results in Table 3 (Section 5.2). We also put a remark in Section B.1 (of the appendix) to make it clearer.\n\n3. Emphasizing the difference between our model and Bayesian GAN.\n\nR2 suggests that we elaborate more on the difference between our likelihood design (objective value of expectation) and Bayesian GAN's likelihood (expectation of objective value). We revise Section 4.2 to explain the differences both in the likelihood and in the prior more clearly.\n\n4. Adding a toy experiment to demonstrate the different convergence behavior of our model and Bayesian GAN (Figure 1).\n\nWe include a new toy experiment on categorical distributions as empirical support for the superior convergence property of our model over the Bayesian GAN. \n\nIn our toy experiment, the data is sampled from a finite discrete space (more specifically, a categorical distribution). It is ideal to examine the Bayesian formulation in a finite case since the posterior can then be computed analytically and does not have error caused by inference algorithms. We try different combinations of likelihoods and priors in the experiment and compare their learned distributions. \n\nIn Figure 1, we visualize the generated data distributions of different models after they converge. The results show that only when using the combination of our likelihood and our prior can the model converge to the correct equilibrium. The full details of the experiment are included in Section D (of the appendix). This example also serves as an illustration of the convergence issue of Bayesian GAN.\n\nMinor changes:\n\n1. Change the term ‘hit error’ to ‘hit distance’ (e.g., in Table 2) to avoid the potential misunderstanding of its meaning.\n\n2. Add a few sentences in Section 4.1 to explain why Theorem 1 does not hold for Bayesian GAN.\n",
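Change 3 above turns on the fact that the gradient of an objective applied to a sum generally differs from the sum of the objective's per-term gradients. A quick standalone check of this fact (our own illustration, with f(x) = log sigmoid(x) standing in for a generic GAN objective term):

```python
# grad of sum_i f(x_i) vs. grad of f(sum_i x_i) for f(x) = log(sigmoid(x)):
# the two likelihood designs discussed above induce different gradients.
import numpy as np

def dlog_sigmoid(x):                 # d/dx log(sigmoid(x)) = 1 - sigmoid(x)
    return 1.0 - 1.0 / (1.0 + np.exp(-x))

x = np.array([0.5, -1.0, 2.0])
grad_sum_of_f = dlog_sigmoid(x)                    # per-sample gradients
grad_f_of_sum = dlog_sigmoid(x.sum()) * np.ones(3) # one shared gradient
print(grad_sum_of_f)   # ~[0.378, 0.731, 0.119]
print(grad_f_of_sum)   # ~[0.182, 0.182, 0.182]
```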
\nFollowing is our response to your concerns.\n\n=== convergence of Bayesian GAN ===\n\nThe convergence of Bayesian GAN is indeed a problem, and addressing it is one of our key contributions. Bayesian GAN has a subtle difference from the original GANs during learning. To compute the posterior, Bayesian GAN cannot be learned by vanilla gradient descent methods, but is instead learned by SGHMC. In the SGHMC framework, the gradient is always adulterated by white noise. Thus, if the gradient from the discriminator is always zero, the generator distribution will converge to a Gaussian distribution instead of staying unchanged.\n\nIn contrast, we fix this issue with a well-crafted prior for the generator distribution. Intuitively, the gradient from the prior helps combat the noise and prevents degeneracy of the generator distribution towards a Gaussian distribution. Please note that Theorem 1 does not hold without introducing a suitable prior for the generator.\n\n\n=== expectation of objective value vs. objective value of expectation ===\n\nThis difference is another very critical improvement over Bayesian GAN. We will make it clearer in the revision of the paper.\n\nAs shown in Eqn 8, to compute the likelihood, Bayesian GAN takes the expectation after computing the GAN objective value, while, as shown in Eqn 2, we compute the GAN objective value after the expectation. This subtle adjustment is crucial. Theorem 1 will not hold if the likelihood is defined as the expectation of the loss value, as Bayesian GAN does. Intuitively, because the expectation \\E_{q_g}[p_{gen}(x;\\theta_g)] is equivalent to the data distribution p_model(x) produced by the generator distribution, it makes sense to compute the GAN objective over it instead of in the reversed order (as in Bayesian GAN). Besides, it is easy to see that the gradients of the two different likelihoods are different, since, for a given function f, the gradient of \\sum_i f(x_i) is usually different from that of f(\\sum_i x_i).\n\n=== clarification on incompatibility ===\n\nThe incompatibility refers to the incompatibility between two conditional distributions that cannot belong to the same joint distribution. We identify a theoretical flaw of Bayesian GAN under a very simple setting (when only using a single Monte-Carlo sample) that leads to incompatible conditionals of the generator and discriminator. Moreover, we are not very certain about the concern “the used posteriors are conditional distributions with non-identical conditioning sets. I doubt that the argument still holds under this setting.” Further explanation of “non-identical conditioning sets” would be appreciated.\n\n=== relationship between hit error and coverage ===\n\nBy our definition, ‘hit error’ is the average distance between the generated data points (projected into a low-dimensional space) and the low-dimensional hyperplane that the ground-truth mode lies in, while the ‘coverage error’ measures the similarity between the distribution of the projected data points and the ground-truth data distribution, which is uniform.\n\nNote that these two metrics are actually orthogonal to each other, due to the fundamental difference between projection distances (‘hit error’) and how the projections are distributed (‘coverage error’). It is possible to get the same projection distances in a scattered or dense way. It is also possible to get the same projections from different projection distances. 
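To make the orthogonality of the two metrics concrete, here is a minimal numpy sketch under hypothetical definitions (the mode hyperplane, the projections, and the uniform target below are illustrative placeholders, not the paper's exact implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def hit_distance(points):
    # Mean distance of points to a hypothetical mode hyperplane (here: the x-axis).
    return np.abs(points[:, 1]).mean()

def coverage_error(points, bins=10):
    # Total-variation gap between the distribution of projections onto the
    # hyperplane and a uniform target on [0, 1].
    hist, _ = np.histogram(points[:, 0], bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    return 0.5 * np.abs(p - 1.0 / bins).sum()

n = 10_000
offsets = 0.1 * np.ones(n)                                    # identical distances to the hyperplane
spread  = np.stack([rng.uniform(0, 1, n), offsets], axis=1)   # projections cover [0, 1]
dense   = np.stack([rng.uniform(0, 0.2, n), offsets], axis=1) # projections clumped together

print(hit_distance(spread), coverage_error(spread))  # ~0.1, ~0.0
print(hit_distance(dense),  coverage_error(dense))   # ~0.1, ~0.8
```

Both point sets sit at the same distance from the hyperplane, yet their projections cover it very differently, matching the distinction drawn above.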
\n\nWe will change the terminology ‘hit error’ to ‘hit distance’ to make it clearer in our revision.\n\n=== further analysis of our inference algorithm ===\n\nThe momentum explanation seems like an interesting direction for a formal explanation of such approximations, but we do not have a concrete analysis yet and leave it as future work. \n", "Dear AnonReviewer3,\n\nThank you for the insightful comments.\nFollowing is our response to your concerns.\n\n=== experiments ===\n\nWe will include results on STL-10 and ImageNet in the revision, or in a later version if our machines cannot finish before the rebuttal deadline. Compared with Bayesian GAN, we actually did a more thorough study on the choice of objective function, and our synthetic dataset is harder and more illustrative.\n\nHere we clarify the discrepancy between our quantitative evaluation of MGAN and that of the original paper. We actually use the official open-sourced code of MGAN with the same configurations (model architectures, training data). The discrepancy comes from the inception model used to compute FID. We compute FID with the PyTorch Inception model (https://github.com/mseitzer/pytorch-fid). The original MGAN paper did not say which inception model they used. Our guess is that they used the Tensorflow inception model (https://github.com/bioinf-jku/TTUR). We observed that the FID computed by the PyTorch model is much lower than that computed by the Tensorflow model, because of the different weights of the pre-trained models. A similar phenomenon has recently been observed for the Inception Score [1]. To allow a more complete comparison, we will update our FID results by switching to the Tensorflow version.\n\nWe have posted the updated results in the comment. In our experiments, the MGAN with the GAN-NS objective has the same setting as the original MGAN. The Inception score and FID we get are 7.25 and 27.55, which are both worse than the scores reported in the original paper, 8.33 and 26.7. We train MGAN with the officially released code under the configuration reported in the MGAN paper (Table 4 in the appendix). The scores we reported are the best we could get over several training trials.\n\n[1] Barratt, Shane, and Rishi Sharma. \"A Note on the Inception Score.\" arXiv preprint arXiv:1801.01973 (2018).\n\n=== Bayesian formulation ===\n\nOur method has two separate Bayesian models, one for the generator and one for the discriminator. Take the Bayesian perspective for the generator as an example. The likelihood defined in the first equation of Eqn 2 gives the probability of observing some fixed discriminator distribution for some generator parameter, i.e., p(D^{(t)} | \\theta_g). Combined with the prior of the generator parameter q^{(t)}(\\theta_g), it is a Bayesian model from a strict perspective. Indeed, to see the correspondence with ‘model parameter’ and ‘data’ in classic Bayesian theory, our generator is the ‘model’ and the discriminator is the ‘data’. We estimate the generator distribution from the observed discriminator distribution.\n\nThe novelty relative to classic Bayesian models is in the inference procedure. We integrate the two standard Bayesian models into a dynamical system: each Bayesian problem is solved alternatingly. From a game-theoretic point of view, each optimization problem is the best-response strategy of the corresponding player, and the equilibrium presents a generator distribution that produces the target data distribution. \n\n=== Why time-series modelling ===\n\nThe problem is not a time-series problem. 
We simply solve it in an iterative manner (akin to SGD, which can iteratively solve both time-series and non-time-series problems). Our goal is to find the equilibrium of the generator and discriminator distributions, where they satisfy each other’s posterior under our Bayesian criterion. It is, however, possible to find the equilibrium via an iterative scheme. We will make this part clearer in the revision.\n\n=== A clarification about theorem 1 ===\n\nIt is indeed true that Theorem 1 only shows an analysis of the optimal solution in an asymptotic scenario. Unfortunately, it is, to the best of our knowledge, the best property that has been obtained in the recent literature on GANs [2, 3, 4, 5, 6]. However, please note that Bayesian GAN does not even possess such an asymptotic property, and the difficulty of avoiding this problem is revealed by our analysis in Section 4.2. In contrast, our method is the first Bayesian method to establish such a property. \n\n[2] Goodfellow, Ian, et al. \"Generative adversarial nets.\" Advances in Neural Information Processing Systems (NIPS 2014).\n[3] Hoang, Quan, et al. \"MGAN: Training generative adversarial nets with multiple generators.\" (ICLR 2018)\n[4] Arjovsky, Martin, Soumith Chintala, and Léon Bottou. \"Wasserstein generative adversarial networks.\" (ICML 2017)\n[5] Mao, Xudong, et al. \"Least squares generative adversarial networks.\" Computer Vision (ICCV), 2017 IEEE International Conference on. IEEE, 2017.\n[6] Zhao, Junbo, Michael Mathieu, and Yann LeCun. \"Energy-based generative adversarial network.\" (ICLR 2017)\n", "Previously, our FID results were computed using a PyTorch implementation (https://github.com/mseitzer/pytorch-fid). Note that there exists a large discrepancy between the FID results produced by the PyTorch Inception model and the Tensorflow model. Hence, to facilitate comparison with previous papers, we decided to re-evaluate with the official Tensorflow FID computation code (https://github.com/bioinf-jku/TTUR). \n\nHere are the updated results.\n\n Inception scores (higher is better)\n GAN-MM & GAN-NS & WGAN & LSGAN \nDCGAN & 6.53 & 7.21 & 7.19 & 7.36 \nMGAN & 7.19 & 7.25 & 7.18 & 7.34\nBGAN & 7.21 & 7.37 & 7.26 & 7.46\nours-PSA & 7.75 & 7.53 & 7.28 & 7.36\n\n FIDs (lower is better)\n GAN-MM & GAN-NS & WGAN & LSGAN \nDCGAN & 35.57 & 27.68 & 28.31 & 29.11 \nMGAN & 30.01 & 27.55 & 28.37 & 30.72\nBGAN & 29.87 & 24.32 & 29.87 & 29.19\nours-PSA & 24.60 & 23.55 & 27.46 & 26.90\n\nNote that we are reporting the results with the highest ‘Inception score - 0.1 FID’ for each model. Thus the Inception score results are also updated.\n", "Dear AnonReviewer1,\n\nThank you for agreeing with the significance of our contribution and voting to accept our paper. We will address the typos.\n\nWe make an additional remark here, which might be interesting. Bayesian modeling has been introduced in several mini-max problems in the deep learning community, such as adversarial (robust) learning [1] and GANs. However, most prior works pose the Bayesian method as a heuristic without theoretical analysis. This work presents an important initial step toward a rigorous study of modernized Bayesian approaches. \n\n[1] Nanyang Ye, Zhanxing Zhu. Bayesian Adversarial Learning. 32nd Annual Conference on Neural Information Processing Systems (NIPS 2018)\n", "Mode collapse in the context of GANs occurs when the generator only learns one of the multiple modes of the target distribution. 
Mode collapse can be tackled, for instance, by using the Wasserstein distance instead of the Jensen-Shannon divergence. However, this sacrifices accuracy of the generated samples.\n\nThis paper is positioned in the context of Bayesian GANs (Saatci & Wilson 2017) which, by placing a posterior distribution over the generative and discriminative parameters, can potentially learn all the modes. In particular, the paper proposes a Bayesian GAN that, unlike previous Bayesian GANs, has theoretical guarantees of convergence to the real distribution.\n\nThe authors put likelihoods over the generator and discriminator with logarithms proportional to the traditional GAN objective functions. Then they choose a prior on the generative parameters which is the output of the last iteration. The prior over the discriminative parameters is a uniform improper prior (constant from minus to plus infinity). Under these specifications, they demonstrate that the true data distribution is an equilibrium under this scheme. \n\nFor the inference, they adapt the Stochastic Gradient HMC used by Saatci & Wilson. To approximate the gradient of the discriminator, they take samples of the generator parameters. To approximate the gradient of the generator they take samples of the discriminator parameters, but they also need to compute a gradient of the previous generator distribution. However, because this generator distribution is not available in closed form, they propose two simple approximations.\n\nOverall, I enjoyed reading this paper. It is well written and easy to follow. The motivation is clear, and the contribution is significant. The experiments are convincing enough, comparing their method with Saatci's Bayesian GAN and with the state of the art of GANs that deal with mode collapse. It seems an interesting improvement over the original Bayesian GAN, with theoretical guarantees and an easy implementation.\n\nSome typos:\n\n- The authors argue that compare to point mass...\n+ The authors argue that, compared to point mass...\n\n- Theorem 1 states that any the ideal generator\n+ Theorem 1 states that any ideal generator\n\n- Assume the GAN objective and the discriminator space are symmetry\n+ Assume the GAN objective and the discriminator space have symmetry\n\n- Eqn. 8 will degenerated as a Gibbs sampling\n+ Eqn. 8 will degenerate as a Gibbs sampling" ]
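As a reading aid for the inference procedure this review summarizes, here is a schematic toy sketch of the alternating sampler. It substitutes a single SGLD step for the paper's SGHMC and uses placeholder log-posterior gradients, so it illustrates only the structure of the loop, not the actual method:

```python
import numpy as np

rng = np.random.default_rng(0)

def sgld_step(theta, grad_log_post, step=1e-3):
    # One stochastic-gradient Langevin step: gradient ascent on the log
    # posterior plus Gaussian noise (a cheap stand-in for SGHMC).
    noise = np.sqrt(step) * rng.standard_normal(theta.shape)
    return theta + 0.5 * step * grad_log_post(theta) + noise

theta_g = rng.standard_normal(4)
theta_d = rng.standard_normal(4)

for t in range(1000):
    theta_g_prev = theta_g.copy()

    # Sample the discriminator given Monte-Carlo samples of the generator.
    gen_samples = theta_g[None, :]  # single-sample approximation
    theta_d = sgld_step(theta_d, lambda th: gen_samples.mean(axis=0) - th)

    # Sample the generator given the discriminator, with a prior pulling
    # toward the previous iterate (the "output of the last iteration" prior).
    disc_samples = theta_d[None, :]
    theta_g = sgld_step(
        theta_g, lambda th: (disc_samples.mean(axis=0) - th) + (theta_g_prev - th)
    )
```

The two `grad_log_post` terms are toy stand-ins; in the paper they would come from the GAN objective (the likelihood) plus the prior described above.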
[ -1, 5, -1, 6, -1, -1, -1, -1, -1, -1, 9 ]
[ -1, 4, -1, 3, -1, -1, -1, -1, -1, -1, 4 ]
[ "rJg-57eh07", "iclr_2019_H1l7bnR5Ym", "B1g1aeEapX", "iclr_2019_H1l7bnR5Ym", "Hkxok8co27", "iclr_2019_H1l7bnR5Ym", "ryxAXLZYhm", "Hkxok8co27", "iclr_2019_H1l7bnR5Ym", "SJxci3qu3m", "iclr_2019_H1l7bnR5Ym" ]
iclr_2019_H1lJJnR5Ym
Exploration by random network distillation
We introduce an exploration bonus for deep reinforcement learning methods that is easy to implement and adds minimal overhead to the computation performed. The bonus is the error of a neural network predicting features of the observations given by a fixed randomly initialized neural network. We also introduce a method to flexibly combine intrinsic and extrinsic rewards. We find that the random network distillation (RND) bonus combined with this increased flexibility enables significant progress on several hard exploration Atari games. In particular we establish state of the art performance on Montezuma's Revenge, a game famously difficult for deep reinforcement learning methods. To the best of our knowledge, this is the first method that achieves better than average human performance on this game without using demonstrations or having access to the underlying state of the game, and occasionally completes the first level. This suggests that relatively simple methods that scale well can be sufficient to tackle challenging exploration problems.
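A minimal PyTorch sketch of the bonus described in this abstract; the architecture, sizes, and learning rate are illustrative placeholders rather than the paper's settings:

```python
import torch
import torch.nn as nn

def make_net(obs_dim=64, feat_dim=32):
    return nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, feat_dim))

target = make_net()          # fixed, randomly initialized network
for p in target.parameters():
    p.requires_grad_(False)  # never trained

predictor = make_net()       # trained to match the target's features
opt = torch.optim.Adam(predictor.parameters(), lr=1e-4)

def intrinsic_bonus(obs):
    # Per-observation prediction error: high on rarely seen observations,
    # shrinking as the predictor is trained on them.
    with torch.no_grad():
        feat = target(obs)
    return ((predictor(obs) - feat) ** 2).mean(dim=-1).detach()

def train_predictor(obs_batch):
    # Distillation step on the agent's recent experience.
    loss = ((predictor(obs_batch) - target(obs_batch)) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The bonus would then be normalized and added to the extrinsic reward, as discussed in the comments below.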
accepted-poster-papers
Pros: - novel, general idea for hard exploration domains - multiple additional tricks - ablations, control experiments - well-written paper - excellent results on Montezuma Cons: - low sample efficiency (2B+ frames) - unresolved questions (non-episodic intrinsic rewards) - could have done better apples-to-apples comparisons to baselines The reviewers did not reach consensus on whether to accept or reject the paper. In particular, after multiple rounds of discussion, reviewer 1 remains adamant that the downsides of the paper outweigh its good points. However, given that the other three reviewers argue strongly and credibly for acceptance, I think the paper should be accepted.
test
[ "r1xxk91FRQ", "HJe07lnMCm", "Bkl1t7HL0Q", "rylpqi3tnX", "Skl-yi3rCQ", "BygSUUyXAX", "r1eafqNz0Q", "r1gvw_EzAm", "rJlaHIVM07", "H1ek-LEfRm", "rkgyTSVGRX", "rkg65WVMRQ", "SkxVd-4zCm", "Hye5WR-Y6Q", "H1e2w4Ak0X", "HkgYozuoaX", "S1lASKafpX", "Bkgy-aa-67" ]
[ "public", "public", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "public", "public", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Regarding Freeway, I feel like you've slightly dodged my point. In your other recent paper on exploration (\"Large-Scale Study of Curiosity-Driven Learning\"), you *have* benchmarked on Freeway in Table 2. Furthermore, the intrinsic rewards look slightly harmful, even with a coefficient of 0.01 for the intrinsic rewards and 1.0 for the extrinsic rewards. My question is, how does your best Montezuma agent using RND perform on Freeway? I'm curious if the intrinsic reward swamps the extrinsic reward and ends up being really detrimental, or just a little bit. Since there's more going on visually in this game, my guess is that the RND bonuses may stay relatively large.", "Thanks for your replies. I'll comment here seeing as this is where you addressed my main concerns.\n\nI appreciate that there are some difficulties in applying CTS to PPO, although given that Bellemare et al. were able to apply it to A3C, which also involves parallel actors, I would have thought you could try the same approach. (They spell out exactly how they did it in Appendix D4.) As it stands, comparing against A3C's score of 1127 at 200M frames is misleading for a few reasons:\n- Bellemare et al. apply their A3C agent to a wide range of games. Critically, this includes many dense reward games. To achieve better generality across the full suite, they scale the intrinsic reward down by a factor of 5. (See Appendix D4. Their A3C agent used beta = 0.01, whereas the DQN agent used beta = 0.05.) This undoubtedly hurt performance in sparse reward games. It's very unfair to compare an agent that was tuned for Montezuma's Revenge against one that was tuned for the full suite of games.\n- PPO is generally stronger than A3C\n- Bellemare et al. used gamma_E = 0.99, not 0.999. As your results show, this change makes a huge difference.\n\nI'm still concerned that the discount of 0.999 is overfit to sparse reward games. As you've said in the rebuttal to another reviewer \"We tried gamma 0.999 because prior works using learning from demonstrations on Montezuma's Revenge had suggested that this was effective for Montezuma's Revenge\". This only convinces me that you're fitting to one type of game. If gamma = 0.999 breaks the algorithm on dense reward games then you should really say so. It should be pretty easy to apply a vanilla PPO implementation with gamma = 0.999 to a few dense reward games and find out.\n\n\nEDIT:\nThe additional results you've provided above just brought another matter to my attention:\n\nOn page 5 you've said \"Most experiments are run for 30K rollouts of length 128 per environment with 128 parallel environments, for a total of 1.97 billion frames of experience\". But then in Figure 5, it looks like your strongest results were actually obtained with 1024 parallel environments, which equates to 16B frames (or TWO orders of magnitude more experience than Bellemare et al used).\n\nThroughout the rest of the paper, the figures don't show how many parallel actors were used, but based on the strength of the Montezuma scores in Figures 4 and 7, I'm assuming you used 1024 parallel environments. This should be made more explicit in all the places where you mention your headline scores. The statement that \"most experiments\" used 128 parallel environments threw me, because it's not true for your most significant results. I think this is a *really* interesting paper, but it worries me that future reviewers of competing papers are only going to see your headline score of 14,415 and not the training frames. 
In a research environment where papers unfortunately tend to be judged above all on achieving SOTA results, it's going to make it hard for small groups to compete, unless you make it clear that the paper is really about leveraging massive parallelism, not maximising sample efficiency. Clearly, the latter is something that still needs to be addressed in these kinds of games.", "Thank you for your detailed responses. I will address the responses by topic:\n\nBaselines\n\nThank you for performing the experiments attempting to mimic the same conditions as previous work. Although I appreciate that the engineering effort is difficult, it would have been significantly better to integrate CTS with PPO directly to obtain a better baseline. I agree with the anonymous commenter that the direct comparison with A3C+CTS is not really fair, particularly considering that:\n- RND+PPO seems to be specifically tuned for Montezuma’s Revenge, vs Bellemare et al’s agent being tuned for a suite of games\n- PPO is generally a stronger method than A3C\n\nOverall, I strongly recommend that the authors include a comparison to the method of Bellemare et al with PPO in the paper; it is impossible otherwise for a reader to compare the effectiveness of the two methods. \n\nFinally, when comparing to previous work you should be extremely clear about the fact that previous methods used an order of magnitude fewer frames in attempting to solve the task. Including a discussion of the effect of many more frames is important - particularly with the experiments that you included above in your comment.\n\nExperimental Details\n\nThank you for making many of the requested changes.\n\nThe PPO paper only states that the algorithms presented use GAE; it never gives exactly how the value function is trained (just by reading the paper, one would think that PPO uses the GAE paper’s value function training method, which is not at all similar to what PPO actually uses). \n\nAs for the forward dynamics mechanism, reading the third paragraph in 3.6 does not give me a clear outline of the baseline mechanism; if you do not have room in the main text, it would be helpful to put a description of how the baseline is implemented (at a high level) in the appendix.\n\nMore generally, I believe that this paper should be written in a way that makes it relatively easy to follow without having to look into the accompanying code repository.", "The algorithm proposed in this paper consists in driving exploration in RL through an intrinsic reward, computed as the prediction error of a neural network whose target is the output of a randomly initialized network (with the state reached by the agent as input). The intuition is that rarely seen states will have a large prediction error, thus encouraging the agent to visit them (until they have been seen often enough that the error goes down). Among the potential benefits of this method, compared to previously proposed intrinsic curiosity techniques for RL, are its simplicity and its robustness to environment stochasticity. Extensive experiments on the Atari game Montezuma’s Revenge investigate several variants of this idea (combined with PPO), with the best results significantly outperforming the current state of the art. Other results on five other hard-exploration Atari games show competitive performance as well.\n\nThe proposed technique definitely exhibits impressive performance on some tasks, in spite of its simplicity. 
Despite lacking theoretical grounding, I believe such results should be quite interesting to the RL research & applied community, as a novel and easy way to encourage exploration in sparse-reward tasks. I also really appreciate that the authors have included “negative” results contradicting their expectations, and are sharing their code: this is the kind of openness that in my opinion should be highly encouraged.\n\nThe paper is overall well written and easy to follow, except (from my point of view) section 2.2.2, which I found rather confusing and not very convincing. First, eq. 1 is a bit surprising since one expects the posterior to be in the same family of functions, i.e. of the form f_theta rather than f_theta + f_theta*. After a (very superficial) look at Osband et al (2018) I see that this particular lemma holds for linear functions, and the extension to nonlinear function approximation seems to be essentially based on intuition. Then the sentence “the optimization problem (...) is equivalent to distilling a randomly drawn function from the prior” ignores the sign mismatch (we are actually distilling the opposite of f_theta*, though I agree it can still make sense with a symmetric prior around 0, which is not mentioned). Finally, the reasoning to reach the conclusion “the distillation error could be seen as a quantification of uncertainty in predicting the constant zero function” seems somewhat unconvincing to me, considering the significant differences compared to Osband et al (2018), in particular: sharing weights among models in the ensemble, ignoring the specific regularization term R(theta), and not adding noise to the training data. As a result I find this link rather weak and I would appreciate it if this section could be improved (at the very least with a better explanation of its limitations).\n\nAmong the various findings from experiments, one puzzled me in particular: the striking difference between episodic and non-episodic intrinsic rewards in Fig. 3. I think this would have deserved a more thorough empirical investigation than the intuitive explanation from 2.3 (e.g. by checking whether the agent trained with non-episodic rewards was indeed taking more risks and thus dying more often). What I find particularly surprising is that the beginning of the game should relatively quickly stop yielding much intrinsic reward, since it should be the part the agent sees most often initially. As a result, I would expect that getting zero reward when dying (episodic rewards) should not be much different from getting future (small and discounted) intrinsic rewards, unless maybe early in training. What am I missing here?\n\nI also have some comments regarding a couple of other findings and associated hypotheses:\n- Section 3.3 shows some surprising results when varying discount factors (“This is at odds with the results in Figure 3 where increasing gamma_I did not significantly impact performance”). I wonder, however, to what extent these may be caused by the difference in the scale of discounted returns: for instance, increasing gamma_I from 0.99 to 0.999 will (roughly) multiply V_I by 10, giving it more weight in the sum V = V_E + V_I. A fair comparison would either rescale V_I accordingly, or use a weighted sum and optimize the weights (the hyper-parameters table in the Appendix suggests that weights were actually used, but they are not mentioned in the main text and it is not clear how they were chosen).\n- 3.7 shows an interesting behavior (“dancing with skulls”). 
The authors hypothesize it may be due to the inherent danger of such behavior. But could it also be (and possibly more) related to the fact that the skulls are moving? (which leads to many varied different states that the predictor network will take time to learn perfectly).\n\nHere are a few more questions for the authors regarding specific details:\n1. In 3.1, “The best return achieved by 4 out 5 runs of this setting was 6,700.” What does this mean?\n2. In 3.5 a downsampling scheme is used to keep the training speed of the predictor network constant when increasing the number of actors. This raises the question of the impact of this training speed on the results, which is not investigated in the current experiments: do hyper-parameters influencing the predictor’s training speed (e.g. downsampling ratio, learning rate) need to be very carefully tuned, or are results robust across a wide range of speeds?\n3. In A.5 there is mention of “a CNN policy with access to only the last 16 most recent frames”: does that mean the number of “frames stacked” (Table 2) was increased from 4 to 16? If so, why? (it is not clear to me what we learn compared to Fig. 4)\n4. Your technique implicitly relies on the assumption that the predictor network’s weights will never be exactly the same as the target network’s (as otherwise nothing will be novel anymore, regardless of the states being visited). Do you foresee potential issues with this, and if so, do you have any ideas for solving them? (a short discussion in the paper on this topic would be good as well)\n\nAnd finally some suggestions for small improvements:\n- Please try to find a name other than “target” network since it is already widely used in the deep RL literature for something completely different (suggestions: “random”, “distillation”, “feature”, “reference”)\n- In 2.1 (last paragraph) there are various papers cited regarding forward or inverse dynamics, but several of them contain both, while the way they are cited suggests they deal only with one. Just moving “and inverse dynamics” before the full list of citations would fix it.\n- In the first paragraph of Section 3 please mention that the algorithm is based on PPO\n- In Fig. 3 the x axis seems to be missing a multiplication by 1K (?)\n- At end of 3.2, “having two value heads is necessary for combining reward streams with different characteristics”, please specify what these characteristics are. \n- On p. 7, last paragraph: please (briefly) explain how the “random features” are computed\n- The reference Ostrovski et al appears twice\n- In Alg. 1, “Update reward normalization parameters using it”: the “s” in parameters can be misleading, suggesting that both mean and standard deviation are used for normalization => explicitly saying “Update running standard deviation” would avoid such confusion (or say it on the “Normalize” step below)\n- Alg. 1 is not very clear on how returns and advantages are computed (and the corresponding code is not super easy to read). It also seems to be missing the update of the critic V.\n- Alg. 1 mentions “number of optimization steps” while Table 4 says “Number of optimization epochs”: I guess they are the same, so they should probably have the same name\n- After reading the paper, I felt like one takeaway was that CNN models worked better than RNN ones. However, Table 5 shows that this can vary between games (ex: RND RNN outperforms RND CNN on Gravitar and Solaris) and/or algorithms (ex: PPO RNN outperforms PPO CNN on 3 games). 
I think the main text should at least point to this table when mentioning the superiority of the CNN.\n- In the “Related work” section there is a very short paragraph about “vectorized value functions”. It seems to be overlooking the whole field of multi-objective reinforcement learning. Maybe you could cite a related survey paper like “A Survey of Multi-Objective Sequential Decision-Making”.\n- The paper’s title and the OpenReview submission name should probably match\n\nUpdate following author and reviewer discussion: I agree with others regarding the weakness of the empirical comparison to pseudo-counts in particular, but still believe that the paper deserves to be accepted due to the fact that (1) some of the results are really good, and (2) this is a simple original idea that has the potential to drive further advances (hopefully addressing the empirical and theoretical limitations of the current work).", "Thanks for the replies & revision! A few follow-ups:\n\n\n\"It’s likely that the fact that the object is moving is an important factor, but that doesn’t explain why the agent prefers to dance with the skull rather than observe it from a safe position.\"\n\nGood point, though a counter-argument could be that the combination of both the moving agent *and* skulls in a small region of the screen leads to a more challenging modeling task than having them far from each other. An experiment with non-deadly skulls might settle the argument but it's probably not worth it :)\n\n\"We mean that the high score of the training run was 6700 is 4 out of 5 random seeds\"\n\nOk thanks (with \"in\" instead of \"is\"). I think it may be worth clarifying in the paper; basically, I got confused because I didn't imagine that 6700 was a score shared by 4 different runs, so I thought it was a best score over the seeds, but then the \"4 out of 5\" made no sense (side note on a typo: \"of\" is missing in the paper). You could write for instance: \"During training, 4 out of the 5 agents (trained with different seeds) reached the same peak score of 6700[, while the 5th one reached at most XXX]\" (add the high score of the last agent if available).", "Thanks for your response - I find it all very reasonable.\n\nAs far as I'm concerned: the more you can include these sentiments in the paper the better.\nThis might mean adding a footnote or section in an appendix.\nI also think it would be valuable to highlight some of these shortcomings / open challenges in your conclusion - it does not take any shine away from these impressive results, but can help to shape the direction of future research!", "We would like to clarify that what we meant by a noisy-TV problem was the attraction of dynamics-prediction-based exploration methods to stochastic transitions. You are correct that a source of infinitely many states like white noise could be attractive to RND (although whether the transitions are deterministic or stochastic in this case doesn't matter). We will update the example in the paper to reflect this.", "Thank you for your comment, we have submitted replies to the reviewers which address many of your points. 
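To illustrate the noisy-TV clarification above — a dynamics predictor keeps irreducible error on stochastic transitions, whereas the RND target is a deterministic function of the observation — here is a small numpy sketch with toy linear models (purely illustrative, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5000, 8
X = rng.standard_normal((n, d))

# Deterministic target (RND-style): a fixed random function of the observation.
A = rng.standard_normal((d, d))
Y_det = X @ A

# Stochastic target (noisy-TV-style dynamics): the "next state" is random
# given the current observation.
Y_sto = X @ A + rng.standard_normal((n, d))

for name, Y in [("deterministic", Y_det), ("stochastic", Y_sto)]:
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)  # best-fit predictor
    mse = ((X @ W - Y) ** 2).mean()
    print(name, round(float(mse), 3))
# deterministic -> ~0.0 ; stochastic -> ~1.0 (irreducible noise variance)
```

The deterministic target can be fit to arbitrarily low error, so the bonus can vanish on familiar states; the stochastic target cannot, which is what keeps a dynamics-based bonus "glued" to the TV.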
\n\nRegarding your question as to why we didn't run experiments on \"Freeway\", we omitted it simply because RL approaches have already saturated performance on this game even without directed exploration.", "Thank you for your comments; we are glad that you enjoyed this work.\n\n“The main problem which I see is the presentation of learning curves as a function of training steps rather than acting steps”\n\nWe have updated the paper to include the number of frames in the figures. \n\n“How are the results sensitive to the scale of pseudo-rewards? What would happen if they were simply multiplied or divided by 10?”\n\nWe use a reward normalization scheme that brings the intrinsic reward to a predictable range. If we were to up-weight the coefficient of the intrinsic reward in the value calculation, the results would indeed change. We found the method to be relatively stable to changes in hyper-parameters, but it is of course possible to break it by large changes.\n\n“Also, what was the distributed training setting that you used to train your agent? Were the actors running on a single machine or on multiple machines?”\n\nThe majority of the experiments were done on a single GPU, and the others used 8 GPUs on a single machine (for the larger scale experiments, or to complete the smaller scale experiments more quickly as we approached the deadline). The parallelization was handled using MPI and was completely synchronous.\n\n“Figure 2. It would be nice to see if both x and y axes was plotted in log scale in order to visualize any power-law (if one exists) between samples and MSE.”\n\nWe have included a log-log scale plot at the link below\nhttps://pasteboard.co/HO77MWw.png\n", "Thank you for your helpful review; we are glad that you like the paper.\n\nBelow we address the specific points raised by the reviewer:\n\n“- For a paper on exploration, it does not make sense to present results in terms of \"parameter updates\". This should instead be presented in terms of actor/environment steps. ”\n\nWe have updated the paper to include the number of frames used.\n\n“Like other \"count-based\" methods, this exploration bonus is not linked to the task. As such, you have to get \"lucky\" that you do the right kind of generalization from the \"random network\".”\n\nOur bonus is indeed not linked to the task in the way that Osband et al’s is. However, since the problems we are most interested in are those where the extrinsic reward is very sparse, these bonuses will behave similarly. Most uncertainty about a sparse reward function (and hence about the optimal policy) comes from finding instances of positive reward. However, in dense reward settings the RND bonus might cause the agent to over-explore, leading to slower learning in some situations, but we have not investigated this effect.\n\n“The whole section about \"pure exploration\" is somewhat interesting, but you shouldn't assess that performance in terms of \"reward\"... because that is just a peculiarity of these games... we could easily imagine a game where \"pure exploration\" gives a huge negative reward... but that wouldn't mean that it was bad at pure exploration! Therefore, how can you justify the quality of pure exploration by reference to the \"best return\".”\n\nWe agree that in general the extrinsic reward is not an ideal measure of pure exploration. For this reason, we also included the number of rooms discovered by the agent, which we feel is a much better metric.\n\n“The main missing piece is a clear discussion of any of the algorithm's potential weaknesses - is this the final solution to exploration? 
What do you think about the issues of generalization? How would this perform in a linear system? What if the basis functions are not aligned?”\n\nWe would definitely not want to give the impression that this is a final solution for exploration. We believe it is a scalable alternative to count-like bonuses without some of the issues plaguing dynamics-based prediction bonuses. Similar to other methods (for example count-based), our method could be improved by providing a fine-grained way to control generalization to new states and the injection of prior information. A method combining the theoretical grounding of information-gain style approaches with the computational tractability of heuristic methods such as RND is highly desirable but currently out of reach.", "Thank you for your detailed review; we have updated the paper to accommodate your suggested improvements.\n\nWe have rewritten section 2.2.2 to emphasize that the generalization to non-linear functions is heuristic, and added details to the connection with lemma 3 from Osband et al. We agree that there are important differences but we suspect that a similar mechanism may underlie the performance of both approaches and wanted to bring the connection to the reader’s attention.\n\n\"the striking difference between episodic and non-episodic intrinsic rewards\"\n\nIn our experience, agents with episodic intrinsic rewards only typically became trapped in the first room. We believe that this is because early on in training the effective penalty for non-episodic agents is low, allowing them to explore the immediate surroundings of the starting states. Further exploration is contingent upon successful exploration of the immediate surroundings, and so the softening of the game-over penalty early in training can have a large compounded effect. You are correct that this deserves further investigation.\n\n“Section 3.3 shows some surprising results when varying discount factors (“This is at odds with the results in Figure 3 where increasing gamma_I did not significantly impact performance”). I wonder, however, to what extent these may be caused by the difference in the scale of discounted returns: for instance, increasing gamma_I from 0.99 to 0.999 will (roughly) multiply V_I by 10, giving it more weight in the sum V = V_E + V_I. A fair comparison would either rescale V_I accordingly, or use a weighted sum and optimize the weights (the hyper-parameters table in the Appendix suggests that weights were actually used, but they are not mentioned in the main text and it is not clear how they were chosen).”\n\nWe agree that it’s possible the effect that you mention could be contributing to the difference in results, and we have updated the text to reflect this possibility. The heuristic you mention of multiplying by 10 might be reasonable in some cases, but in general we would have to perform a hyperparameter sweep per game to get the right value (since it will depend significantly on the sparsity of the reward). We tried gamma 0.999 because prior works using learning from demonstrations on Montezuma's Revenge had suggested that this was effective for Montezuma's Revenge, where a typical episode of a well-performing agent lasts between 1,000 and 4,000 steps. \n\n“3.7 shows an interesting behavior (“dancing with skulls”) ... But could it also be (and possibly more) related to the fact that the skulls are moving? 
(which leads to many varied different states that the predictor network will take time to learn perfectly).”\n\nIt’s likely that the fact that the object is moving is an important factor, but that doesn’t explain why the agent prefers to dance with the skull rather than observe it from a safe position.\n\n“1. In 3.1, “The best return achieved by 4 out 5 runs of this setting was 6,700.” What does this mean?”\nWe mean that the high score of the training run was 6700 is 4 out of 5 random seeds. The maximum score achieved is a useful metric in the pure exploration setting.\n\n“2. In 3.5 a downsampling scheme is used to keep the training speed of the predictor network constant when increasing the number of actors. ... do hyper-parameters influencing the predictor’s training speed (e.g. downsampling ratio, learning rate) need to be very carefully tuned, or are results robust across a wide range of speeds?”\n\nIn our preliminary experiments with 32 actors we observed good performance with no downsampling and the default learning rate. When we increased the number of actors, we simply increased the downsampling rate to match the effective batch size with 32 actors, to avoid an additional confounding factor. Performance can likely be improved by tuning this parameter, but we did not run many experiments with alternative setups.\n\n“3. In A.5 there is mention of “a CNN policy with access to only the last 16 most recent frames”: does that mean the number of “frames stacked” (Table 2) was increased from 4 to 16? If so, why? (it is not clear to me what we learn compared to Fig. 4)”\n\nBy 16 frames we refer to the standard setup for RL in Atari games. Each observation is a max over 4 frames, and the agent sees a stack of 4 such observations, giving a total context of 16 frames.\n\n“4. Your technique implicitly relies on the assumption that the predictor network’s weights will never be exactly the same as the target network’s (as otherwise nothing will be novel anymore, regardless of the states being visited). Do you foresee potential issues with this, and if so, do you have any ideas for solving them?”\n\nIf the distribution of states is wide enough, the optimization might drive the networks to approach each other asymptotically. However, we have not observed this happening in any of our experiments - the reward signal remained meaningful even after tens of billions of frames of processed experience.\n", "Next we would like to address your concerns about the lack of experimental details.\n“The way that the value function is trained (i.e. the objective function) is never explained in the paper. The value function in PPO is typically (according to the baselines repository) trained at each step to fit (GAE advantage + previous value), but in the paper this is not elaborated on.”\n\nIn this paper we omit the description of how PPO works because it is a standard method used in RL, and we provide a reference to the original paper, as well as our implementation in the code accompanying the paper. As described in the original PPO paper, we use generalized advantage estimation for fitting the baseline.\n\n“the statement that the extrinsic value function fits a stationary distribution on page 5 should be fixed”\nOur statement is that “the extrinsic reward function is stationary”, which we believe is accurate. 
The targets for the value function estimate through GAE are indeed non-stationary (since the value function feeds into these estimates), but our statement is specifically about the stationarity of the reward function.\n\n“In Table 4 the $\\lambda$ hyperparameter is listed, but is not described at all in the paper.”\nWe’ve added a reference to the PPO paper to Table 4 to clarify the meaning of $\\lambda$.\n\n“though it is possible to infer, the paper never explicitly defines the intrinsic reward $i_t$ in the main paper text.”\n\nWe’ve added the definition to the beginning of section 2.2.\n\n“The exact mechanism through which the \"forward dynamics\" baseline works is never given”\nWe believe that the third paragraph in section 3.6 describes the baseline in enough detail to reproduce it. The only difference from RND is that the target of the prediction problem is the features of the next state, rather than the current state, and that the action is additionally fed to the predictor network.\n\nMore generally, while we sympathize with your desire for completeness, the reason for somewhat concise descriptions of some of the technical points is the page limit on the main part of the paper. We had to balance the level of detail between addressing the intuitions behind our method, the interpretation of the experimental results, and descriptions of technical details. To aid the reader in understanding the technical details, we provide the full source code with the paper and moved other details to the appendix.\n\n“- Table 5 states that the values given are means, but does not say how many samples each mean was generated from until Table 6. The contents of Table 6 should be in the figure captions; it is important to understand how many samples graphs are generated with”\n\nWe moved this information from the captions to the appendix in order to save space in the main text, but at your request we have moved this information back into the captions.\n\n“the way that the shaded regions are calculated should be included up front in the first figure with them in it”\nWe have moved this information into the beginning of section 3.\n\n“How are the graph lines calculated? I am not sure, but they look like they have been smoothed out - the captions should indicate this if so. If they are smoothed, are the standard deviations calculated before or after smoothing?”\n\nWe have added a description of smoothing procedures to section 3.\n", "Thank you for your thoughtful feedback.\n\nWe would like to begin by addressing the comparison of our method to existing exploration baselines. This has been the main concern expressed in your review, as well as a concern raised in a public comment.\n\nThe previous SOTA result on Montezuma’s Revenge comes from Bellemare et al (2016). The technique uses a simple density model (CTS) to derive a pseudo-count bonus. They report two results, one with DQN as the policy optimizer and one with A3C. We ran an experiment with RND and 16 parallel actors (to match the 16 actors used in the A3C result). 
Below we compare the performance of RND (averaged over 5 seeds) with the published CTS results:\nRND at 150M frames: 4192\nDQN+CTS at 150M frames: 3705\nRND at 200M frames: 3831\nA3C+CTS at 200M frames: 1127\n\nAs the comparison shows, RND’s performance is comparable to DQN+CTS at 150M frames of experience (it’s hard to know whether the difference is statistically significant, due to the large variance of results and the greater instability of training with 16 actors compared to the setup used in our paper). We believe that the comparison to A3C+CTS is more meaningful, because both PPO and A3C are actor-critic methods collecting experience in the same way (from 16 parallel actors). In this comparison it is clear that PPO+RND performs better than A3C+CTS. We noticed that increasing the number of parallel actors trades off stability of training with sample efficiency, but even the results reported in the paper with 32 parallel workers are comparable to or better than all previously reported results (mean score of 3263 at 150M frames of experience, 3688 at 200M).\n\nThe purpose of our paper was to improve on scalable exploration methods. Among the scalable exploration bonuses, prediction-based bonuses are a prominent example. However, these bonuses are subject to the noisy-TV problem, and so the focus of this paper was to address this problem. As such, the dynamics-based baseline is the relevant baseline demonstrating the existence of the problem that we purport to fix.\n\nBelow we provide more details on why density-based pseudocount baselines (Bellemare et al, Ostrovski et al) are not as scalable as RND, which explains why we didn’t include them as baselines in our paper.\n_____\nScalability of density-based pseudocounts, info-gain approximations, and RND.\n\nThe second paragraph of the introduction in our paper argues for the importance of scalability of modern deep RL methods.\n\nPseudocount-based rewards are derived from the value of a density estimator of observed states before and after updating this density on the most recent observation. A difficulty of this approach is the computation of the pseudocounts on a batch of experience coming from parallel actors. There are several approaches to this computation. In one approach, each actor maintains its own density model. This makes the computation embarrassingly parallel, but the memory requirement scales linearly and the different workers optimize different reward functions, which might diverge from each other. Another approach is to share a density model and update it sequentially in an arbitrary order. This makes memory requirements tractable, but the computation time scales linearly with the number of workers. Finally, a compromise between these approaches shares the density model between workers, calculates the reward from a pre-update snapshot of the model for each experience in a batch in parallel, and then updates the model on the whole batch of experience. The memory requirements for this approach scale linearly with the number of workers. For this reason, it would be impractical to run these baselines for billions of frames, especially for expressive density models with sizeable numbers of parameters.\n\nApproaches that approximate an info-gain exploration bonus by comparing prediction errors before and after an update of a learned dynamics model are subject to the same fundamental scalability limitation.", "The results on Montezuma's Revenge are definitely very cool. 
However, I'm concerned that this one centrepiece result may be overshadowing several problems with the work in its current form.\n\nFirst of all, I wish there were a proper comparison done between RND bonuses and previous state-of-the-art novelty bonus schemes. Unless I've misinterpreted the start of Section 3, your agents were trained for 1.97 *billion* frames of experience. This is about ten times more experience than Ostrovski et al.'s agents were trained on, so it's hard to tell whether distillation bonuses are actually any more effective than the neural density bonuses or Bellemare et al.'s CTS scheme. Furthering my suspicion, instead of comparing against these novelty schemes, which are well-known and were previously state-of-the-art on Montezuma, you've compared against bonuses from training a forward dynamics model, with the justification: \"Burda et al. (2018) show that training a forward dynamics model in a random feature space typically works as well as any other feature space when used to create an exploration bonus.\" In reality though, Burda et al.'s results with this method on Montezuma are very underwhelming. It also bothers me that you've labelled your graphs in a way that obfuscates the amount of experience trained from. Don't get me wrong -- it's impressive that you've managed to train an agent to finish the first level of Montezuma -- but I suspect that this is mostly attributable to two factors: (1) running for more training time than previous agents, and (2) setting the extrinsic reward discount to 0.999 instead of 0.99. Figure 4(a) seems to support this conclusion.\n\nAnother major concern I have with the paper is how much it focuses on one game. Montezuma’s Revenge seems like a best case for RND, because the vast majority of pixels are static background and the few enemies that exist follow set paths. Therefore, most pixel-level novelty is driven by the protagonist’s movement. As such, it is not surprising that \"naive\" novelty heuristics, such as CTS and RND, do well in this game. However, such schemes may struggle in games like Freeway, where there are a lot of moving entities. (Martin et al.’s 2017 agent struggled in Freeway because it was “awed” by all the different cars driving past -- see \"Count-Based Exploration in Feature Space for Reinforcement Learning\".) Again, certain choices that you've made only heighten my suspicion: In Ostrovski et al. (2017) they actually classify *seven* games as being sparse reward and hard exploration: Gravitar, Montezuma’s Revenge, Pitfall, Private Eye, Solaris, Venture and Freeway. Why did you select all of these games except for Freeway?\n\nBreaking down your results on the other games tested doesn't do much to allay my concerns:\n- Pitfall: None of the agents learn anything, which is no better or worse than in previous work.\n- Private Eye: Ostrovski et al.'s PixelCNN agent reaches a score of around 15,000 points after around 30 million frames, whereas your agent takes over a billion frames to learn anything and doesn't beat this score.\n- Solaris: From Figure 7, it looks like RND is detrimental, if anything.\n- Gravitar: The RNN agent with RND is only marginally better than the RNN agent without RND. Further, \"state-of-the-art\" performance is only a result of training time. Ostrovski et al.'s Reactor-PixelCNN agent appeared to be on a very similar score trajectory at 150M frames.\n- Venture: Again, \"state-of-the-art\" performance is only a result of training time. 
Ostrovski et al.'s Reactor-PixelCNN agent reached 1400 points by only 150M frames, and appears to be on a very similar score trajectory to your agent.\n\nIn Ostrovski et al.'s work, they also test their agent on many non-sparse games. While it is not expected that exploration-focused agents will excel in dense reward games, it is important to validate that they do not significantly underperform. In your work, one setting that I believe may be particularly overfit is gamma_E = 0.999. In sparse reward games, using a very mild discount is OK, because the returns will never “blow up”. However, in dense reward games, using such a mild discount will cause the returns to grow very large and thus potentially cause instability. I’m very curious how your configuration would perform on Video Pinball, for example. In your blog post, you've only shown how the agent performs on dense reward games when the extrinsic rewards are turned *off*, which smells like deliberate cherry picking. (To be clear, I'm not saying that you *have* cherry-picked, but I think you should try to avoid this perception.)", "Throughout the paper and in the associated blog post, you've used the \"noisy TV problem\" as a motivating example. On page 8, you've noted that the dynamics-based agent gets stuck exploring the transition between rooms, because it can't accurately predict which room it will be in on the next frame. I can understand why RND avoids this problem: If the prediction network has seen room A and room B many times then it knows roughly what the random network is going to output in these rooms. However, if the agent is faced with *true* white noise, then won't the prediction error generally stay large? (Yes, given enough training time, it is true that the agent will have previously seen a noisy screen that is arbitrarily similar to the current one. Therefore, given enough representational capacity, the prediction error should *theoretically* go to zero. However, I really doubt that this is the case in practice. In Figure 1, it appears that even deep in training, there are still some states in the first room that yield a large prediction error. And this is the case despite the fact that the agent has seen similar screens many times before. In any event, you don't just need to show that the prediction error goes to zero on white noise -- you need to show that it becomes smaller than the prediction error elsewhere in the state space. Otherwise, the agent will still be encouraged to stare at the TV.)\n\nIn your video on the blog post, it looks like the TV isn't actually showing white noise, but rather a random image from a fixed set. Again, I can understand that the RND agent will eventually learn the random network's encoding of each image in the set, so it will eventually get bored with looking at the TV in this case. Unless I'm wrong above though, I think you should remove the term \"white noise\" and replace it with the example of a TV showing a random image from a fixed set.", "My apologies for posting late; I was seriously injured around the reviewer deadline.\n\n---------------------------------\n\nThe authors propose \"random network distillation,\" a method that adds an additional reward based on a proxy for \"exploration\" to the RL task at hand. The method works by including an extra term in the reward during training. The term is calculated as follows. A randomly initialized network is created during rollout generation. 
Another network is initialized as well, and during rollouts is trained to predict the output of the randomly initialized network applied to the states. The agent then uses a measure of the prediction loss as an intrinsic reward. These rewards are then included as part of the trajectory, and are predicted separately for training purposes.\n\nThe authors find that when you combine these intrinsic rewards with agents trained at extremely large scale (~2 billion frames per training run!) it is possible to perform very well on Montezuma's Revenge and other sparse reward tasks.\n\nOverall, the paper has great potential - it presents the first algorithm to solve a challenging sparse reward RL task. However, while the method itself is promising, the weak baselines (in particular, the lack of evidence disentangling the benefits of larger scale / more frames vs the benefits of the proposed method) and unclear presentation make me unable to yet recommend the paper for acceptance.\n\nPositive:\n - The work reaches the state-of-the-art on several sparse reward tasks, most notably Montezuma's Revenge\n - On Montezuma's Revenge, the method is able to pass through the first level, and explore the vast majority of rooms.\n - The reward mechanism seems to be novel\n\nNegative:\n - This work uses more than an order of magnitude more frames in training than all previous work. From the experiments given, it is impossible to distinguish the impact of RND vs larger scale training\n - The baselines are not very strong: The forward dynamics baseline does significantly worse on Montezuma's Revenge than the previous results in Ostrovski et al and Bellemare et al, even using more than an order of magnitude more frames.\n - Important experimental details lack adequate descriptions\n - Tables and figures are not written with adequate details\n\nDetails of negative feedback:\n\nMajor:\n-------------\nUnclear baselines and questionable improvement on SOTA:\n\n - Previous work (the neural density functions of Ostrovski et al or the CTS scheme of Bellemare et al.) used significantly fewer (~100 million and ~150 million respectively vs ~2 billion) frames of experience in solving Montezuma's Revenge, which makes this method’s benefit somewhat incomparable to previous methods given the sampling regime it operates in.\n - It is important to disentangle the impacts of:\n\n (1) Using many more (an order of magnitude) frames than previous methods\n (2) The presented RND bonus method\n\n and it is impossible to separate these without further extensive experimentation with previous methods. The main claim of the paper is that the RND bonus is a better method for solving hard exploration games; this needs to be shown through a rigorous comparison.\n - The fact that the forward dynamics baseline does worse than vanilla PPO (and the previous results in Ostrovski et al and Bellemare et al) on Montezuma's Revenge brings the strength of the used baseline into question\n\n\nOverall, the experimental details are greatly lacking:\n\n - The way that the value function is trained (i.e. the objective function) is never explained in the paper. The value function in PPO is typically (according to the baselines repository) trained at each step to fit (GAE advantage + previous value), but in the paper this is not elaborated on.\n - If this is indeed the case, then the statement that the extrinsic value function fits a stationary distribution on page 5 should be fixed.\n - In Table 4 the $\\lambda$ hyperparameter is listed, but is not described at all in the paper. 
I am guessing that it is the corresponding GAE hyperparameter, but I am not sure as the GAE method is never written about or cited throughout the paper.\n - The paper is not written in a way that is accessible to people that do not closely follow the line of work on sparse rewards. For example, though it is possible to infer, the paper never explicitly defines the intrinsic reward $i_t$ in the main paper text. The exact mechanism through which the \"forward dynamics\" baseline operates is never given.\n\n\nTables and figures do not give sufficient detail to know what they are describing:\n\n - Table 5 states that the values given are means, but does not say how many samples each mean was generated from until Table 6. The contents of Table 6 should be in the figure captions; it is important to understand how many samples graphs are generated with.\n - Similarly, the way that the shaded regions are calculated should be included up front in the first figure with them in it. At first I believed that the intervals were confidence intervals, but they are actually standard deviations.\n - How are the graph lines calculated? I am not sure, but they look like they have been smoothed out - the captions should indicate this if so. If they are smoothed, are the standard deviations calculated before or after smoothing?\n\nMinor:\n-------------\n - Figure 7 has only 3 random seeds compared. To make comparisons between the RND RNN and CNN policies, you should use more seeds/samples.\n - On page 2 it is said that previous exploration methods are difficult to scale; a (very short) explanation of why would be appreciated\n - On page 4, it would be good to explain why one would be concerned that episodic rewards can leak information about the task to the agent\n - It would be interesting to plot the RND exploration bonus over time as training progresses; this could give some insight into training dynamics that we cannot see from looking at reward trajectories alone.\n - It would be good to include experimentation around understanding if there is a benefit to using this technique in dense reward tasks.\n
If you want to show that \"many actors make it better\" then you can divide this by #actors... so that the curves still functionally look the same. This is an easy thing to change... but I think it's important to do this!\n- Like other \"count-based\" methods, this exploration bonus is not linked to the task. As such, you have to get \"lucky\" that you do the right kind of generalization from the \"random network\". I think that you should mention this issue, potentially in your section 2. That is not to say that this is therefore a bad method, but particularly with reference to (Osband et al., 2018) this approach does not address their observation from Section 2.4 of that paper... you don't necessarily get the \"right\" type of generalization from this random network (that has nothing to do with the task). You could then point out that, empirically, using a random convnet seems to do just fine in Atari! ;D\n- The whole section about \"pure exploration\" is somewhat interesting, but you shouldn't assess that performance in terms of \"reward\"... because that is just a peculiarity of these games... we could easily imagine a game where \"pure exploration\" gives a huge negative reward... but that wouldn't mean that it was bad at pure exploration! Therefore, how can you justify the quality of pure exploration by reference to the \"best return\"?\n- Although the paper is definitely good, and I've already outlined several truly novel additions from this paper, on another level the actual intellectual contribution of this paper is perhaps not *as* large as it may seem from the Abstract or associated OpenAI publicity/blog posts https://blog.openai.com/reinforcement-learning-with-prediction-based-rewards/\n + This paper is about adding an \"exploration bonus\" to RL rewards (this goes back at least to Kearns+Singh 2002)\n + The form of this bonus comes from prediction error on a random function\n + I have some concerns about the process of \"anonymous\" reviews in this \"blog+tweet\" setting\n\nOverall, I like the paper a lot, I think it must be accepted and also it's right at the top of ICLR best papers!\nThe writing is good, the results are good, the algorithm is good and I think it will have impact.\nThe main missing piece is a clear discussion of any of the algorithm's potential weaknesses - is this the final solution to exploration? What do you think about the issues of generalization? How would this perform in a linear system? What if the basis functions are not aligned?\nIt's not that the algorithm needs to address all of these things to be a good algorithm, but the paper should try to do a better job of highlighting any potential missing pieces - particularly when the results are so impressive.", "The paper presents a simple but remarkably efficient exploration strategy obtaining state-of-the-art results in a well-known hard-exploration problem (Montezuma's Revenge). The idea consists of several parts:\n1. The authors suggested distilling a fixed randomly initialized network into another randomly initialized trained network in order to use prediction errors as pseudo-rewards. The authors claim that the distillation error is a proxy for visit counts and experimentally demonstrate this idea on the MNIST dataset.\n2. The authors suggested using two separate value heads to evaluate expected rewards and expected pseudo-rewards with different time horizons (discount factors) under the same policy.\n\nThe paper is overall well written and easy to read. 
As far as I can tell, the use of a distillation error as an exploration reward is novel. The efficiency of the method relative to its simplicity should interest most people working in RL.\n\nThe main problem which I see is the presentation of learning curves as a function of training steps rather than acting steps. While I acknowledge that the achievement of state-of-the-art asymptotic performance is valuable on its own, presenting results as a function of acting steps (rather than parameter update steps) may better show data and exploration efficiency. This would also facilitate comparisons with other RL algorithms which may have different architectures (for example, multiple networks updated at different frequencies).\n\nI liked the idea of using two value heads to evaluate intrinsic and extrinsic values with different discounts. Still, as both heads share a common 'trunk' network, they will inevitably affect each other. For example, scaling the pseudo-rewards by 10 and scaling the pseudo-reward value function by 0.1 to produce the same summed value function may lead to different training dynamics due to the influence of the intrinsic value head on the extrinsic one. Are the results sensitive to this effect? Also, how sensitive are the results to the scale of the pseudo-rewards? What would happen if they were simply multiplied or divided by 10?\n\nAlso, what was the distributed training setting that you used to train your agent? Were the actors running on a single machine or on multiple machines? Was a single trainer running on a single machine training the network on batched observations, or was training distributed in some way? The reason why I am asking this is that, as the distillation error fundamentally depends on its training dynamics, I would not be surprised if the results could be affected by the training setting. For example, if the network was trained in a distributed setting, asynchronous updates could introduce implicit momentum and thus may cause the pseudo-reward to oscillate. While I do not think that is a fundamental problem with the work either way, it would be nice to know a few more details for future reproducibility.\n\nOther minor comments:\nFigure 2. It would be nice if both the x and y axes were plotted in log scale in order to visualize any power law (if one exists) between samples and MSE.\nFigure 3. I would prefer the x axis to be in the number of steps.\nFigure 4. Again, performance between different actor configurations would be easier to see if the x axis were the total number of steps, as it would be easier to see if the curves overlap and the method scales linearly with the number of actors.\n\n
[ -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 9, 10 ]
[ -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4 ]
[ "r1gvw_EzAm", "SkxVd-4zCm", "rkg65WVMRQ", "iclr_2019_H1lJJnR5Ym", "rkgyTSVGRX", "H1ek-LEfRm", "H1e2w4Ak0X", "Hye5WR-Y6Q", "Bkgy-aa-67", "S1lASKafpX", "rylpqi3tnX", "SkxVd-4zCm", "HkgYozuoaX", "iclr_2019_H1lJJnR5Ym", "Hye5WR-Y6Q", "iclr_2019_H1lJJnR5Ym", "iclr_2019_H1lJJnR5Ym", "iclr_2019_H1lJJnR5Ym" ]
iclr_2019_H1lqZhRcFm
Unsupervised Learning of the Set of Local Maxima
This paper describes a new form of unsupervised learning, whose input is a set of unlabeled points that are assumed to be local maxima of an unknown value function v in an unknown subset of the vector space. Two functions are learned: (i) a set indicator c, which is a binary classifier, and (ii) a comparator function h that given two nearby samples, predicts which sample has the higher value of the unknown function v. Loss terms are used to ensure that all training samples x are local maxima of v, according to h, and satisfy c(x)=1. Therefore, c and h provide training signals to each other: a point x′ in the vicinity of x satisfies c(x′)=−1 or is deemed by h to be lower in value than x. We present an algorithm, show an example where it is more efficient to use local maxima as an indicator function than to employ conventional classification, and derive a suitable generalization bound. Our experiments show that the method is able to outperform one-class classification algorithms in the task of anomaly detection and also provide an additional signal that is extracted in a completely unsupervised way.
accepted-poster-papers
The paper proposes a new unsupervised learning scheme via utilizing local maxima as an indicator function. The reviewers and AC note the novelty of this paper and good empirical justifications. Hence, AC decided to recommend acceptance. However, AC thinks the readability of the paper can be improved.
val
[ "BJle8y2LJ4", "SJleu-jZyN", "HygAnf8lyN", "rJl3jlGJ6X", "BJgPwCiuhQ", "B1xnM_vyAX", "ryxp2idFpQ", "S1gaKy9P37", "S1lMMR0W6Q", "BJe4X2ykpm" ]
[ "author", "official_reviewer", "public", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author" ]
[ "Thank you very much for pointing us to the NIPS 2018 work by Golan and El-Yaniv, which we will happily include in our next version. \n\nWe completely agree with AnonReviewer2 that the two methods are different in their scope and orthogonal in their contributions and are working on combining both methods. It will take more than a few days, since the implementations of the two methods were written in different frameworks.\n", "Thank you for pointing out this really interesting work.\n\nI am aware of this paper, and don't view it as in any sense - reducing quality of the paper under review, and as a reviewer - I am sticking to the currently assigned rating (8).\n\nWhile it might be interesting to point readers to the NIPS work in this paper, they are completely incomparable contributions. NIPS work is an image specific method, which focuses on data augmentation (to be more precise: enforced predefined geometrical transformation invariance), while paper under review is a generic scheme which happens to be applicable to one-class classification. Both methods seem orthogonal, and would be great to see them combined in some future work.", "Your results for one-class classification in Table 1 for CIFAR-10 are significantly inferior to the state-of-the-art. See the following NIPS paper:\nhttp://papers.nips.cc/paper/8183-deep-anomaly-detection-using-geometric-transformations.pdf\nTable 1 (page 9) in that paper shows outstandingly better results. Moreover, the performance of their algorithm is better for each and every class. Your average AUC for all CIFAR-10 experiments is 69.8, and the best known average AUC in that paper is 86.0.\nGiven these results, and the very high ratings of your paper, it is crucial to include the best known numbers (in the NIPS paper) in your Table 1.\nWe doubt it that the reviewers had given your paper such high ratings had they known about the state-of-the-art.", "In this paper, the authors focus on the task of learning the value function and the constraints in unsupervised case. Different from the conventional classification-based approach, the proposed algorithm uses local maxima as an indicator function. The functions c and h and two corresponding generators are trained in an adversarial way. Besides, the authors analyzed that the proposed algorithm is more efficient than the conventional classification-based approach, and a suitable generalization bound is given. Overall, this work is theoretically complete and experimentally sufficient.\n1.\tThe trained c and h give different predictions in most cases. As a unsupervised method, how to deal with them?\n2.\tIn Table3, why can h achieve better results when adding noise?\n", "This paper describes a new form of one-class/set beloning learning, based on definition of 4 player game:\n- Classifier player (c), which is a typical one-class classifier model\n- Comparator player (h), which given two instances answers if first is \"not smaller\" (wrt. set belonging) than the other\n- Classifier adversary player (Gc), which tries to produce hard to distinguish samples for (c)\n- Comparator adversary player (Gh), which tries to produce hard to classify samples for (h)\nThis way authors end up with cooperative-competitive game, where c and h act cooperatively to solve the problem, while Gc and Gh constantly try to \"beat\" them. 
\n\nOverall I find this paper to be interesting and worth presenting, however I strongly encourage the authors to rethink the way the story is presented so that it is more approachable by people who do not have much experience with viewing typical classification problems as games. In particular, one could completely avoid talking about \"sets of local maxima\" and just talk about the density estimation problem, with c being the characteristic function (of belonging to the support) and h being a comparator of the pdf.\n\nStrong points:\n- Novel, multi-agent in nature, approach to one-class classification\n- The proposed method builds a complex system, which can be used in a much wider class of problems than just classification (due to joint optimisation of the classifier and comparator)\n- Extensive evaluation on 4 problems\n- Nice ablation study showing that most of the benefits come from the pure c/Gc game (on average 68.8% acc vs 65.2% of just c, and 69.8% of the entire system) but that the h/Gh players do indeed still improve (an extra 1%). It might be interesting to investigate what exactly changed in c due to the existence of h in training. Are there any identifiable properties of the model that can now be analysed?\n\nWeak points:\nIn general I believe that the theoretical analysis is the weakest part of the paper, and while interesting - it is actually a minor point, and shows interesting properties, but not the ones that would guarantee anything in a \"practical setup\". I would suggest \"downplaying\" this part of the paper, maybe moving most of it to the appendix. \nTo be more specific:\n- Theorem 1 shows that the representation can be more compact; however, the existence of a compact representation does not really imply that this particular solution can ever be learned or that it is a good thing (the number of parameters is not correlated with the generalisation capabilities of the model).\n- Lemma 1 seems a bit redundant for the story. While it is nice to be able to show generalisation bounds in general, this paper is not really introducing a new class of models (since in the end c is going to be used for actual classification), but rather a training regime, and generalisation bounds do not tell us anything about the emerging dynamical system. The fact that adding v does not constrain c too much seems quite obvious, and as a result I would suggest moving this section to the appendix.\nInstead, if possible, the actual tricky mathematical bit for methods like this would be, in the reviewer's opinion, any analysis of the learning dynamics of a system like this. Multi-agent systems cannot be optimised with independent gradient descent in general (convergence guarantees are lost). Consequently many papers focus on methods that bring these properties back (e.g. Consensus Optimisation or Symplectic Gradient Ascent). It would be beneficial for the reader to spend some time discussing the stability of the proposed system, even if only empirically and on small problems.\n\nOther remarks:\n- eq. (1) is missing \\cdot\n- it could be useful to include explicit parameter dependences in (1) and (2) so that one sees how the losses really define an asymmetric game between the players\n- why do we need 4 players and not just 3, with Gc and Gh being a single player/neural network? Can we consider this as another ablation?\n- given the small performance gaps in Table 1, can we get error estimates/confidence intervals there? 
The Deep SVDD paper includes error estimates of the baseline methods.\n- since training is performed in mini-batches (the loss does not have to be decomposable over samples), shouldn't the equations be based on expectations rather than sums?\n", "Thank you for updating the paper and providing missing information. Wrt. point 2, I am fine with the current formulation. I find the empirical results on the lack of mode hopping intriguing, and would strongly suggest taking a deeper look into this phenomenon in the future. For the time being, I am increasing the score to 8, as the paper presentation (and results) significantly improved, and I believe it is really solid work.", "Thank you very much for the supportive and very detailed review.\n\nYou suggest repositioning the paper as a density estimation problem. After much consideration we decided that a more conservative approach, in which we leave the current presentation and add the new viewpoint, would serve us better at this point. Your exciting perspective is now added to the introduction and we have already received positive feedback on it from AnonReviewer3. \n\nFollowing your suggestion, we have moved the theoretical part to the appendix. One small remark -- going forward, and applying the dual model beyond unsupervised learning, we expect h to become more dominant than c. For example, we are exploring an event detection model where the events occur at the local maxima of h, in regions that are defined by c. \n\nReviewer: Multi-agent systems cannot be optimized with independent gradient descent in general (convergence guarantees are lost). Consequently many papers focus on methods that bring these properties back (e.g. Consensus Optimization or Symplectic Gradient Ascent). It would be beneficial for the reader to spend some time discussing the stability of the proposed system, even if only empirically and on small problems.\n\n\nAnswer: Following the review, we became familiar with the field of convergence of multi-agent systems. Thank you for pointing us in this direction. Our method could benefit in the future from the increased stability and theoretical guarantees one can obtain with these emerging methods.\n\nAs requested, we tried to evaluate this empirically. We took the example from [1] of a mixture of 16 Gaussians that are placed on a 4x4 grid and applied our method, as well as variations in which we trained only c or only h. Since our method is meant to model local maxima and not entire high-probability regions, we take a standard deviation that is ten times smaller than previous work. These results, which can be found in the latest revised version, indicate that when jointly training c and h, the former captures all 16 modes, and h is also informative. When training each alone, training results in mode hopping.\n\n[1] D. Balduzzi, S. Racaniere, J. Martens, J. Foerster, K. Tuyls, and T. Graepel. The Mechanics of n-Player Differentiable Games. ICML, 2018.\n\nTo the other comments:\n\n1. The \\cdot was added to Eq. 1\n\n2. Trying to add the parameter dependencies in Eq. 1 and 2 resulted in a cumbersome formulation. We therefore chose to address the dependencies with added text. Please let us know if you still prefer that we separate the equations.\n\n3. 
A three-player game is explored in two ways in the ablation of Tab. 2: (i) In the lines that say “with G_c only” we use only G_c to generate negative points for both c and h and report results for both of these functions, and (ii) Same for “with G_h only”, where G_h was used to generate negative points for both networks. We altered the text to better reflect this.\n\n4. We have added standard deviations to Tab. 1, similarly to the paper from which the baselines were taken. The results reported were already averaged over multiple runs.\n\n5. Expectations rather than sums -- Following the suggestion, we have replaced the sums with averages. Writing the equations as expectations would require the addition of slightly more terminology and we wish to avoid this. Note that while SGD is indeed used, every step of Alg. 1 is over the entire training set (since the training sets are small). ", "The reviewer feels that the paper is hard to follow. The abstract is confusing enough and raises a number of questions. The paper talks about \"local maxima\" without defining an optimization problem. What optimization problem are we talking about? Is it a maximization problem or a minimization problem? If we are dealing with a minimization problem, why do we care about maxima?\n\nThe first several paragraphs did not make the problem of interest clearer. But at least the fourth paragraph starts talking about training networks (the reviewer guesses this \"network\" refers to neural networks, not other types of networks (e.g., Bayesian networks) arising in machine learning). This paragraph talks about random initialization for minimizing a loss function; does this mean we are considering a minimization problem's local maxima? In addition, random initialization-based neural network training algorithms like backpropagation cannot guarantee giving local maxima or local minima of the problem of interest (which is the loss function for training). It is not even clear if a stationary point can be achieved. So if the method in this paper wishes to work with local maxima of an optimization problem, this may not be a proper example.\n\nThe next paragraph brings out a notion of a value function, and it is hard to follow what it is. A suggestion is to give a much more concrete example to enlighten the readers.\n\nThe next two paragraphs seem to be very disconnected. It is not properly defined what x is and how to obtain it. If they are local maxima of a problem, please give us an example: what is the optimization problem, and why is this an interesting setup?\n\nSince the problem setup of this paper is very hard to decode, it is also very hard to appreciate why the papers in the \"related work\" section are really related.\n\nThe motivation and intuition behind the formulations in (1) and (2) are hard to follow, perhaps because the goal and objective of the paper is unclear.\n\nOverall, there is no formal problem definition or statement, and the notions and terminologies in this paper are not properly defined or introduced. This makes evaluating this work very hard.\n\n\n========= after author feedback =======\nAfter discussing with the authors through OpenReview, the reviewer feels that a lot of things have been clarified. The paper is interesting in its setting, and seems to be useful in different applications. The clarity can still be improved, but this might be more of a style matter. The analysis part is a bit heavy and overwhelming and not very insightful at this moment. 
Overall, the reviewer appreciates the effort to improve the readability of the paper and would like to change the recommendation to accept.", "Thank you very much for your comments. \n\nIt is true that c and h are trained concurrently and that the training algorithm, presented as Algorithm 1, is almost symmetric between the two. However, the two networks differ for multiple reasons: (i) The structure of the two functions is different: c has one input, and h has two, and (ii) The loss is different: G_h, which is the network that generates negative points for h, generates points G_h(x) that are in the vicinity of point x.\n\nThese two differences are enough to ensure that h and c take different roles: c is what AnonReviewer2 calls a characteristic function (does x belong to the set), and h is a comparator of nearby points.\n\nWhen there are multiple aspects that define the given set of input points, e.g., class membership and quality, c and h would assume the role that fits their structure, and not a random role. \n\nIn addition, due to their loss, h and c strive to become anti-correlated, which further pushes them to take different roles. As mentioned, these roles are not arbitrary but depend on the structure of the two functions.\n\nIn the revision we uploaded earlier today, we put an additional emphasis on this asymmetry.\n\nTo your question #1:\n\nWe use either c or h based on our goal. If, for the image experiments, our goal is to detect out-of-class samples, we use c. If our goal is to detect low-quality images, we use h. In the cancer dataset experiment, h is more suitable for predicting the continuous value of survival we are interested in. A hypothetical scenario in which h and c play a different role in drug discovery is mentioned, for illustration, at the end of the discussion section.\n\nTo your question #2: \n\nThe results in Tab. 3 are reported for multiple experiments, which are given side by side for brevity. In the columns of the experiment “(i) class membership” we evaluate the typical one-class classification scenario, for which c is suitable. \n\nIn the other two scenarios, we test images from the training class vs. noisy images. In the experiment “(ii) Noise in-class” we evaluate the ability of each learned method to discriminate between images that are similar to those in the training set and images that are noisy versions of it. In this task, which is based on image quality, h, as a comparator, is more suitable. \n\nTo see why this is the case, consider the training of h, during which points x are compared with generated points x’ in the vicinity of x. Since the training points x are obtained from a set of real-world training images, they are likely to be of higher quality than the generated nearby points.\n", "It is always the authors’ responsibility to ensure that the readers understand their work. To maximize the probability that our work would be well understood, we have collected feedback from quite a few readers. Yet it seems that there is still room for improvement. \n\nWhile taking responsibility for this, we respectfully disagree with the claim of the reviewer that “there is no formal problem definition or statement, and the notions and terminologies in this paper are not properly defined or introduced.“ As can be seen, we define the problem we study multiple times: (i) it is defined clearly in the abstract (input, goal, which functions are learned, why, and how). (ii) it is defined again at the end of the introduction in the first three paragraphs of page 2. 
(iii) it is redefined again at the beginning of Sec. 3, since we were worried that some readers would skip the abstract and the introduction.\n\nParagraphs 1-4 motivate our method, by showing sample sets that arise in biology, man-made constructs, and weights of neural networks. The underlying value function in each case is explained: fitness or energetic efficiency in biology, an implicit value function in architecture (we mention a few possible factors), and an engineered loss in machine learning. \n\nIt seems that the reviewer was confused by the last example since it discusses machine learning. However, the paragraph merely describes a process that generates unsupervised samples that are the result of a local optimization process. The implication is that, similarly to the other examples, viewing the learned weights of each random initialization as points in a vector space, this set of vectors is a suitable input to our method. \n\nIt is emphasized in the abstract and then in the introduction that the value function is learned and that the local maxima are of that unknown function. The paper starts with “[the] input is a set of unlabeled points that are assumed to be *local maxima of an unknown value function* in an unknown subset of the vector space”. \n\nThe reviewer states that we discuss local maxima without stating the optimization problem. However, the local maxima we consider are of a function we seek to learn, not of an optimization problem. The notion of local maxima is discussed in the abstract, as it is actually applied: we learn a function h that compares the value of two points, and a local maximum is a point x such that every point x’ in the vicinity of x satisfies c(x’) = -1 or is deemed by h to be lower in value than x.\n\nThe notion of local maxima is also clearly defined in the intro: “In addition, we also consider a value function v, and for every point x', such that ||x' −x|| < eps, for a sufficiently small eps > 0, we have: v(x') < v(x)”. In practice, as mentioned early on in Sec. 3, and as is well motivated by the ambiguity of v, we learn a comparator function h and not v.\n\nThe reviewer says that “It is not properly defined what x is and how to obtain it”. The points x are the training samples and the definition of x is also given multiple times: \n(1) The abstract says “all training samples x”. \n(2) The introduction says that the points x are in the set S, which is defined as “Let S be the set of such samples from a space X”. The word “such” clearly refers in this context to unlabeled training samples. \n(3) This is repeated one paragraph below, at the beginning of Sec. 2, “The input to our method is a set of unlabeled points.” \n(4) As mentioned, we redefine x and the other concepts as soon as Sec. 3 starts, to make sure that all readers are aware of the setting. “Recall that S is the set of unlabeled training samples, and that we seek two functions c and v such that for all x \\in S it holds that: (i) c(x) = 1, and (ii) x is a local maxima of v.” By “seek” we mean learn, but since it is not the first time this is stated in the paper (even the previous paragraph mentions that the value function is learned), we used a different word.\n\nThe reviewer says that “The motivation and intuition behind the formulations in (1) and (2) are hard to follow, perhaps because the goal and objective of the paper is unclear.“ However, the terms of both equations are discussed one by one below them. 
These explanations are directly tied to the goals and objectives that appear earlier in the paper:\n(1) In the abstract: “Loss terms are used to ensure that all training samples x are local maxima of v, according to h, and satisfy c(x) = 1. Therefore, c and h provide training signals to each other: a point x’ in the vicinity of x satisfies c(x’) = −1 or is deemed by h to be lower in value than x.“\n(2) In the intro: “This structure leads to a co-training of v and c, such that every point x’ in the vicinity of x can be used either to apply the constraint v(x’) < v(x) on v, or as a negative training sample for c. Which constraint to apply, depends on the other function: if c(x’) = 1, then the first constraint applies; if v(x’) >= v(x), then x’ is a negative sample for c”.\n" ]
[ -1, -1, -1, 8, 8, -1, -1, 8, -1, -1 ]
[ -1, -1, -1, 3, 4, -1, -1, 3, -1, -1 ]
[ "HygAnf8lyN", "HygAnf8lyN", "iclr_2019_H1lqZhRcFm", "iclr_2019_H1lqZhRcFm", "iclr_2019_H1lqZhRcFm", "ryxp2idFpQ", "BJgPwCiuhQ", "iclr_2019_H1lqZhRcFm", "rJl3jlGJ6X", "S1gaKy9P37" ]
iclr_2019_H1x-x309tm
On the Convergence of A Class of Adam-Type Algorithms for Non-Convex Optimization
This paper studies a class of adaptive gradient based momentum algorithms that update the search directions and learning rates simultaneously using past gradients. This class, which we refer to as the ''Adam-type'', includes popular algorithms such as Adam, AMSGrad, and AdaGrad. Despite their popularity in training deep neural networks (DNNs), the convergence of these algorithms for solving non-convex problems remains an open question. In this paper, we develop an analysis framework and a set of mild sufficient conditions that guarantee the convergence of the Adam-type methods, with a convergence rate of order O(log T/√T) for non-convex stochastic optimization. Our convergence analysis applies to a new algorithm called AdaFom (AdaGrad with First Order Momentum). We show that the conditions are essential, by identifying concrete examples in which violating the conditions makes an algorithm diverge. Besides providing one of the first comprehensive analyses for Adam-type methods in the non-convex setting, our results can also help practitioners to easily monitor the progress of algorithms and determine their convergence behavior.
accepted-poster-papers
This paper analyzes the convergence properties of a family of 'Adam-type' optimization algorithms, such as Adam, AMSGrad and AdaGrad, in the non-convex setting. The paper provides one of the first comprehensive analyses of such algorithms in the non-convex setting. In addition, the results can help practitioners with monitoring convergence in experiments. Since Adam is a widely used method, the results have a potentially large impact. The reviewers agree that the paper is well-written, provides interesting new insights, and that its results are of sufficient interest to the ICLR community to be worthy of publication.
train
[ "BJecL9ugTX", "r1lS9eA6AQ", "rJgatHaaCm", "HJlleRKc07", "HkeyuhPLAQ", "SyeAXhDIAm", "S1xgznw8RX", "rygs0jvUAX", "HJlR3oD8RX", "B1esVdlCh7", "SkgumbaYn7" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary:\n\nThis paper presents a convergence analysis in the non-convex setting for a family of optimization algorithms, which the authors call the \"Adam-type\". This family incorporates popular existing methods like Adam, AdaGrad and AMSGrad. The analysis relies only on standard assumptions like Lipschitz smoothness and bounded gradients.\n\nIndividual Comments/Questions:\n\n- In Table 1, the characterization of Adam ignores the fact that, in practice, Adam adds a positive epsilon to the $\\hat{v}_t$ in the denominator. I would like the authors to at least comment on that in the paper. I assume that AdaFom and AMSGrad also need such an epsilon in practice. Could the author comment on whether (and how) this would affect their analysis of those methods? In particular, in Theorem 3.1, can we really assume the term $\\Vert \\alpha_t m_t / \\sqrt{\\hat{v}_t} \\Vert$ to be bounded by a constant without such an epsilon?\n\n- In the first bullet point in Section 3.1, the authors relate the term\n\n$$ \\sum_t \\Vert \\alpha_t g_t / \\sqrt{\\hat{v}_t} \\Vert^2 $$ (*)\n\nto the term $\\sum_t \\alpha_t^2$ in the analysis of SGD. I don't think this is a fair analogy. The effective step size of Adam-type methods is $\\alpha_t / \\sqrt{\\hat{v}_t}$, while the aformentioned term also contains the magnitude of the stochastic gradient $g_t$. So, while the SGD analysis only poses a condition on the step sizes, bounding (1) also poses a condition on the magnitude of the stochastic gradient.\n\n- In the experiments of Section 3.2.1, the authors use a step size of 0.01 for SGD (which really is gradient descent, since this is a non-stochastic problem). Existing theory tells us that GD only converges for step sizes smaller than 2/L where L is the Lipschitz constant of the gradient, which is L=200 in this example. So this is literally setting the method up for failure and I don't really see any merit in that experiment.\n\n- The experiments in Section 4 are of course very limited, but this paper makes a significant theoretical contribution, so I don't really see the need for extensive experiments.\n\n- To my knowledge, under similar assumptions, plain SGD has been show to converge at a rate of O(1/sqrt(T)). The convergence analysis presented here has an additional log(T) factor, so it is not really suitable to explain any possible benefits of these adaptive methods over SGD. This is totally fine in and of itself; after all the analysis of theses methods is hard and this is a great first step. The issue I have is that this is not mentioned in the paper at all. \n\nOriginality:\n\nTo the best of my knowledge, the convergence analysis of the Adam-type methods (including established methods AdaGrad, RMSprop, Adam, AMSGrad) in the _non-convex_ setting is a novel, original contribution. The authors also propose a new algorithm, AdaFom. This exact algorithm is proposed in [1], which was uploaded to arXiv before the ICLR deadline. However, this can be considered concurrent work.\n\nSignificance:\n\nThe convergence properties of popular optimization methods in machine learning (e.g., Adam) are generally very poorly understood in \"realistic\" settings. The analysis presented in this paper is an important step to better theoretical understanding of these methods which, in my opinion, is highly significant.\n\nCorrectness:\n\nThis was a short-notice emergency review and I did not check any of the proofs in the appendix. 
I will try to verify at least parts of the proofs in the coming days.\n\nConclusion:\n\nThis is an original paper making a significant theoretical contribution. I can't comment on the correctness of the mathematical analysis (yet). I'm cautiously recommending acceptance for now, but would be willing to upgrade my rating if the authors respond to my comments/questions.\n\n\n[1] Zou and Shen. On the Convergence of Weighted AdaGrad with Momentum for Training Deep Neural Networks. https://arxiv.org/abs/1808.03408.\n\n--------------------------------\nUpdate\n--------------------------------\n\nThe authors have provided a detailed response to my concerns and have fixed many of them in their revised version. I verified parts of the proofs in the appendix (Theorem 3.1 and its Corollaries). I congratulate the authors on their work and recommend acceptance!", "Without having followed all the details in the above response, the outlined analysis including the $\\epsilon$ sounds very interesting. I would encourage the authors to include this in a future version of the paper.\n\nIn the meantime, I verified some of the proofs in the appendix, specifically those of Theorem 3.1 as well as the Corollaries.\n\nOverall, I think this is a good paper. I will update my review and increase my rating shortly.", "Assumption that no gradient coordinate is 0 at initial point:\n\nWe will explicitly state this assumption in the next version. Specifically, we will add “Assume that $|(g_{1})_i| \\geq c, \\forall i$” in Corollaries 3.1 and 3.2. \n\n\nAnalysis after adding $\\epsilon$:\n\nYour comment on $\\epsilon$ is very helpful! We will add more discussion on the effect of $\\epsilon$ in our next version (or a future arxiv version if adding a few corollaries will be considered too much). \n\nAfter going through the proof with $\\epsilon$ added, we find that adding a proper $\\epsilon$ can indeed help with the worst-case convergence rate in our analysis. Meanwhile, with $\\epsilon$ added, we do not need to assume that no gradient coordinate is zero at the first iteration.\n\nSince we are not able to modify the paper at this stage, we go through what will happen in the proof of Corollary 3.1 here (it is just a simple modification of the original proof). \n\nWe consider adding $\\epsilon$ as replacing S4 in Algorithm 3 (AMSGrad) with $x_{t+1} = x_t - \\alpha_t m_t / (\\sqrt{\\hat{v}_t} + \\epsilon)$. The resulting algorithm is also a special case of Algorithm 1 because $(\\sqrt{\\hat{v}_t} + \\epsilon)$ is still a function of all past gradients (Theorem 3.1 still holds). Then we can use (5) to analyze the convergence of the new Algorithm 3. Briefly speaking, the analysis requires only a few modifications in the proof of Corollary 3.1 ($(\\hat{v}_1)_j \\geq c$ replaced by $(\\hat{v}_1)_j + \\epsilon \\geq \\epsilon$, and $(\\hat{v}_t)_j \\leq H$ replaced by $(\\hat{v}_t)_j + \\epsilon \\leq H + \\epsilon$).\n\nNow we provide a detailed discussion of the modifications in the proof of Corollary 3.1 if $\\epsilon$ is added. Our following discussion is based on Section 6.2.3 in the paper. As the reviewer may have already noticed, our proof of Corollary 3.1 relies on upper bounding Term A and Term B in (5) of Theorem 3.1 and finding G in the assumption $\\| \\alpha_t m_t / \\sqrt{\\hat{v}_t} \\| \\leq G$. 
With $\\epsilon$ added in the denominator, we can easily have $\\| \\alpha_t m_t / (\\sqrt{\\hat{v}_t} + \\epsilon) \\| \\leq H/\\epsilon$ because we have $\\|m_t\\| \\leq H$ (due to $\\|g_t\\| \\leq H$).\n\nWe now discuss how $\\epsilon$ affects the rest of the proof. \n\nAbove the first unnumbered inequality on page 27, we have assumed that $(\\hat{v}_1)_j \\geq c > 0$. With $\\epsilon$ added, we have $(\\hat{v}_1)_j + \\epsilon \\geq \\epsilon$. Thus we can replace $c$ in the remaining proof with $\\epsilon$ and allow $(\\hat{v}_1)_j = 0$ (which removes the assumption that the first gradient is non-zero at every coordinate). \n\nThe rest of the proof will easily go through by replacing $\\hat{v}_t$ with $\\hat{v}_t + \\epsilon$ until (41). ($\\hat{v}_t + \\epsilon$ is still monotonically increasing since $\\hat{v}_t$ is; this enables the telescoping sum in (40).)\n\nNotice that we have used $(\\sqrt{\\hat{v}_t})_j \\leq H$ below (41); with $\\epsilon$ added, we replace it by $(\\sqrt{\\hat{v}_t})_j + \\epsilon \\leq H + \\epsilon$. Then the $1/H$ will be replaced by $1/(H+\\epsilon)$ from the unnumbered inequality below (41) to the inequality below (42). Then we can substitute $G=H/\\epsilon$ and the expressions of $C_1, C_2, C_3, C_4$ (at the end of Section 6.2.2) into the final bound at the end of the proof.\n\nIn the end, the RHS of the last inequality before the end of the proof has the form $A/\\epsilon^2 + B/\\epsilon + C \\epsilon + D$ with $A,B,C,D$ being numbers independent of $\\epsilon$. Thus a proper $\\epsilon$ can be chosen to minimize the upper bound.\n\nIt is easy to see that the optimal $\\epsilon$ is neither $\\infty$ nor 0 but something in between. Intuitively, this is because when $\\epsilon$ is very large, the effective stepsizes will be very small and the algorithm will not make fast progress. When $\\epsilon$ is very small, the algorithm may move unpredictably due to very large effective stepsizes at the early stage of optimization. Again, the theoretical bound agrees with what people observe in practice.\n\nThe analysis of AdaFom in the presence of $\\epsilon$ will be similar (even simpler than the current proof of Corollary 3.2 because it is easier to bound Term A in (5)).\nThe main reason for ignoring $\\epsilon$ in our original version is the resulting simplicity of the convergence analysis. However, based on the discussion with the reviewer, we think it is very interesting to see how $\\epsilon$ affects the performance of AMSGrad and AdaFom. Again, we will add more discussion on adding $\\epsilon$ in our next version. We sincerely hope that our response has addressed the reviewer’s concern. \n", "I thank the authors for their detailed reply.\n\nThe revised version contains clarifications and explanatory comments on the \"cumulative step sizes\" (and how that quantity relates to SGD) as well as the additional log(T) factor in the convergence rate compared to SGD. I think this greatly improves the clarity of the paper.\n\nRegarding the epsilon offset for some of the adaptive methods: I had missed the relevant parts of the proofs of Corollary 3.1 and 3.2; thanks for pointing them out. However, the fact that this does not need an epsilon relies on a hidden assumption that the initialization point is such that no gradient coordinate is zero. 
I would ask the authors to clearly state that assumption in the text of the Corollaries.\n\nAlso, I stand by my point that the analysis would be much more interesting if it would incorporate any possible effects of the epsilon since, to my knowledge, this is a practically important aspect of these Adam-style methods. (Alternatively, the authors might want to explain why they think that such an epsilon would not considerably affect the theoretical results.)", "We thank all reviewers for the positive comments on our paper. Regarding the reviewers’ specific questions, we have carefully addressed each of them, and further revised our paper and presentation. We have also enriched our experiments by including the example of training a CIFARNET model on the CIFAR-10 dataset. ", "Thank you for your positive review. In the experiment in Section 3.2.2, we consider a convex function f = \\sum f_i, where f_1 is convex but f_i are nonconvex for i > 1. However, it is easy to extend the above function f to the nonconvex setting if needed. Because the divergence results are essentially caused by function properties around neighborhoods of local minima, we can change the shape of the convex function outside a small region around the local minimum to make it non-convex (we will get similar experimental results and our analysis will still work). But we tried to make our examples as simple as possible so that the reason for divergence can be easily understood.\n\nThe purpose of the MNIST experiment is just to confirm the performance of different algorithms. To test our algorithms in a larger problem setting, we have added experiments for training CIFARNET on CIFAR-10 in Section 4.\n", "Thank you for your positive comments. Aside from our theoretical analysis, you may find our experiments and discussions in Section 6.1.1 interesting. We compared the performance of Adam, AMSGrad, and SGD on a quadratic problem and our finding is that adaptive methods are more robust to choices of stepsizes. This can be beneficial when the structure of the optimization problem is unknown and the best stepsizes are difficult to obtain.", "Algorithm in [1]: \nThank you for emphasizing this highly related work. Thanks to the explanation provided in the new version of [1], we found that AdaFom is slightly different from the algorithm (AdaHB) in [1]. As mentioned in Section 4 of [1], $m_t$ in AdaFom is an exponential moving average of $g_t$ while $m_t$ in AdaHB in [1] is an exponential moving average of $\\alpha_t g_t /\\hat{v}_t$.\n\nAgain, thank you for your positive feedback and timely review. ", "Thank you for your valuable feedback. We respond to your comments and questions point by point.\n\nAdding $\\epsilon$: \nFor Generalized Adam (Algorithm 1), adding $\\epsilon$ to $\\hat{v}_t$ does not affect our convergence analysis. Our Theorem 3.1 still holds since in Algorithm 1, $\\hat{v}_t$ takes a very general form that can cover the cases where $\\hat{v}_t$ is lower bounded by manually adding an $\\epsilon$.\nThen the real question is: for specific algorithms, can the term $\\Vert \\alpha_t m_t / \\sqrt{\\hat{v}_t} \\Vert$ in Theorem 3.1 be upper bounded by a constant? \n\nFirst, the reviewer is correct that adding epsilon helps in this regard because it lower-bounds $\\hat{v}_t$ by epsilon and $\\Vert \\alpha_t m_t / \\sqrt{\\hat{v}_t} \\Vert$ can be easily upper bounded. In this case, our theory still holds. 
We have clarified this point after Theorem 3.1.\n\nHowever, for the specific algorithms AdaFom and AMSGrad, we actually do not need to add $\\epsilon$ to ensure convergence. In particular, by assuming $\\|g_t\\|$ is upper bounded as in Assumption A2, the upper boundedness of $\\Vert \\alpha_t m_t / \\sqrt{\\hat{v}_t} \\Vert$ is automatically satisfied. The upper boundedness is verified at the end of Section 6.2.3 and Section 6.2.4. Following your suggestion, we added a comment after Theorem 3.1 that the upper boundedness of the term is automatically satisfied by AdaFom and AMSGrad. \n\nAnalogy to SGD:\nYou are right. Rigorously, $\\sum_{t} \\| \\alpha_t m_t / \\sqrt{\\hat{v}_t} \\|^2$ should be a generalization of $\\sum_{t} \\| \\alpha_t g_t \\|^2$ in SGD. We made the analogy because under the assumption that $\\|g_t\\| \\leq G$, $\\sum_{t} \\| \\alpha_t m_t / \\sqrt{\\hat{v}_t} \\|^2$ is upper bounded by $G^2 \\sum_{t} \\| \\alpha_t / \\sqrt{\\hat{v}_t} \\|^2 $. The latter reduces to $\\sum_{t}\\alpha_t^2 $ when ignoring other constants (G and dimension d). We have changed $\\sum_{t}\\alpha_t^2$ to $\\sum_{t} \\| \\alpha_t g_t \\|^2$ in the paper to avoid confusion.\n\n\nExperiments in Section 3.2.1: \nThe purpose of the experiment in Section 3.2.1 is to show that Adam-type algorithms may diverge when term A in (5) grows too fast. Both the divergence of Adam and that of SGD serve as support for the aforementioned claim. For SGD, we intentionally set up an experiment to make it diverge and observe whether the growth of term A in (5) agrees with our theory. This is only to verify our theory and the purpose is not to provide new insights about SGD (since SGD is well-studied). \n\nOn the contrary, the divergence of Adam in the experiment is the interesting part and it provides some new insights. The experiment says that even for batch versions, Adam is not guaranteed to converge to a stationary point using a small constant stepsize (this is different from batch GD, which is guaranteed to converge using a stepsize smaller than 2/L). This is a new discovery in our experiment. The message we want to convey is the following: the convergence requirement of limiting the growth rate of term A to be slower than that of the accumulation of effective stepsizes in (5) is not an artifact, nor is it too restrictive, because SGD and Adam can diverge even when term A grows at the same speed as the accumulation of effective stepsizes.\n \nExperiments in Section 4: \nThank you for your support. To make our experiments more convincing, we have added experiments on larger problems (training CIFARNET on CIFAR-10) in the revised paper (Section 4).\n\n\nPossible benefits: \nThank you for pointing out this possible confusion. In our new version, we explicitly point out that the possible benefits of adaptive methods may not lie in the worst-case convergence rates, but can be explained by other factors such as robustness to choices of stepsizes (the last paragraph in Section 3.3).\n\nAs mentioned, we are not explaining the benefits by claiming a faster worst-case convergence rate. The main benefit we are showing is the robustness to the choice of stepsizes compared to SGD. You may find the experiments in Section 6.1.1 interesting, where we compare the performance of SGD and adaptive methods under different choices of stepsizes. The message obtained from the experiments is that adaptive methods may converge, or converge faster, for a larger range of stepsizes compared to SGD. 
Therefore adaptive methods may perform much better on average if we choose stepsizes randomly (which is usually the case in practice since it is time consuming to test all possible values of stepsizes). In the future, we will try to show the possible benefits of certain adaptive methods in a more rigorous way.\n\nWe have also commented on the presence of the additional log term in convergence rates at the end of Section 3 in the revised manuscript.\n\n", "The work studies the convergence properties of an \"Adam-type\" class of optimization algorithms used for neural networks. \nThe “Adam-type” class includes popular algorithms such as Adam, AMSGrad and AdaGrad. Mathematical analysis is conducted to study the convergence of those algorithms in the non-convex setting. The authors derive theorems that guarantee the convergence of Adam-type algorithms under certain conditions to first-order stationary solutions of the non-convex problem, with O(log T /√ T) convergence rate. The conditions for convergence presented in this work are “tight”, in the sense that violating them can make an algorithm diverge. In addition, these conditions can also be checked in practice to monitor empirical convergence, which gives a positive practical aspect to this work. The authors propose a correction to the Adam algorithm to prevent possible divergence, and propose a new algorithm called AdaFom accordingly. \nOverall this seems like high-quality work with an interesting contribution to the research community. This reviewer is not an expert in the theoretical analysis of optimization algorithms; therefore, it is hard to assess the true contribution of this work and its comparison to other works in this field. ", "The main theory points out two scenarios causing Adam-type optimizers to diverge, which extends Reddi et al.'s results. \n\nThe theorem in this paper applies to all Adam-type algorithms, which combine momentum with adaptive learning rates and thus are more general compared to the recent papers, such as Zhou et al.'s. The relationship between optimizers' effective step size, step size oscillation and convergence is well demonstrated and is interesting.\n\nRemarks:\n1. The main theorem and proof are based on the non-convex setting while the examples to demonstrate the convergence condition are simple convex functions.\n\n2. The message delivered by the MNIST experiment is limited, is not clear and is not very relevant to the main part of the paper. It would be better to compare these algorithms in larger deep learning tasks.\n\nTypo:\nPage 5, section 3.1: Term A is a generalization of the term alpha^2 g^2 (instead of just alpha^2) for SGD." ]
[ 7, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, 2, 3 ]
[ "iclr_2019_H1x-x309tm", "rJgatHaaCm", "HJlleRKc07", "rygs0jvUAX", "iclr_2019_H1x-x309tm", "SkgumbaYn7", "B1esVdlCh7", "HJlR3oD8RX", "BJecL9ugTX", "iclr_2019_H1x-x309tm", "iclr_2019_H1x-x309tm" ]
iclr_2019_H1xD9sR5Fm
Minimum Divergence vs. Maximum Margin: an Empirical Comparison on Seq2Seq Models
Sequence to sequence (seq2seq) models have become a popular framework for neural sequence prediction. While traditional seq2seq models are trained by Maximum Likelihood Estimation (MLE), much recent work has made various attempts to optimize evaluation scores directly to solve the mismatch between training and evaluation, since model predictions are usually evaluated by a task-specific evaluation metric like BLEU or ROUGE scores instead of perplexity. This paper puts this existing work into two categories, a) minimum divergence, and b) maximum margin. We introduce a new training criterion based on the analysis of existing work, and empirically compare models in the two categories. Our experimental results show that our new training criterion can usually work better than existing methods, on both the tasks of machine translation and sentence summarization.
accepted-poster-papers
The reviewers agree that the paper is worthy of publication at ICLR, hence I recommend accept. Regarding section 4.3 of the submission and the claim that this paper presents the first insight for existing work from a divergence minimization perspective, as pointed out by R2, I went and checked the details of RAML and they have similar insights in their equations (5) and (8). Please make this clearer in the paper. Regarding evaluation using greedy search instead of beam search, please consider using beam search for reporting test performance as this is the standard setup in sequence prediction. Please take my comments and the reviews into account and prepare the final version.
train
[ "SJevAu1qhX", "rJxph-Uo3X", "ByeC4dDphX", "BkgNtCr_0X", "BJxp2tSERQ", "Hylqq_B40Q", "rkegpIsbA7" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "In this paper the authors distinguish between two families of training objectives for seq2seq models, namely, divergence minimization objectives and max-margin objectives. They primarily focus on the divergence minimization family, and show that the MRT and RAML objectives can be related to minimizing the KL divergence between the model's distribution over outputs and the \"exponentiated payoff distribution,\" with the two objectives differing in terms of the direction of the KL. In addition, the authors propose an objective using the Hellinger distance rather than the KL divergence, and they conduct experiments on machine translation and summarization comparing all the considered objectives.\n\nThe paper is written extremely clearly, and is a pleasure to read. While the discussion of the relationship between RAML and MRT (and MRT and REINFORCE) is interesting and illuminating, many of these insights appear to have been discussed in earlier papers, and the RAML paper itself notes that it differs from REINFORCE style training in terms of the KL direction.\n\nOn the other hand, the idea of minimizing Hellinger distance is I believe novel (though related to the alpha-divergence work cited by the authors in the related work section), and it's nice that training with this loss improves over the other losses. Since the authors' results, however, appear to be somewhat below the state of the art, I think the main question left open by the experimental section is whether training with the Hellinger loss would further improve state of the art models. Even if it would not, it would still be interesting to understand why, and so I think the paper could be strengthened either by outperforming state of the art results or, perhaps through an ablation analysis, showing what aspects of current state of the art models make minimizing the Hellinger loss unnecessary.\n\nIn summary,\n\nPros:\n- well written and interesting\n- a new loss with potential for improvement over other losses\n- fairly thorough experiments\n\nCons:\n- much of the analysis is not new\n- unclear if the proposed loss will improve the state of the art, and if not why \n\nUpdate after author response: thanks for your response. I think the latest revision of the paper is improved, and even though state of the art BLEU scores on IWSLT appear to be in the mid 33s, I think the improvement over the Convolutional Seq2seq model is encouraging, and so I'm increasing my score to 7. I hope you'll include these newer results in the paper.", "The authors have updated the paper and clarified some things, and now my impression of the paper has improved. It still feels a little incremental to me, but the potential application areas of these sorts of models are quite large and therefore incremental improvements are not insignificant. This paper suggests some natural follow-up work in exploring Hellinger distance and other variations for these models.\n\n----- original review follows: ------\n\nThis paper discusses loss functions for sequence generation tasks that take into account cost functions that reflect the task-specific evaluation metric. They compare RAML and risk (MRT) formally and empirically, and also test a loss based on Hellinger distance. They compare these to some standard max-margin losses. MRT and the Hellinger distance loss perform best in NMT and summarization experiments. 
\n\nPros:\n\nThere are some interesting aspects of this paper:\n\n- It is interesting to note that RAML and MRT are (similar to) different directions of KL divergences between the same two distributions. (Caveat: the entropy regularizer, which I discuss in \"Cons\" below.)\n- The new Hellinger-distance-based loss seems promising. \n- The empirical comparison among losses for standard NMT/summarization tasks is a potentially valuable contribution.\n\nCons:\n\nA.\nThe focus/story of the paper needs some work. It is unclear what the key contributions are. I think the Hellinger distance loss is potentially the most important contribution, but the authors don't spend much time on that. It seems that they think the comparison of divergence and max-margin losses is more central. However, I think the authors' conclusion (that the divergence losses are better than the max-margin losses) is not the main story, because RAML is not much better than the max-margin losses. Also, I have some concerns about some of the details of the max-margin losses (listed and discussed below), so I'm not sure how reliable the empirical comparison is. \n\nB.\nAs for the connections and comparison between RAML and MRT: \n\nIt does not seem that MRT corresponds to a divergence of the form given at the start of Sec. 4. There is also an entropy regularizer in Eq. (9). Sec. 4.3 states: \"By comparing the above two methods, we find that both RAML and MRT are minimizing the KL divergence between the model output distribution and the exponentiated payoff distribution, but with different directions of D_KL.\" However, this statement ignores the entropy regularizer in Eq. (9). \n\nMaybe I'm being dense, but I didn't understand where Equation (10) comes from. I understand the equations above it for RAML, but I don't understand the MRT case in Eq. (10). Can you provide more details?\n\nI also don't understand the following sentence: \"It turns out that the hyperparameter \\tau in RAML and \\alpha in MRT have the same effect.\" What does this mean mathematically? Also, does this equivalence also require ignoring the entropy regularizer? As formulated, L_{RAML} necessarily contains a \\tau, but L_{MRT} does not necessarily contain an alpha. It is only when moving to the sample approximation that the alpha is introduced. (MRT does not require this sample approximation; some older work on MRT developed dynamic programming algorithms to exactly compute the gradients for structured output spaces like sequences, so samples were not used in those cases.) So I think the paper needs to clarify what exactly is meant by the connection between tau and alpha, under what conditions there is a connection between the two, and what exactly is the nature of this connection. If more space is needed for this, many of the details in Sec 3 can be cut or moved to appendices because those are standard and not needed for what follows. \n\nIn the experimental results (Sec. 7), MRT outperforms RAML consistently. The authors discuss the impact of the directionality of the KL divergence, but what about the entropy regularizer? It would be interesting to compare MRT both with and without the entropy regularizer. Without the regularizer, MRT would actually correspond to the KL that the authors are describing in the discussion. As it currently stands, two things are changing between MRT and RAML and we don't know which is responsible for the sizable performance gains. \n\nC. 
\nThere are several technical issues with the writing (and potentially with the claims/conclusions), most of which are potentially fixable with some corrections and more exposition by the authors:\n\nIs L_{RAML} to be maximized or minimized? Looks like maximized, but clearly both L_{MLE} and L_{MRT} are supposed to be minimized, so the use of L for all of these seems confusing. If different conventions are to be used for each one, it should be explicitly mentioned in each case whether the term is to be maximized or minimized.\n\nAt the end of Sec. 4.1, q' is not defined. I can guess what it is, but it is not entirely clear from the context and should be defined. \n\nIn Equation 6, please use different notation for the y in the denominator (e.g., y') to avoid collision with the y in the numerator and that on the left-hand side. \n\nThe discussion of max-margin losses in Sec. 5 has some things that should be fixed. \n\n1. In Sec. 5, it is unclear why \\Delta is defined to be the difference of two r() functions. Why not just make it -r(y, y^*)? Are there some implied conditions on \\Delta that are not stated explicitly? If \\Delta is assumed to be nonnegative, that should be stated. \n\n2. In Eq. (11), F appears to be a function of y and theta, but in the definition of F, it has no functional arguments. But then further down, F appears to be a function of y only. Please make these consistent.\n\n3. Eq. (11) is not only hard for structured settings; it is also hard in the simplest settings (binary classification with 0-1 loss). This is the motivation for surrogate loss functions in empirical risk minimization for classification settings. The discussion in the paper makes it sound as if these challenges only arise in the structured prediction setting. \n\n4. I'm confused by what the paper is attempting to communicate with Equations 12 and 13. In Eq. 12, y on the left-hand side is not bound to anything, so it is unclear what is being stated exactly. Is it for all y? For any y? In Eq. 13, the \\Delta on the right-hand side is outside the max over y -- is that really what was intended? I thought the max (the slack-rescaled loss-augmented inference step) should take into account \\Delta. Otherwise, it is just doing an argmax over the score function. \n\n5. If the authors are directly optimizing the right-hand side of the inequality in Equation 12 (as would be suggested by the formula for the gradient), then there is no global minimum of the loss. It would go to negative infinity. Typically people use a \"max(0, )\" outside the loss so that the global minimum is 0. \n\n\n\nTypos and minor issues follow:\n\nSec. 1:\n\"the SEARN\" --> \"SEARN\"\n\"the DAGGER\" --> \"DAGGER\"\n\nSec. 2:\n\"genrally\" --> \"generally\"\n\"took evaluation metric into training\" --> \"incorporated the evaluation metric into training\"\n\"consistant\" --> \"consistent\"\n\nSec. 3:\nUse \\exp instead of exp.\n\"a detail explanation\" --> \"a detailed explanation\"\n\nSec. 4.1:\n\"predition\" --> \"prediction\"\n\nSec. 4.2:\n\"only single sample\" --> \"only a single sample\"\n\nSec. 6.1:\n\"less significant than\" --> \"less significantly than\"\n
\n\nHowever, the originality and significance of this work are weak.", "We’ve uploaded a new manuscript to address the following problems:\n1) We modified the abstract and introduction to highlight our main contributions.\n2) We analyzed the similarity and difference between RAML and MRT using cross entropy now, which is more precise, since it doesn’t need the entropy regularizer.\n3) We changed the writing in the max-margin section and corrected a lot of math problems.\n4) Other changes included correcting typos, changing wrong notations, etc. And both $L$ need to be minimized now.", "Thanks for your comments. \nAbout our contributions, we want to make the following clarification,\n\n1. This paper is more than simply comparing several training criteria in seq2seq models. Actually, from an intuitive observation and careful consideration, for the first time we categorize existing work into two categories only in terms of the loss function mathematics, and discover useful connections among those criteria (RAML and MRT are just different directions of the KL divergence/cross entropy, and both minimum divergence and maximum margin try to predict the evaluation score using $\\log p$ during training), which is also the first insight into all this existing work from such a perspective. Much analysis (including the link between MRT and REINFORCE and the similarity and difference between minimum divergence and maximum margin) is original.\n\n2. We propose a new training criterion based on the analysis of existing work, and the new training criterion improves the baseline by a large margin. The use of Hellinger distance is novel for sure.", "Thanks for your helpful comments. \n\n1. “much of the analysis is not new”:\nThe relationship between RAML and reinforcement learning based criteria has been discussed in the RAML paper, while our contributions are:\na) linking MRT to REINFORCE, which is new.\nb) linking the max margin criterion used in [1] to Eq (10), which is new, by deriving from the analysis of RAML and MRT\nc) the training criterion, Hellinger distance, which is new.\n\nWe are sorry if you find some already existing material in the analysis part; it is sometimes included for better background comprehension. For example, it will be quite difficult for readers to understand if we remove the analysis of RAML and MRT. \n\n\n\n2. comparison with state of the art\nWe made a brief survey of previous state-of-the-art models: \n\n1) Google’s Neural Machine Translation System (no code, no pretrained models)\n2) Convolutional Sequence to Sequence Learning (code + pretrained models)\n3) Transformer (code, no pretrained model uses our dataset)\n4) BERT (code is available, but is still under review now, and an extra monolingual dataset is needed)\n\nConsidering our familiarity with the existing code and the difficulty of modifying it, we chose to re-implement our new training criterion on convolutional seq2seq (since BERT is still under review now, and no suggested hyperparameters are provided for Transformer on the small dataset we use). The results on the IWSLT dataset are as follows:\n\ncriterion BLEU\nMLE baseline 32.14\nHellinger 32.30\n\nThe improvement is much smaller than the results reported in our paper; however, the improvement does exist even on such a strong baseline. A potential reason for the smaller improvement is the batch size. 
In the standard implementation of MRT, we use a sample size of 100 and a batch size of 1 (due to the limited number of GPUs we have), while in the README files of both conv-seq2seq and Transformer, the authors stressed the importance of a large batch size. \nDue to the limited computational resources at hand, we have no way to explore the impact of the batch size for our case. However, we are aware that recently reported results in the literature have indicated that a larger batch size plays a crucial role in absolute NMT performance improvement. We are optimistic that the relatively small improvement is mostly due to this factor. \n\n[1] Seq2seq learning as beam search optimization, Wiseman and Rush, EMNLP 2016\n", "Thanks for your comments. \n\nA. \nabout the focus/story: \nWe’ve changed the writing in the abstract and introduction to fit the main story better. Thanks for the helpful comments. \nAbout the paper structure:\nActually we did consider changing the paper structure to give more space to the Hellinger distance; however, in order to define $p$ and $q$, and explain why we minimize the Hellinger distance between $p$ and $q$, we do think we need to clearly describe RAML and MRT. Many necessary equations and the comparison of existing work have been introduced before the section on the Hellinger distance. If we remove the sections of RAML and MRT, it will be more difficult to read. \n\n\nB. \nThe relationship is easier to understand if we replace the KL divergence by cross entropy, in which case neither RAML nor MRT needs the $q\\log q$ term. Now we change the KL divergence to cross entropy. \n\nAbout Eq 10: We made a mistake here before. $p$ in L_MRT should be replaced by $p’$ (since we are talking about Shen et al.’s MRT-for-NMT paper). We’ve changed the writing of Eq 10. \n\nAbout the connection between alpha and tau: we’ve changed the writing here. Our previous intention was only to point out that alpha and tau are in the same place after taking \\log and they can be simply understood as smoothing techniques. Since MT is more an engineering problem than a theoretical one, we sometimes cannot expect a perfect link among existing work. Our main goal of introducing the link among previous work (as mentioned by reviewer 2, some similar discussions have appeared in a previous paper) is to explain how our idea of using the Hellinger distance arose. We’ve also changed the writing here. \n\nAbout the regularizer: as mentioned before, if we replace KL by cross entropy, then the comparison of having or not having the regularizer seems unnecessary. We will do some experiments on this question if we have enough time, while our main focus in the past two weeks was to address Reviewer 2’s concern (trying to rebase MRT and Hellinger distance onto a state-of-the-art model). \n\n\nC.\nBoth $L$ need to be minimized now. Thanks for pointing it out. \n$q’$ is now defined. \nEq 6 has been corrected now. \n\n\nDiscussion of max margin:\n1. \\Delta needs to be nonnegative (otherwise Eq 13 will be wrong). We’ve changed the writing here.\n2. This has been corrected now.\n3. We’ve changed the writing here. \n4. Previously we made some mistakes when writing this part. Now we have stated it more clearly.\n5. In the new Eq 12, the rightmost part is an upper bound of \\Delta. Since \\Delta is always nonnegative, the upper bound should also be nonnegative. Thus we don’t think it necessary to add a “max(0,)”.\n\nThe typos have been corrected now. \n\nAgain, thanks for your helpful comments. " ]
[ 7, 7, 5, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1 ]
[ "iclr_2019_H1xD9sR5Fm", "iclr_2019_H1xD9sR5Fm", "iclr_2019_H1xD9sR5Fm", "rJxph-Uo3X", "ByeC4dDphX", "SJevAu1qhX", "rJxph-Uo3X" ]
iclr_2019_H1xQVn09FX
GANSynth: Adversarial Neural Audio Synthesis
Efficient audio synthesis is an inherently difficult machine learning task, as human perception is sensitive to both global structure and fine-scale waveform coherence. Autoregressive models, such as WaveNet, model local structure at the expense of global latent structure and slow iterative sampling, while Generative Adversarial Networks (GANs) have global latent conditioning and efficient parallel sampling, but struggle to generate locally-coherent audio waveforms. Herein, we demonstrate that GANs can in fact generate high-fidelity and locally-coherent audio by modeling log magnitudes and instantaneous frequencies with sufficient frequency resolution in the spectral domain. Through extensive empirical investigations on the NSynth dataset, we demonstrate that GANs are able to outperform strong WaveNet baselines on automated and human evaluation metrics, and efficiently generate audio several orders of magnitude faster than their autoregressive counterparts.
accepted-poster-papers
1. Describe the strengths of the paper. As pointed out by the reviewers and based on your expert opinion. - novel approach to audio synthesis - strong qualitative and quantitative results - extensive evaluation 2. Describe the weaknesses of the paper. As pointed out by the reviewers and based on your expert opinion. Be sure to indicate which weaknesses are seen as salient for the decision (i.e., potential critical flaws), as opposed to weaknesses that the authors can likely fix in a revision. - small grammatical issues (mostly resolved in the revision). 3. Discuss any major points of contention. As raised by the authors or reviewers in the discussion, and how these might have influenced the decision. If the authors provide a rebuttal to a potential reviewer concern, it’s a good idea to acknowledge this and note whether it influenced the final decision or not. This makes sure that author responses are addressed adequately. No major points of contention. 4. If consensus was reached, say so. Otherwise, explain what the source of reviewer disagreement was and why the decision on the paper aligns with one set of reviewers or another. The reviewers reached a consensus that the paper should be accepted.
train
[ "Skgsl-CCnm", "r1xjgxknaX", "rJxQcyy26m", "S1xnyJyhpm", "r1xhvRAoTX", "rke0sgTc2X", "BkgZIiNcn7" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes an approach that uses GAN framework to generate audio through modeling log magnitudes and instantaneous frequencies with sufficient frequency resolution in the spectral domain. Experiments on NSynth dataset show that it gives better results then WaveNet. The most successful deep generative models are WaveNET, Parallel WaveNet and Tacotran that are applied to speech synthesis, the method should be tested for speech synthesis and compared with WaveNet, Parallel WaveNet as well as Tacotran.\n\nFor WaveNet, the inputs are text features, but for Tacotran, the inputs are mel-spectrogram. Here the inputs are log magnitudes and instantaneous frequencies. So the idea is not that much new.\n\nGAN has been used in speech synthesis, see \nStatistical Parametric Speech Synthesis Incorporating Generative Adversarial Networks\nIEEE/ACM Transactions on Audio, Speech, and Language Processing ( Volume: 26 , Issue: 1 , Jan. 2018 )\n\nSo for this work, GAN's application to sound generation is not new.", "Thank you for your time and expertise in your review, we've addressed the key points below:\n\n> “...what do the authors see as 'high' resolution vis a vis audio signals?”\n\nIn the context of these audio datasets, we use “high” resolution to refer more to the dimensionality of the signal to model with a single latent vector, rather than the temporal resolution of the audio. The spectral “images” that GANSynth models, have 1024 frequencies, 128 timesteps, and 2 channels, [1024, 128, 2], which is roughly equivalent to a [295, 295, 3] RGB image. This puts the task comparable to some of the higher-resolution GANs for images.\n\n> “I am curious if we can adapt these ideas for recurrent generators as might appear in TTS problems.“\n\nWe agree that would be an interesting development. Recurrent generators, and even discriminators, would allow for variable-length sequences and variable-length conditioning as is common in speech synthesis or music generation beyond single notes. Our initial experiments at using recurring generators were not very successful, so we opted to adopt a better tested architecture for this study, but this is definitely still an area ripe for exploration.\n", "Thank you for your time and insight in your review. We've incorporated changes to the paper and respond to your main points below:\n\n> “Why didn't you train a WaveNet on the high-resolution instantaneous frequency representations?”\n\nThat’s an interesting avenue of research to explore. We trained WaveNets on the raw audio waveforms to provide strong and proven baseline models to compare against. Generating spectra with WaveNets is relatively unexplored and complicated by the high dimensionality at each timestep (number of frequencies * 2), each of which would have to be quantized in a traditional autoregressive treatment. It’s quite possible that 2 dimensional convolutions and autoregression could help overcome this, but then the model would most resemble pixelCNN and be far from a proven audio generation method for a strong baseline.\n\n> “I'm still not clear on unrolled phase which is central to this work. If you can, spend more time explaining this in detail, maybe with examples / diagrams? In figure 1, in unrolled phase, why is time in reverse?”\n\nApologies for the confusion. 
To help clarify, we’ve renamed the “unrolled” phase as “unwrapped” throughout the paper, which is in better alignment with standards in the literature and popular software packages such as Matlab and Numpy (for example https://www.mathworks.com/help/dsp/ref/unwrap.html). We have also added text further describing figure 1 (2nd to last paragraph of introduction) to help explain unwrapping as the process of adding 2*Pi to the wrapped phase whenever it crosses a phase discontinuity, so as to recover the monotonically increasing phase. The time derivative of this unwrapped phase is then the radial instantaneous frequency.\n\n> “Figure 1 & 2: label the x-axis as time. Makes it a lot easier to understand.\n\nThank you for the helpful pointer. We’ve added time axis labels to the figures and have also labeled the interpolation amounts for the interpolation figure.\n\n> “sentence before sec 2.2, and other small grammatical mistakes. Reread every sentence carefully for grammar.”\n\nWe have read through the paper several times to revise grammatical mistakes including the sentence you highlighted.\n\n> “Figure 5 is low-res. Please fix. All the other figures are beautiful - nice work!”\n\nThanks for catching this! We’ve updated the figure to be high resolution.\n\n", "Thank you for your review. We've done our best to address your concerns with paper revisions and in the comments below:\n\n> “The method should be tested for speech synthesis and compared with WaveNet, Parallel WaveNet as well as Tacotron”\n\nWe agree that it would be very interesting to adapt these methods to speech synthesis tasks, but believe that this lies outside the scope of this initial paper on adversarial audio synthesis. As we note in AnonReviewer2’s comments, adapting the current methods to incorporate variable-length conditioning and generate variable-length sequences is a non-trivial extension and requires further research. In the context of this study, we’ve done our best to provide strong autoregressive baselines from state-of-the-art implementations of WaveNet models (including 8-bit and 16-bit output representations). \n\nThank you for highlighting that this is an important direction for this research. We have updated the text of the paper with a paragraph highlighting the importance and difficulty of pushing the current methods forward for more general audio synthesis tasks. \n", "We would like to thank all the reviewers for their thoughtful and helpful reviews. In addition to answering the points of each individual reviewer below, we also want to highlight several additions we have made to the appendix to hopefully improve clarity and reproducibility.\n\n* An additional figure displaying spectrograms for a Bach Prelude synthesized both with and without latent interpolation, the audio for which can be found in the supplemental. \n* Substantial experimental details to improve reproducibility, including detailed architecture parameters and training procedures.\n* An additional NDB figure highlighting the lack of diversity of WaveNet baseline samples. \n* A table of additional baseline comparisons, justifying the use of WaveGAN and 8-bit WaveNet as the strongest baselines. \n", "This paper proposes a strategy to generate audio samples from noise with GANs. 
The treatment is analogous to image generation with GANs, with the emphasis being the changes to the architecture and representation necessary to make it possible to generate convincing audio that contains an interpretable latent code and is much faster than an autoregressive Wavenet based model (\"Neural Audio Synthesis of Musical Notes with WaveNet AutoEncoders\" - Engel et al (2017)). Like the other two related works (WaveGAN - \"Adversarial Audio Synthesis\" - Donahue et al 2018) and the Wavenet model above, it uses the NSynth dataset for its experiments. \n\nMuch of the discussion is on the representation itself - in that, it is argued that using audio (WaveGAN) and log magnitude/phase spectrograms (PhaseGAN) produce poorer results as compared with the version with the unrolled phase that they call 'IF' GANs, with high frequency resolution and log scaling to separate scales. \n\nThe architecture of the network is similar to the recently published paper (Donahue et al 2018), with convolutions and transpose convolutions adapted for audio. However, there seem to be two important developments. The current paper uses progressive growing of GANs (the current state of the art for producing high resolution images), and pitch conditioning (Odena et al, where labels are used to help training dynamics). \n\nFor validation, the paper presents several metrics, with the recently proposed \"NDB\" metric figuring in the evaluations, which I think is interesting. The IF-Mel + high frequency resolution model seems to outperform the others in most of the evaluations, with good phase coherence and interpolation between latent codes. \n\nMy thoughts: \nOverall, it seems that the paper's contributions are centered around the representation (with \"IF-Mel\" being the best). The architecture itself is not very different from commonly used DCGAN variants - the authors say that using PGGAN is desirable, but not critical, and the use of labels from Odena et al. \n\nMany of my own experiments with GANs were plagued by instability (especially at higher resolution) and mode collapse problems without special treatment (largely documented, such as adding noise, adjusting learning rates and so forth). To this end, what do the authors see as 'high' resolution vis a vis audio signals? \n\nI am curious if we can adapt these ideas for recurrent generators as might appear in TTS problems. \n\nI rate this paper as an accept since this is one of the few existing works that demonstrate successful audio generation from noise using GANs, and owing to its novelty in exploring representation for audio. \n", "This is an exciting paper with a simple idea for better representing audio data so that convolutional models such as generative adversarial networks can be applied. The authors demonstrate the reliability of their method on a large dataset of acoustic instruments and report human evaluation metrics. I expect their proposed method of preprocessing audio to become standard practice.\n\nWhy didn't you train a WaveNet on the high-resolution instantaneous frequency representations? In addition to conditioning on the notes, this seems like it would be the right fair comparison. \n\nI'm still not clear on unrolled phase which is central to this work. If you can, spend more time explaining this in detail, maybe with examples / diagrams? In figure 1, in unrolled phase, why is time in reverse?\n\nSmall comments:\n\n- Figure 1 & 2: label the x-axis as time. Makes it a lot easier to understand.\n\n- I appreciate the plethora of metrics. 
The inception score you propose is interesting. Very cool that the number of statistically-different bins tracks human eval!\n\n- sentence before sec 2.2, and other small grammatical mistakes. Reread every sentence carefully for grammar. \n\n- Figure 5 is low-res. Please fix. All the other figures are beautiful - nice work!" ]
[ 6, -1, -1, -1, -1, 7, 8 ]
[ 3, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2019_H1xQVn09FX", "rke0sgTc2X", "BkgZIiNcn7", "Skgsl-CCnm", "iclr_2019_H1xQVn09FX", "iclr_2019_H1xQVn09FX", "iclr_2019_H1xQVn09FX" ]
iclr_2019_H1xaJn05FQ
Sliced Wasserstein Auto-Encoders
In this paper we use the geometric properties of the optimal transport (OT) problem and the Wasserstein distances to define a prior distribution for the latent space of an auto-encoder. We introduce Sliced-Wasserstein Auto-Encoders (SWAE), that enable one to shape the distribution of the latent space into any samplable probability distribution without the need for training an adversarial network or having a likelihood function specified. In short, we regularize the auto-encoder loss with the sliced-Wasserstein distance between the distribution of the encoded training samples and a samplable prior distribution. We show that the proposed formulation has an efficient numerical solution that provides similar capabilities to Wasserstein Auto-Encoders (WAE) and Variational Auto-Encoders (VAE), while benefiting from an embarrassingly simple implementation. We provide extensive error analysis for our algorithm, and show its merits on three benchmark datasets.
accepted-poster-papers
The paper proposed to add the sliced-Wasserstein distance between the distribution of the encoded training samples and a samplable prior distribution to the auto-encoder (AE) loss, resulting in a model named sliced-Wasserstein AE. The difference compared to the Wasserstein AE (WAE) lies in the usage of the sliced-Wasserstein distance instead of GAN or MMD-based penalties. The idea of the paper is interesting, and a theoretical and an empirical analysis supporting the approach are presented. As reviewer 1 noticed, „the advantage of using sliced Wasserstein distance is twofold: 1) parametric-free (compared to GANs); 2) almost hyperparameter-free (compared to the MMD with RBF kernels), except setting the number of random projection bases.“ However, the empirical evaluation in the paper and in the concurrent ICLR submission on Cramer-Wold AEs that the authors refer to shows no clear practical advantage over the WAE, which leads to better results at least regarding the FID score. On the other hand, the Cramer-Wold AE is based on the ideas presented in this paper (which was previously available on arXiv), proving that the paper presents interesting ideas which are of value to the community. Therefore, the paper is a bit borderline, but I recommend accepting it.
train
[ "HyxabNG1xE", "rklTDDbky4", "H1lo8vWJyE", "ByeqWvbJyN", "BJg2A5pYn7", "rkg8bhIpRQ", "r1eh1O0KRm", "Skx63wCtA7", "ryeF5vAKAX", "rklRqLxY2m", "r1eZQY4dhm" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Dear Reviewers, \n\nWe are certainly grateful for your time and careful evaluation of our work. We did our best to respond to the issues you raised, including extending our experimental results along with theoretical clarifications. \n\nWe would greatly appreciate if you could take a second look at our paper and reevaluate based on the changes, our responses, and the additional information we provided regarding the “Cramer-Wold AutoEncoder” (CWAE) paper https://openreview.net/forum?id=rkgwuiA9F7 .\n\nThank you in advance for your time. ", "We also just became aware of another submission to ICLR2019, “Cramer-Wold AutoEncoder” (CWAE) https://openreview.net/forum?id=rkgwuiA9F7 , in which the authors cite our work and provide an extensive comparison of our method (SWAE) to WAE and their proposed framework (CWAE). This comparison was possible as we released our code in April 2018. The authors of CWAE kindly provide further quantitative results (See Figure 4 of their paper) for SWAE and WAE on CelebA (and for CIFAR10 in the supplementary material) comparing the FID score, and Mardia’s skewness and kurtosis of models WAE, SWAE and CWAE, on the CelebA test set. CWAE as the authors put it “can be seen as a borderline model between SWAE and WAE-MMD”, “which has a closed-form for the distance of a sample from standard multivariate normal distribution.” \n\nWe hope that the extensive analysis provided by the authors of CWAE, in addition to our extended experimental section and numerical analysis, to further convince the reviewer on the merit of our proposed method. ", "We also just became aware of another submission to ICLR2019, “Cramer-Wold AutoEncoder” (CWAE) https://openreview.net/forum?id=rkgwuiA9F7 , in which the authors cite our work and provide an extensive comparison of our method (SWAE) to WAE and their proposed framework (CWAE). This comparison was possible as we released our code in April 2018. The authors of CWAE kindly provide further quantitative results (See Figure 4 of their paper) for SWAE and WAE on CelebA (and for CIFAR10 in the supplementary material) comparing the FID score, and Mardia’s skewness and kurtosis of models WAE, SWAE and CWAE, on the CelebA test set. CWAE as the authors put it “can be seen as a borderline model between SWAE and WAE-MMD”, “which has a closed-form for the distance of a sample from standard multivariate normal distribution.” \n\nWe hope that the extensive analysis provided by the authors of CWAE, in addition to our extended experimental section and numerical analysis, to further convince the reviewer on the merit of our proposed method. ", "We thank the reviewer for the feedback and for re-evaluating the score. \n\nRegarding the comments on Table 1 and 2: \n\n(1)\tThe “Nowozin Trick” as named in the WAE-GAN code (on Github) by Tolstikhin et al. is the idea proposed in the Adversarial Variational Bayes [AVB] paper and it goes as follows. To perform adversarial discrimination between the encoded distribution Pz and the prior distribution Qz, one needs an optimal discriminator D_JS(Pz,Qz), (e.g. based on the Jensen-Shannon divergence), which is give by Dopt(x)=log(qz(x))- log(pz(x)) where for analytic qz we exactly know qz(x) (i.e. for a Gaussian qz). Therefore, it is valid to add log(qz(x)) explicitly to the discriminator and let it learn only the remaining term, log(pz(x)). In short, although this trick is useful for training, it requires analytic knowledge of the latent prior. 
Since a primary point of our paper is that SWAE does not require any such knowledge, we believe it was fair to disable it, so as to facilitate a fair comparison.\n\n(2)\tAs for the FID scores, we used the WAE-GAN code, and ran it without the `Nowozin Trick' on the CelebA dataset and are reporting the results we got from the code. The discrepancy could be due to several mismatches between the runs, including the so-called ‘Nowozin Trick’, the number of training iterations, and the preprocessing of the data. \n\n(3)\tRegarding the log likelihood scores, we measure the likelihood of the encoded samples \phi(x), x ~ pX, to be generated from qZ, which is calculated as follows: E_pX[ log(qZ(\phi(x))) ]. We emphasize that this measure is only valid for prior distributions, qZ, with analytic form. \n\n\nWe also just became aware of another submission to ICLR2019, “Cramer-Wold AutoEncoder” (CWAE) https://openreview.net/forum?id=rkgwuiA9F7 , in which the authors cite our work and provide an extensive comparison of our method (SWAE) to WAE and their proposed framework (CWAE). This comparison was possible as we released our code in April 2018. The authors of CWAE kindly provide further quantitative results (See Figure 4 of their paper) for SWAE and WAE on CelebA (and for CIFAR10 in the supplementary material) comparing the FID score, and Mardia’s skewness and kurtosis of models WAE, SWAE and CWAE, on the CelebA test set. CWAE, as the authors put it, “can be seen as a borderline model between SWAE and WAE-MMD”, “which has a closed-form for the distance of a sample from standard multivariate normal distribution.” \n\nWe hope that the extensive analysis provided by the authors of CWAE, in addition to our extended experimental section and numerical analysis, will further convince the reviewer of the merit of our proposed method. \n\n[AVB] Mescheder, L., Nowozin, S. and Geiger, A., 2017. Adversarial variational bayes: Unifying variational autoencoders and generative adversarial networks. arXiv preprint arXiv:1701.04722.\n", "The authors propose a new autoencoding algorithm for unsupervised generative modeling which they call Sliced Wasserstein Autoencoders (SWAE). SWAE minimizes a reconstruction cost (measured with respect to the non-negative cost function c(x,x') defined for pairs of input images x, x'), regularized by a penalty measuring a discrepancy between the prior distribution over the latent space qz and the push-forward pz of the unknown data distribution through the deterministic encoder. The authors present an extensive theoretical argument supporting the choice of this objective and a number of empirical results performed on MNIST, LSUN bedrooms, and Celeba. \n\nEven though this paper raises several interesting questions, I have several major issues with it:\n****\n**** 1. Claim around Equation 3 is not proved.\n****\nAll the sections before 2.3 provide a rather detailed theoretical argument meant to support the choice of the SWAE objective appearing in Eq. 14 of Section 2.3. Here I want to point out a mathematical inaccuracy in the authors' discussions, which may render the whole argument questionable. In short, the authors claim around Eq. 3 that \"Eq. 3 is equivalent to Theorem 1 of [1] for deterministic encoder-decoder pairs\" and don't provide any proof for this nontrivial fact. \n\nThe following is based on some quick derivations I did while reviewing. 
\n\nRecall that in the current paper Px is the data distribution, Py is the push-forward of Px through the superposition of the encoder \phi and decoder \psi (in other words Py is a distribution of \psi(\phi(X)) when X is distributed according to Px). The authors state that:\n \inf_{\phi, \psi} Wc(Px, Py)\n is equivalent to\n (*) \inf_{\phi, \psi} E_{X \sim Px}[ c(X, \psi(\phi(X))) ].\nIn other words, the authors state that using Theorem 1 of [1] they are able to show that minimizing a c-optimal transport distance between Px and Py (which is parametrized by \psi and \phi) is *equivalent* to an unconstrained optimization problem appearing on the r.h.s. of Equation 3. \n\nNow, Theorem 1 of [1] referenced by the authors states that if Pz is any prior distribution over the latent space and \psi * Pz is its push-forward through the deterministic decoder \psi, then the optimal transport between Px and the resulting latent variable model \psi * Pz can be equivalently written as:\n (**) Wc(Px, \psi * Pz) = \inf_{f such that f * Px = Pz} E_{X \sim Px}[ c(X, \psi(f(X))) ].\nImportantly, note how the right hand side of (**) contains a constrained optimization over an auxiliary (encoder) function f, which does not appear at all in the left hand side. If the authors were to apply (**) directly, they would arrive at the following statement:\n \inf_{\phi, \psi} Wc(Px, Py)\n is equivalent to\n (***) \inf_{\phi, \psi} \inf_{f such that f * Px = \phi * Px} E_{X \sim Px}[ c(X, \psi(f(X))) ].\nFinally, comparing (*) stated by the authors and (***) obtained above, we see that (*) is obtained by selecting one particular function f = \phi from the set {f such that f * Px = \phi * Px}. Meanwhile, this set in general may contain multiple other functions f and as a result this only shows that (*) >= (***) (as we replace \inf_f with one particular choice of f). However, in this case, I think it is indeed possible to show that (*) = (***). Imagine (***) has a global minimum at (\psi_0,\phi_0, f_0), that is, the global optimum of (***) equals E_{X \sim Px}[ c(X, \psi_0(f_0(X))) ]. The same value can be achieved by (*) by setting \phi = f_0. QED. \n\nOnce again, these are my preliminary derivations and they need to be checked. But it looks like the claim of the authors is indeed true. \n\n****\n**** 2. Empirical evidence is not convincing. ****\n****\nThe main topic of the paper is unsupervised generative modeling, and the authors claim certain improvements in this field compared to the previous literature. Even though there are no ultimate evaluation metrics available in the field, recently researchers have started supporting their methods with several metrics, including FID scores. By now for most of the widely used datasets the state-of-the-art FID scores are well known. In all the experiments the authors provide pictures and interpolations (last row of Fig. 3, Fig. 5) without numbers. I would say nowadays presenting pictures is not enough (being too subjective) and at least some objective numbers (preferably FID) capturing the quality of generated samples should be reported. The authors go into detailed measurements of discrepancy between the aggregate posterior pz and the prior qz, but it is not clear how this affects the actual sample generation. 
Finally, it is not clear why the authors compare only to WAE-GAN and did not consider WAE-MMD, which is free of adversarial training (in contrast to WAE-GAN) and thus has stable training and does not involve the extra computation of updating the discriminator (as noted by authors on page 10).\n\n[1] Bousquet et al., 2017.", "I thank the authors for their reply. \n\nAfter giving it some thought, I indeed agree that the issue with one of the proofs I mentioned earlier does not affect the main message of the paper (and moreover now seems to be completely fixed).\n\nA couple of comments on Table 1 and Table 2:\n(1) What is Nowozin's trick? Is it important to mention it in the table legend?\n(2) In Table 2 the FID score of WAE-GAN is reported to be 53. The authors also say explicitly that they use architectures similar to the \"Wasserstein Auto-Encoders\" paper, which reports score 42 for WAE-GAN trained on the same CelebA dataset. Is there a confusion?\n(3) How exactly do you evaluate the log likelihood in Table 1? If I am not mistaken, you are trying to evaluate either E_qz[ log pz(Z) ] or E_pz[ log qz(Z) ]. Even if qz is Gaussian, pz is intractable. ", "We thank the reviewer for a positive evaluation of our work and the constructive feedback. \n \nAs precisely pointed out by the reviewer, our paper provides:\n1. The theoretical grounds for using sliced Wasserstein distance as a metric for distributions in deep learning applications\n2. Avoiding adversarial training (as in GANs) or the choice of an appropriate kernel with its corresponding data-dependent parameter (as in MMDs)\n3. Providing a numerical solution, which is compatible with SGD optimization, and a thorough error analysis of this numerical solution\n \nRegarding the double-blind comment, the ICLR’19 website specifically states that:\n \n“Submissions that are identical (or substantially similar) to versions that have been previously published, or accepted for publication, or that have been submitted in parallel to other conferences or journals, are not allowed and violate our dual submission policy. However, papers that cite previous related work by the authors and papers that have appeared on non-peered reviewed websites (like arXiv) or that have been presented at workshops (i.e., venues that do not have a publication proceedings) do not violate the policy. The policy is enforced during the whole reviewing process period.”\n \nWe would also like to point out that the arXiv submission is significantly different from our ICLR submission as it does not contain:\n \n1. Error analysis on our numerical method\n2. Quantitative performance measures on the three datasets\n \nand it lacks the theoretical discussion presented in the current submission.\n \nOnce again, we thank the reviewer for a precise evaluation of our work and recognizing the novelties of the proposed framework.\n", "We would like to thank the reviewer for the thorough evaluation of our paper.\n \n==> The innovation is a bit on the incremental level\n \nRegarding the innovation comment, here we reiterate our specific contributions in this paper.\n1. We introduce the sliced-Wasserstein distance (SWD) as a measure for minimizing the difference between distributions in the latent space of an auto-encoder. Our proposed method:\na. Does not require adversarial training as in WAE-GAN\nb. 
Does a better job at matching the latent distribution of the input data to the prior distribution (See Figures 3 and 12) on MNIST, and provides consistently high performance on CelebA and LSUN Bedroom datasets, and has the best likelihood scores.\nc. Does not require a choice of kernel and the corresponding kernel parameters (e.g., spread in RBF) as in WAE-MMD. \n2. We provide a thorough numerical section with error analysis that supports the effectiveness of the sliced-Wasserstein distance. Through empirical results, we demonstrate that SWD is a good approximation to the true Wasserstein distance.\n3. The proposed method has a very simple, yet elegant, numerical implementation, and provides a differentiable loss function, thereby permitting the application of stochastic gradient descent. \n \n \n==> The empirical results are fairly weak and the results of WAE-MMD are not reported.\n \nWe agree with the reviewer that the submitted manuscript lacked extensive quantitative results. We have updated our experimental section to provide more quantitative analysis. Specifically,\n \n1. We have added comparison with WAE-MMD for all datasets\n2. We have updated Table 1 to report results (i.e., SWD(p_Z,q_Z), SWD(p_X,q_X), and NLL(p(z|q_Z))) for SWAE, WAE-GAN, WAE-MMD (IMQ), and WAE-MMD (RBF)\n3. We have reported the FID scores for the CelebA and LSUN datasets\n \nWith regards to the use of MMD as a measure of discrepancy between the latent distributions, we would like to point out that:\n1. MMD is sensitive to the choice of the kernel (See for example Tolstikhin et al. 2017, in which the authors say: “We tried WAE-MMD with the RBF kernel but observed that it fails to penalize the outliers of $Q_Z$ because of the quick tail decay. If the codes $\tilde{z} = \mu_{phi}(x)$ for some of the training points $x \in \mathcal{X}$ end up far away from the support of $P_Z$ (which may happen in the early stages of training) the corresponding terms in the U-statistic [i.e. the RBF kernel] will quickly approach zero and provide no gradient for those outliers.”)\n2. MMD’s computational complexity for each iteration is $\mathcal{O}(N^2)$ where $N$ is the batch size (as opposed to $\mathcal{O}(N\log N)$ of SWAE in the worst case scenario)\n3. MMD has additional kernel parameters (e.g., the spread of the RBF kernel) to be tuned for the dataset that could significantly affect the performance of the system.\n \nWe would like to thank the reviewer for a thorough assessment of our work and hope that we have addressed the concerns.", "We sincerely thank the reviewer for the exemplar review, and appreciate the depth of the provided feedback. Both points raised by the reviewer are valid and precise, and we agree with them.\n \n**Theory** Regarding the theoretical discrepancy around Equation (3), the concern comes from the fact that calculation of the Wasserstein distance (for d>1) requires solving an optimization to find an optimal coupling (i.e., a transport plan, as in Kantorovich’s formulation) or an optimal transport map (from Monge’s formulation of the problem). Therefore, minimizing the Wasserstein distance between p_X and p_Y, with respect to the encoder and decoder, \phi and \psi, should also contain a minimization over the set of transport plans or the transport map. This leads to the reviewer’s point that W_c(p_X,p_Y) \leq E_{x \sim p_X} c(x,\psi(\phi(x))). The r.h.s. 
of the inequality is the transport cost (using the parlance of optimal mass transportation) with respect to the transport plan \gamma(x,y)=\delta(y-\psi(\phi(x)))p_X(x) (induced by the encoder and decoder) which is not necessarily the optimal transport plan between p_X and p_Y, and hence the r.h.s. is greater than or equal to the Wasserstein distance (i.e., the optimal transportation cost). \n \nThis was an oversight and is corrected in the updated manuscript. In addition, we have added an extensive theoretical discussion on this matter to the supplementary material. We, however, emphasize that our main contribution is on measuring the discrepancy between p_Z and q_Z and therefore the corrections with respect to Equation (3) do not influence the main message of the paper. \n \n**Empirical evidence** We agree with the reviewer that the submitted manuscript lacked more quantitative comparison like the FID scores for the generated samples and have updated our experimental section to provide more quantitative analysis. Specifically,\n \n1. We have added comparison with WAE-MMD for all datasets\n2. We have updated Table 1 to report results (i.e., SWD(p_Z,q_Z), SWD(p_X,q_X), and NLL(p(z|q_Z))) for SWAE, WAE-GAN, WAE-MMD (IMQ), and WAE-MMD (RBF)\n3. We have reported the FID scores for the CelebA and LSUN datasets\n \nHowever, we emphasize that the main point of our paper is not “unsupervised generative modeling”, but rather having control over the distribution of the embedded data in the latent space. The discrepancy between p_Z and q_Z is crucial to many applications, including transfer learning and domain adaptation, but does not necessarily increase the generated sample qualities. In fact, the updated Figures 3 and 12 indicate that creating a better match between p_Z and q_Z can, in some cases, impose too strong a constraint on the decoder, which reduces the match between p_X and p_Y (i.e., results in lower quality generated samples).\n \nFinally, we have added the comparison with WAE-MMD, which is a kernel-based method. MMD is an effective way of measuring distributional discrepancy between p_Z and q_Z; however, it has the following downsides:\n1. It is sensitive to the choice of the kernel (See for example Tolstikhin et al. 2017, in which the authors say: “We tried WAE-MMD with the RBF kernel but observed that it fails to penalize the outliers of $Q_Z$ because of the quick tail decay. If the codes $\tilde{z} = \mu_{phi}(x)$ for some of the training points $x \in \mathcal{X}$ end up far away from the support of $P_Z$ (which may happen in the early stages of training) the corresponding terms in the U-statistic [i.e. the RBF kernel] will quickly approach zero and provide no gradient for those outliers.”)\n2. MMD’s computational complexity for each iteration is $\mathcal{O}(N^2)$ where $N$ is the batch size (as opposed to $\mathcal{O}(N\log N)$ of SWAE in the worst case scenario)\n3. MMD has additional kernel parameters (e.g., the spread of the RBF kernel) to be tuned for the dataset that could significantly affect the performance of the system. In addition, the IMQ kernel implicitly requires knowledge of the latent distribution q_Z (through the C parameter), whereas our approach does not.\n \nAgain, we would like to thank the reviewer for the very precise evaluation of our work and setting a high bar for the reviewing process.\n", "This paper proposes training generative models with Wasserstein auto-encoders. 
It uses the sliced-Wasserstein distance to measure the dissimilarity between p_z and q_z.\n\nStrengths:\n1. This paper is easy to read. \n2. Concepts are introduced clearly. \n\nMy major comments are the following:\n1. The innovation is a bit on the incremental level, especially given the results from WAE (Tolstikhin, ICLR18). The training objective is the same as Eq(4) in the WAE paper. The only difference is that the dissimilarity measure between p_z and q_z used in this paper is the sliced-Wasserstein distance, while WAE used GAN/MMD-based penalties. The advantage of using the sliced-Wasserstein distance is not clear to me either. \n\n2. The empirical results are fairly weak. The authors may consider reporting the sample qualities (e.g. FID) for all the methods. \n\n3. The results of WAE-MMD are not reported.\n", "This paper presents an extension of the Wasserstein autoencoder (WAE) by modifying the regularization term in the learning objective of the variational autoencoder. This term measures the divergence between the distribution of the encoded training samples and the samplable prior distribution. The modification is based on the sliced-Wasserstein distance where the distance between two distributions is measured through slicing or projecting the high-dimensional distributions into one-dimensional marginal distributions. As a result, a closed-form solution to the integral in Eq. (9) is obtained via a numerical method. The adversarial learning in WAE, designed to fulfill the calculation of high-dimensional distance, can be avoided. In general, this is an interesting work that introduces the new idea of the sliced-Wasserstein distance.\n\nRemarks:\n1. A theoretical paper which addresses how and why the sliced-Wasserstein distance between p_z and q_z is reasonable to build a new variant of variational auto-encoder.\n2. Reformulating the Wasserstein distance into Monge primal formulation with the assumption based on the property of diffeomorphic mapping.\n3. As a result, the implementation based on the unstable adversarial training or the maximum mean discrepancy (MMD) training can be avoided. Computational attractiveness is assured. MMD needs the choice of kernel function which is basically a data-dependent design parameter.\n4. Provide an empirical numerical solution which is compatible with SGD optimization.\n5. The key idea of this paper is shown in Eq. (14). The learning objective is expressed in a deterministic way. However, the style of the objective in Eq. (14) involves stochastic learning.\n6. This paper is not actually double-blind reviewed. Authors have exposed their identities in arXiv.\n" ]
[ -1, -1, -1, -1, 6, -1, -1, -1, -1, 4, 6 ]
[ -1, -1, -1, -1, 4, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2019_H1xaJn05FQ", "r1eh1O0KRm", "Skx63wCtA7", "rkg8bhIpRQ", "iclr_2019_H1xaJn05FQ", "ryeF5vAKAX", "r1eZQY4dhm", "rklRqLxY2m", "BJg2A5pYn7", "iclr_2019_H1xaJn05FQ", "iclr_2019_H1xaJn05FQ" ]
iclr_2019_H1xipsA5K7
Learning Two-layer Neural Networks with Symmetric Inputs
We give a new algorithm for learning a two-layer neural network under a very general class of input distributions. Assuming there is a ground-truth two-layer network y = A \sigma(Wx) + \xi, where A, W are weight matrices, \xi represents noise, and the number of neurons in the hidden layer is no larger than the input or output dimension, our algorithm is guaranteed to recover the parameters A, W of the ground-truth network. The only requirement on the input x is that its distribution is symmetric (x and -x are identically distributed), which still allows highly complicated and structured input. Our algorithm is based on the method-of-moments framework and extends several results in tensor decompositions. We use spectral algorithms to avoid the complicated non-convex optimization in learning neural networks. Experiments show that our algorithm can robustly learn the ground-truth neural network with a small number of samples for many symmetric input distributions.
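To make the setup concrete, here is a small sketch (ours, for illustration only; the dimensions, mixture-style input, and noise scale are arbitrary choices) of sampling data from such a ground-truth network with a symmetric, non-Gaussian input:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 8, 8, 10000
W = rng.normal(size=(k, d))   # first-layer weights
A = rng.normal(size=(k, k))   # output-layer weights
c = rng.normal(size=d)

# symmetric input: a uniform random sign flip makes x and -x equally
# likely, even though each mixture component is far from Gaussian
s = rng.choice([-1.0, 1.0], size=(n, 1))
x = s * (c + 0.1 * rng.normal(size=(n, d)))

# y = A sigma(Wx) + xi, with ReLU activation and small Gaussian noise
y = np.maximum(x @ W.T, 0.0) @ A.T + 0.01 * rng.normal(size=(n, k))
```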
accepted-poster-papers
Although the paper considers a somewhat limited problem of learning a neural network with a single hidden layer, it achieves a surprisingly strong result that such a network can be learned exactly (or well approximated under sampling) under weaker assumptions than recent work. The reviewers unanimously recommended the paper be accepted. The paper would be more impactful if the authors could clarify the barriers to extending the technique of pure neuron detection to deeper networks, as well as the barriers to incorporating bias terms in order to relax the symmetry assumption.
train
[ "SkgfHt_PRX", "BygkLw_vAm", "HJlQpUuwC7", "H1eC-EI6n7", "rJxkjtHThX", "H1xu6gRmhm" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks a lot for your efforts in the review process. We really appreciate your valuable suggestions and detailed comments.\n\n-generalize technique to shifted input or bias term.\n\nOur current technique does not generalize to the case where the input is shifted or there is a bias term in the output. We think this is a very interesting open question and we are now discussing that in the conclusion. Note that many previous works (Goel et al. 2017, Ge et al. 2017) also do not handle bias terms. It has been empirically observed that for many networks fixing the bias to be 0 only makes the performance slightly worse.\n\n-generalize purifying idea to general depth neural network.\n\nOur idea basically removes the last linear layer given good understanding of what happens in previous layers. If there are results that can learn a p-layer network, it is possible that similar ideas could allow it to learn a p+1-layer network whose last layer is linear. However, there are no general algorithms for learning a neural network under symmetric input for p > 1, so we leave this as an open problem.\n\n-sample complexity of the algorithm\n\nWe've added a third plot in Figure 2 of the performance of our algorithm as a function of the dimension of A and W. One thing to keep in mind is that the number of parameters also grow quadratically with the dimension of A and W, so we expect the squared error to grow quadratically with the dimension of A and W. To account for this phenomenon, we've plotted the square root of the error normalized by the dimension of A and W so as to more clearly illustrate the extent to which our algorithm's actual performance deteriorates for high-dimensional A and W. \n\nAs illustrated by the flatness of the error curves, the performance of our algorithm remains stable as the dimension of A and W grows from 10 to 32. Note that this is much better than what our theory predicts and obtaining tighter sample complexity is an open problem. We believe that truly determining the exponent of our algorithm's asymptotic performance will necessitate evaluating the algorithm with much larger A and W. Such an experiment will require considerable computational resources and is beyond the present scope of the work.", "Thank you very much for reviewing our paper. We really appreciate your positive reviews and insightful comments.\n\nThanks for pointing out several related papers, we have already added these papers in our updated version. As you mentioned in the review, our technique cannot immediately apply to the case where the dimension of the output is smaller than the number of hidden units. This is a very interesting question, and we leave it as an open problem in the paper.", "Thanks a lot for your positive feedback! It’s definitely an important question to study general depth neural network, as we also discuss in the conclusions. We hope that our technique could be further improved to help to learn deeper neural networks.", "This paper pushes forward our understanding of learning neural networks. The authors show that they can learn a two-layer (one hidden layer) NN, under the assumption that the input distribution is symmetric. The authors convincingly argue that this is not an excessive limitation, particularly in view of the fact that this is intended to be a theoretical contribution. Specifically, the main result of the paper relies on the concept of smoothed analysis. 
It states that given data generated from a network, the input distribution can be perturbed so that their algorithm then returns an epsilon solution. \n\nThe main machinery of this paper is a tensor approach (method of moments) that allows them to obtain a system of equations that give them their “neuron detector.” The resulting quadratic equations are linearized through the standard lifting approach (making a single variable in the place of products of variables). \n\nThis is an interesting paper. As with other papers in this area, it is somewhat difficult to imagine that the results would extend to tell us about guarantees on learning a neural network of general depth. Nevertheless, the tools and ideas used are of interest, and while already quite difficult and sophisticated, perhaps do not yet seem stretched to their limits. ", "This paper studies the problem of learning the parameters of a two-layer (or one-hidden-layer) ReLU network $y=A\\sigma(Wx)$, under the assumption that the distribution of $x$ is symmetric. The main technique here is the \"pure neuron detector\", which is a high-order moment function of a vector. It can be proved that the pure neuron detector is zero if and only if the vector is equal to a row vector of A^{-1}. Hence, we can \"purify\" the two-layer neural network into independent one-layer neural networks and solve the problem easily.\n\nThis paper proposes interesting ideas, supported by mathematical proofs. The paper contains analysis of the algorithm itself, analysis of finding z_i's from span(z_i z_i^T), and analysis of the noisy case. \nThis paper is reasonably well-written in the sense that the main technical ideas are easy to follow, but there are several grammatical errors, some of which I list below. I list my major comments below:\n\n1) [strong assumptions] The result critically depends on the fact that $x$ is symmetric around the origin and the requirement that the activation function is a ReLU. Lemma 1, 2, 3 and Lemma 6 in the appendix are based on these two assumptions. For example, the algorithm fails if $x$ is symmetric around a number other than zero or there is a bias term (i.e. $y=A \\sigma(Wx+b) + b'$ ). These strong assumptions significantly weaken the general message of this paper. Add a discussion on how to generalize the idea to more general cases, at least when the bias term is present. \n\n2) [sample efficiency] Tensor decomposition methods tend to suffer in sample efficiency, requiring a large number of samples. In the proposed algorithm (Algorithm 2), estimates of $E[y \\otimes x^{\\otimes 3}]$ and $E[y \\otimes y \\otimes (x \\otimes x)]$ are needed. How does the sample complexity scale with the dimension? The theory in this paper suggests a poly(d, 1/\\epsilon) sample efficiency, but the exponent of the poly is not known. In Section 4.1, the authors talk about the sample efficiency and claim that the sample efficiency is 5x the number of parameters, but this does not match the result in Figure 2. In the left of Figure 2, when d=10, we need no more than 500 samples to make the error of W and A very small, but in the right, when d=32, 10000 samples cannot give a very small error of W and A. I suspect that the required number of samples to achieve small error scales quadratically in the number of parameters in the neural network. Some theoretical or experimental investigation to identify the exponent of the polynomial on d is in order. 
Also, perhaps plotting in log-y is better for Figure 2.\n\n3) The idea of \"purifying\" the neurons has the potential to provide new techniques to analyze deeper neural networks. Explain how one might use the \"purification\" idea for deeper neural networks and what the main challenges are. \n\nMinor comments: \n\n\"Why can we efficiently learn a neural network even if we assume one exists?\" -> \"The question of whether we can efficiently learn a neural network still remains generally open, even when the data is drawn from a neural network.\"\n\n\"with simple input distribution\" -> \"with a simple input distribution\"\n\n", "This is a strong theory paper and I recommend acceptance.\n\nPaper Summary:\nThis paper studies the problem of learning a two-layer (one-hidden-layer) neural network where both the output layer and the first layer are unknown. In contrast to previous papers in this line, which require the input distribution to be standard Gaussian, this paper only requires that the input distribution be symmetric. This paper proposes an algorithm which only uses polynomially many samples and runs in polynomial time. \nThe algorithm proposed in this paper is based on the method-of-moments framework and several new techniques that are specially designed to exploit this two-layer architecture and the symmetric input assumption.\nThis paper also presents experiments to illustrate the effectiveness of the proposed approach (though in the experiments, the algorithm is slightly modified).\n\nNovelty:\n1. This paper extends the key observation by Goel et al. 2018 to higher orders (Lemma 6). I believe this is an important generalization, as it is very useful in studying multi-neuron neural networks.\n2. This paper proposes the notion of a distinguishing matrix, which is a natural concept for studying multi-neuron neural networks at the population level.\n3. The “Pure Neuron Detector” procedure is very interesting, as it reduces the problem of learning a group of weights to a much easier problem, learning a single weight vector. \n\nClarity:\nThis paper is well written.\n\nMajor comments:\nMy major concern is the requirement on the output dimension. In the main text, this paper assumes the output dimension is the same as the number of neurons, and in the appendix, the authors show this condition can be relaxed to the output dimension being larger than the number of neurons. This is a strong assumption, as in practice, the output dimension is usually 1 for many regression problems or the number of classes for classification problems. \nFurthermore, this assumption is actually crucial for the algorithm proposed in this paper. If the output dimension is small, then the “Pure Neuron Detection” step does not work. Please clarify if I understand incorrectly. If this is indeed the case, I suggest discussing this strong assumption in the main text and listing the problem of relaxing it as an open problem. \n\n\nMinor comments:\n1. I suggest adding the following papers to the related work section in the final version:\nhttps://arxiv.org/abs/1805.06523\nhttps://arxiv.org/abs/1810.02054\nhttps://arxiv.org/abs/1810.04133\nhttps://arxiv.org/abs/1712.00779\nThese papers are relatively new but very relevant. \n\n2. There are many typos in the references. For example, “relu” should be “ReLU”.\n" ]
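As a concrete illustration of the moment estimates that comment 2) above refers to, the empirical versions of E[y ⊗ x^{⊗3}] and E[y ⊗ y ⊗ (x ⊗ x)] can be formed as below; this is our sketch, not the authors' code, and it materializes the full tensors, which is only feasible for small d and k:

```python
import numpy as np

def empirical_moments(x, y):
    """x: (n, d) inputs, y: (n, k) outputs, drawn i.i.d. from the network."""
    n = x.shape[0]
    # E[y ⊗ x ⊗ x ⊗ x]: a (k, d, d, d) tensor
    m_yxxx = np.einsum('ni,na,nb,nc->iabc', y, x, x, x) / n
    # E[y ⊗ y ⊗ (x ⊗ x)]: a (k, k, d, d) tensor
    m_yyxx = np.einsum('ni,nj,na,nb->ijab', y, y, x, x) / n
    return m_yxxx, m_yyxx
```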
[ -1, -1, -1, 7, 6, 7 ]
[ -1, -1, -1, 4, 4, 5 ]
[ "rJxkjtHThX", "H1xu6gRmhm", "H1eC-EI6n7", "iclr_2019_H1xipsA5K7", "iclr_2019_H1xipsA5K7", "iclr_2019_H1xipsA5K7" ]
iclr_2019_H1xsSjC9Ym
Learning to Understand Goal Specifications by Modelling Reward
Recent work has shown that deep reinforcement-learning agents can learn to follow language-like instructions from infrequent environment rewards. However, this places on environment designers the onus of designing language-conditional reward functions which may not be easily or tractably implemented as the complexity of the environment and the language scales. To overcome this limitation, we present a framework within which instruction-conditional RL agents are trained using rewards obtained not from the environment, but from reward models which are jointly trained from expert examples. As reward models improve, they learn to accurately reward agents for completing tasks for environment configurations---and for instructions---not present amongst the expert data. This framework effectively separates the representation of what instructions require from how they can be executed. In a simple grid world, it enables an agent to learn a range of commands requiring interaction with blocks and understanding of spatial relations and underspecified abstract arrangements. We further show the method allows our agent to adapt to changes in the environment without requiring new expert examples.
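As a rough sketch of the training interface the abstract describes, where the agent is reinforced by a learned reward model rather than by the environment, the rollout loop looks approximately as follows. All names here (env, policy, reward_model and their methods) are illustrative stand-ins, not the paper's API; the 0.5 binarization threshold is taken from the discussion thread below:

```python
def rollout_with_modelled_reward(env, policy, reward_model, instruction):
    """Collect one episode in which rewards come from the reward model."""
    state = env.reset(instruction)
    trajectory, done = [], False
    while not done:
        action = policy.act(instruction, state)
        next_state, done = env.step(action)
        # no environment reward: the learned model D(instruction, state)
        # is binarized at 0.5 to decide whether the goal is reached
        reward = float(reward_model.prob_goal(instruction, next_state) > 0.5)
        trajectory.append((state, action, reward))
        state = next_state
    return trajectory
```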
accepted-poster-papers
Pros:\n- The paper is well-written and clear, and presented with helpful illustrations and videos.\n- The training methodology seems sound (multiple random seeds, etc.).\n- The results are encouraging.\nCons:\n- There was some concern generally about how this work is positioned relative to related work and the completeness of the related work. However, the authors have made this clearer in their rebuttal.\nThere was a considerable amount of discussion between the authors and all reviewers to pin down some unclear aspects of the paper. I believe in the end there was good convergence, and I thank both the authors and reviewers for their persistence and diligence in working through this. The final paper is much better, I think, and I recommend acceptance.
train
[ "H1gboYgu2Q", "BkxYUMmnRX", "BygfD88v3Q", "BkgNO00FCm", "SkeqSjhY0X", "ryxWDrhFCQ", "SkeGFQD2aX", "SJlxrIwc6m", "SklRX9tw67", "HkgsOyrm67", "BJlGrMrza7", "rylCieSMp7", "Skl_UbSMTX", "r1emqB4Gp7", "rkeb7SNMTm", "HkextZ7MaQ", "H1llub7G6m", "Bkehr-QM67", "rkgpQWQMam", "H1lXVonl6m", "B1gzqfpeTX", "rJebwzal67", "rklL8IgSn7", "rygJ2iPkpQ" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author" ]
[ "The previous version of the paper was not clear enough in the motivation and uniqueness of the work. After a long and devoted discussion with the authors, we agreed on certain ways of improving the paper presentation, including connection to some related work. \n\nThe current paper is much better, so I would like to raise my score to 6. My revised review is: \n\n[orginality and significance]\n\n+ The paper deals with a challenging navigation problem where natural language instructions can be underspecified and the environment is complex---thus a correct reward function being extremely hard to craft. \n+ The paper proposed to use a <instruction, state> discriminator D to compute a pseudo reward at each step, which is then used to reinforce an agent in natural-language-guided navigation task. The paper proposed to train the discriminator in an adversarial way---with expert supervised data. The idea is neat, and its effectiveness is empirically supported by extensive experimental results. \n\n[clarity]\n\n+ The paper is well-written. The method is introduced with clear textual description, rigorous math formulations, and good illustration (Figure-1 and -2). The experiments are also well-documented, including training and testing details, results and analysis. \n\n[quality]\n\n+ The paper was not clear at certain points but the authors had helpful discussions with me and the paper was revised accordingly. \n+ The experiments were done with multiple random seeds, so I believe the results are convincing. The authors did not only show the numerical results but also shared qualitative videos through anonymous URL. Overall, it is a good paper.\n\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\nBelow is my original review\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n\n[PROS]\n\n[originality and significance]\n\nThe paper proposed to use a <instruction, state> discriminator D to compute the reward at each step, which is then used to reinforce an agent in natural-language-guided navigation task. The paper proposed to train the discriminator in adversarial way. The idea is neat, and its effectiveness is empirically supported by extensive experimental results. \n\n[clarity]\n\nThe paper is well-written. The method is introduced with clear textual description, rigorous math formulations, and good illustration (Figure-1). The experiments are also well-documented, including training and testing details, results and analysis. The experiments were done with multiple random seeds, so I believe the results are convincing. The authors did not only show the numerical results but also shared qualitative videos through anonymous URL. Overall, it is a good paper. \n\n[CONS]\n\n[quality]\n\nThe major issue of this paper is the lack of connection to existing related work in the field of dealing with reward sparsity problem. This is a long-standing problem in RL (very common in, but not only restricted to, navigation tasks) and people have proposed reward shaping techniques to handle it. But the paper did not discuss any work in this direction. 
For references, please first check this seminal work and then follow the line of research: \n\nNg, Andrew Y and Harada, Daishi and Russell, Stuart, ICML 1999, Policy invariance under reward transformations: Theory and application to reward shaping\n\nThe method proposed in this paper seems to be a way of automatically shaping the reward, but it loses the optimal policy invariance (for how this invariance is ensured in reward shaping, please check out this tutorial: http://www-users.cs.york.ac.uk/~devlin/presentations/pbrs-tut.pdf). \n\nThe proposed method has two key components: 1) the discriminator D; and 2) the adversarial training. The method is shown effective in experiments and outperforms appropriate baselines with actual reward. But the design of D and how it is used as a reward function seem somewhat ad hoc. \n\nD is only trained on the final states of episodes (please correct me if I am wrong), but is used at all the steps as part of the reward function to determine the stepwise reward, which seems odd. The authors should discuss what (implicit) assumptions they are relying upon to make this method work in this way. The transformation function from D to a reward value seems ad hoc---e.g. why 0.5, why an indicator function instead of others (e.g. a scaling of the indicator function), how it is generalized to non-1/0 (but still sparse) reward cases, etc.? Is the method only designed for 1/0-reward cases? The authors should clearly specify if it is the case. \n\nMoreover, the paper compared to RP (Jaderberg 2016), which still reinforces the agent with actual reward but only *shapes the features of the agent* by multi-tasking on predicting the reward of the next step (please correct me if this is wrong). Interestingly, the RP method achieves better performance than the proposed method, although it does not address the reward sparsity problem. Could the authors provide any insight about why this happened? Is there any trade-off between these two methods? Is there any setting, in the authors’ opinion, where the proposed method should outperform RP? \n\n[SUMMARY]\n\nI think this is good work---neat idea, nice results and clear writing. But there are indeed some issues that I hope the authors could address. So I gave a score of 5. \n", "We thank Reviewer 3 for taking the time to consider our response and giving us a detailed explanation of why our current presentation of differences between AGILE and GAIL may be confusing. In the camera-ready version we will make sure to move the explanation of these differences from Section 4 “Related Work” to Section 2 “Adversarial Goal-Induced Learning from Examples”. The results of the experiments with log D(...) as the reward will also be included in the paper, either in the main text or in the appendix. \n", "\n==========\nUpdate\n==========\n\nUpon reviewing the paper revision and the author comments to my and the other reviewers' comments, I will revise my suggestion to that of acceptance. As I said in my summary, my primary concern was novelty with respect to prior work, which the authors have clarified. They have also increased the rigor of their experimental results by providing variances in the plots.\nI think this work will be of interest to the community.\n\n\n==========\nStrengths:\n==========\n\n- The problem of learning to predict state rewards given language is interesting and useful. 
\n\n- The proposed AGILE framework is intuitively simple and works with any existing RL framework.\n\n- With the models and tasks explored in this paper, the approach does seem to learn to evaluate whether a state matches the instructions quite well. \n\n- The writing is very clear and direct. \n\n==========\nConcerns:\n==========\n\n[A] The discussion of differences from the closely related GAIL methodology is left until the related-work section, after the experiments. Given the similarities between GAIL and AGILE, this seems too late. The authors list three major differences between AGILE and GAIL:\n\t\n1) AGILE is conditioned on a goal specification, language in this case. GAIL is unconditioned and trained for one task.\n2) AGILE takes only the final/goal state rather than a trajectory like in GAIL.\n3) AGILE discretizes the discriminator probability when assigning reward, GAIL does not.\n\nSome concerns about each:\n\t\t\n1) This is an interesting and fair difference but also a necessary and somewhat obvious modification to GAIL in tasks with explicit goal-specification. \n\n2) This does not seem like an improvement, but rather a loss of generality. The authors justify this change saying \"in AGILE the reward model observes only states s_i (either goal states from an expert, or states from the agent acting on the environment) rather than traces (s1, a1),(s2, a2), . . ., learning to reward the agent based on “what” needs to be done rather than according to “how” it must be done.\" \n\nIn many real applications, the how is deeply important. For instance, navigation in the world is both a \"what\" (arrive at location X) and a \"how\" (in the fastest time, without hitting anything, or in such a way that humans aren't frightened). Further, the trace includes the final state such that the \"what\" is recoverable in instances where the \"how\" is unimportant, as in the set of tasks presented in this paper. \n\n3) Letting the paper speak on this subject: \"We considered this change of objective necessary because the GAIL-style reward would take arbitrarily low values for intermediate states visited by the agent, as the reward model will be confident as those are not goal states. The binary reward in AGILE carries a clear message to the policy that all non-goal states are equally undesirable.\" Firstly, all non-goal states are not equally undesirable in that some lead more easily to goal states, though it is fair to argue this should be learned by the policy through expected reward. My primary gripe is the footnote following these sentences, which says: \"We tried values other than 0.5 for the binarization threshold, as well as not binarizing and using Dφ(c, st) directly as the reward. We got similar but slightly worse results.\" This seems to imply that this difference does not matter significantly, especially if different thresholds received significantly different hyperparameter-tuning effort or the comparisons were not conducted over multiple random seeds.\n\nA pessimistic summary would characterize AGILE as a conditional GAIL with reduced ability to represent intermediate or trajectory-based rewards and a possibly slightly helpful reward discretization scheme. Don't get me wrong, I think a conditional extension to GAIL is interesting and worth sharing with the community. However, this discussion comes very late and includes design decisions (2/3) that I find poorly justified in text and completely unjustified experimentally. \n\nI would like to hear from the authors if any of these criticisms are inaccurate. 
I would also welcome experiments evaluating the effect of these design decisions.\n\n[B] In 3.2 it's reported that each experiment was repeated five times; however, the presented results are not described as means and no variances are shown. I would like to see the results plots with shaded variances from at least 5 runs with differing random seeds. \n\n[C] Unless I'm mistaken, the proposed architecture could also be trained with reward prediction. It would be interesting in that case to see if the improvement seen between A3C and A3C-AGILE extends to A3C-RP and A3C-RP-AGILE. As the authors note, the AGILE framework simply changes the source of the reward and is amenable to any RL approach. I would like to see this comparison.\n\n[D] The reward generalization experiments seemed surprising to me. The policy was fine-tuned on the test environments but only improved from 52% to 69.3%. Trying to think about this more, I'm having trouble disentangling whether this implies poor generalization of the reward function or increased difficulty in policy learning. Could the authors provide the A3C and A3C-RP baselines for this experiment to help clarify?\n\n[E] Just a Curiosity: What exactly is done in L2 weight clipping? (Training details in supplement)\n\n[F] Just a Thought: In the reward-prediction (RP) setting, both the RP model and the policy share parameters. It would be possible with such an architecture to still apply the AGILE loss, and I would be curious to see if this leads to interesting changes in performance. I understand that one of the advantages of learning a separate reward model is to generalize to new policies, but it is unclear if this approach would generalize less well (and finding out would be cool!)\n\n==========\nOverview:\n==========\n\nI think extending generative adversarial imitation learning to a task-conditional setting is a cool step, made even more interesting in this work by having the task specification be in compositional language. Further, the results and analysis are generally interesting, though I do note some weaknesses above. Aside from some questions about the experiments, I'm mostly concerned about the positioning of the paper -- specifically with respect to prior work. I'm looking forward to hearing from the authors and other reviewers. \n\n\n", "I thank the authors for their replies to reviewer comments throughout. I've read them as they've come in but have to apologize for my slowness in replying.\n\nRe: \"We will be honest: if indeed the crucial failing of the paper is its failure to successfully position itself with regard to GAIL, in particular due to the placement of the paragraph discussing comparison, we find that a score of 4 is a little strict.\"\n\nAs we are speaking honestly (though I hope we need not differentiate between doing so and not), given what I take to be strong similarities between AGILE and GAIL, its discussion being presented so late and without much fanfare left me as a reader with the lingering sense of being duped! There are strong ties between the two methods, both in terms of mechanics and motivation, that should be discussed -- if for no other reason than to allow readers new to the area a useful notion of heredity. While a rating of 4 is a bit strict, it was given as a worst case should you folks not respond. As I said in my summary, I did in fact look forward to hearing from you :) \n\nAnd hear from you I have! Thanks for the thorough response to my points / questions. \nI'll respond to a few below:\n\nRe: Trajectories vs. 
End States\n\nI appreciate the argument for the flexibility of end-state specification. I will point out that, as far as I understand, in both AGILE and GAIL the intermediate trajectory is rewarded according to the trained discriminator -- the key difference being GAIL's reward predictor is trained on trajectories whereas AGILE's is trained on goal state / condition pairs. I think a clearer discussion of the limitations (and flexibility) this choice provides would be very useful. Further, an experimental comparison with and without full-trajectory training of the discriminator would be very useful.\n\n\nRe: Form of Reward\n\nThank you for the clarification. I did miss the log being dropped. With a refocusing of this section on this difference rather than the discretization, this should be fine. Doubly fine if an appendix were to include ablations of these choices.\n\n\nRe: Extension to Instruction-Conditional IRL\n\nAs I said in my review, this is an interesting and fair difference and, as I echo in my summary, an interesting part of this work. I do maintain it is somewhat of an obvious extension, though that does not limit its usefulness. Listing it under a concern was thoughtless of me.\n\n\nRe: Variances\n\nThanks for adding the variances! Given the high variance of RL methods in general, adding these is a significant help to the community! Looking over these, the claims of the paper still hold well.\n\n\nI will revise my review rating to reflect my increased confidence in this submission. As a personal note, I would like to thank you for your detailed responses to my concerns and those of my much-more-talkative fellow reviewers. The discussion as a whole has been valuable. \n", "We would like to report results of several additional experiments that we have run based on the valuable suggestions by Reviewer 3 (R3).\n\n- We have tried using GAIL-style rewards, r_t = log D(c, s_t). We found that such a modification of AGILE would not perform better than a 70% success rate, much as expected. The policy’s return \\sum_t log D(c, s_t) would keep decreasing all the time, as the discriminator grew confident in rejecting non-goal states and output very low values of log D(c, s_t), in line with our intuition that using GAIL rewards punishes the policy arbitrarily for entering intermediate states. We would be happy to incorporate these extra results in the paper should R3 find it necessary. \n- We have compared results of a fine-tuned AGILE policy on the immovable red square task with those of RL with ground-truth rewards. We used the soon-to-be-open-sourced PPO-based reimplementation. In this reimplementation, fine-tuning PPO-AGILE has improved the performance from 65% to 82%, which is pretty close to the 86% success rate of the PPO baseline. From this we conclude that the imperfect 82% performance of fine-tuned PPO-AGILE is mostly due to the increased difficulty in policy learning, and not poor generalization of the reward model. \n\nWe hope that R3 finds these extra results informative. 
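For clarity, the two reward forms compared in the experiment reported above can be written as follows; this is our sketch, with D standing in for the learned discriminator D(c, s) taking values in (0, 1):

```python
import math

def agile_reward(D, c, s, threshold=0.5):
    # binary reward, bounded below by 0
    return float(D(c, s) > threshold)

def gail_style_reward(D, c, s):
    # unboundedly negative as D(c, s) -> 0 on non-goal states
    return math.log(D(c, s))
```

The experiment above matches this picture: with log D(c, s) as the reward, confident rejections of intermediate states drive the return down without bound, whereas the binarized form leaves all non-goal states at reward 0.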
We believe our prior arguments about differences between AGILE and GAIL, together with these extra results, address most of R3’s concerns, and we hope that they consider revisiting their evaluation of the paper.\n\nBesides the extra results presented above, we have uploaded a revised PDF with the following changes:\n- standard deviations are now displayed as shades in all plots\n- weight norm clipping is explained better in the Appendix, and a reference to the paper where this method was proposed is added \n", "We thank the reviewer for revising their score, and more importantly, for actively participating in the discussion period. As you well know, with the growth of the field, it has been difficult for conferences to maintain a consistent standard of examination throughout the review process whereby reviewers are guaranteed to be thorough and fair. We appreciate your commitment to the clarification of this paper during this discussion, and without intent of flattery, can assure you that you have been thorough. We ask now, with utmost respect, that you be fair.\n\nYou have revised your review to indicate, through your score, that the paper is marginally acceptable. Yet the substance of your review indicated that, thanks in part to the clarifications you assisted us in producing, the paper is good. You kindly state, in support, that “the paper deals with a challenging navigation problem”, that “the idea is neat, and its effectiveness is empirically supported”, that “the paper is well-written”, and “the results are convincing”. On the basis of this review, you conclude “Overall, it is a good paper”.\n\nWe appreciate how this request may come across, but after the effort you yourself have committed to the improvement of this paper, we only ask that you either consider assigning it a score which reflects the strength you find in it, or alternatively give us some indication of where it still falls short so as to merit a borderline score. We appreciate, as always, your patience and diligence in this matter, and will respect your decision either way.", "\nThanks. I think the 1st paragraph of the intro (as well as other modified parts) is much better now. \n\nA brief example in the abstract may still help, e.g., *the complexity of the environment and the language scales---e.g. what is the goal state of \"build an L-like shape from red blocks\" while there might be infinitely many valid positions and orientations of the target shape.*---but this is not crucial. Figure-1 is obvious enough. \n\nI would like to revise my review and score. ", "Thank you for your recommendations. We are happy to make modifications to the paper to ensure both greater clarity and better links to existing research. We hope you will find them appropriate, and are open to further tweaks if you judge they are necessary.\n\n> The key motivation/challenge of this work is NOT obvious enough in either abstract or introduction.\n> Maybe you can try having one example in your abstract and introduction? [...] 
you only need to adjust the positions of examples and make them more connected to the relevant text.\n\nWe make the reason for this paper clearer by adding an example motivating, in our environment, the design of such reward models, in the new Figure 1 which is placed in the introduction.\n\n> I still recommend you to discuss connections to reward-shaping work---it can be light as a few sentences, but it is indeed connected---the simplest setting that you care may have an easy one-to-one instruction->goal state mapping so reward-shaping can be done. In other cases, your work is the solution. \n\nWe add, in the discussion, after “Our analysis of the reward model’s classifications gives a sense of how this is possible; the false positive decisions that it makes early in the training help the policy to start learning.”, the following sentences: “The fact that AGILE’s objective attenuates learning issues due to the sparsity of reward states within episodes in a manner similar to reward prediction suggests that the reward model within AGILE learns some form of shaped reward (Ng et al, 1999), and could serve not only in the cases where a reward function need to be learned in the absence of true reward, but also in cases where environment reward is defined but extremely sparse. As these cases are not the focal area of this study, we note this but leave such investigation for future work.”\n\nPlease see the updated PDF for changes.\n", "\nGreat to be on the same page. \n\nThe real problem is: The key motivation/challenge of this work is NOT obvious enough in either abstract or introduction.\n\nYour rebuttal/clarification was very helpful, and I think you could extract useful content in our back-and-forth to make your points more highlighted. \n\nSome examples in abstract and introduction may help. Textual descriptions may sometimes be misleading and ambiguous (see---ambiguity is a really big issue :-). You wrote: \n\n*designing language-conditional reward functions which may not be easily or tractably implemented as the complexity of the environment and the language scales.* (in abstract)\n\n*This interpreter must be able to evaluate the instruction against environment states to determine what reward must be granted to the agent, and in doing so requires full knowledge (on the part of the designer) of the semantics of the instruction language relative to the environment* (in introduction)\n\nI could also find other examples like these, but none of them was clear enough that: for any given instruction, there may be many (or even infinite) final states that are correct, and deciding on whether a given one is correct may have to get human involved. But an example might be much clearer. \n\nYou had GridLu Arrangements examples in Figure-2 that could illustrate your point of *many goal states for any instruction*, but not explicit enough (and a little lack of textual description). \n\nMaybe you can try having one example in your abstract and introduction? (Of course, the one in the abstract should be quite short.) Maybe you can put the most challenging example in your dataset to the introduction, in order to show the challenge and motivate your work. I am not suggesting a major surgery to the paper structure---instead, you only need to adjust the positions of examples and make them more connected to the relevant text. 
\n\nI still recommend you to discuss connections to reward-shaping work---it can be light as a few sentences, but it is indeed connected---the simplest setting that you care may have an easy one-to-one instruction->goal state mapping so reward-shaping can be done. In other cases, your work is the solution. \n\nI would like to raise my evaluation score as I believe these are fairly easy to do in a proper way within the rebuttal period. \n\nI would also encourage you to compare with reward-shaping in the datasets where a reward-function is indeed accessible---it is interesting to know. But since it is not the focus of the paper, this comparison is optional.", "We thank you again for continuing to engage in this discussion and for reading our lengthy response. We truly appreciate your patience and understanding.\n\n> In the GridLU-Arrangements task, there are many possible goal states for any given instruction, each of them should be associated with ground-truth reward 1 (that is used for success rate computation), and other states should give ground-truth reward 0.\n\nThis is correct, although this is true of many tasks and environments across the RL literature (although not all).\n \n> However, there may be a lot of ambiguity and uncertainty in the natural language instructions, and thus variabilities in the goal states.\n\nThe motivation for methods like AGILE is to deal with cases where we want to learn policies that act based on ambiguous or under-specified instructions, for which is may be difficult or impossible to write a reward function within the environment. Instructions in GridLU-Relations may be underspecified, but not to the point where we can’t write a reward function (which admittedly required a surprising amount of code despite the relative simplicity of the instruction language); but this is not the important point, the important point is that we show that a framework which does not have access to the reward (even if the reward function is implemented) and can train a policy as well as one which does, and hence that this framework is useful in environments where it doesn’t have access to the reward *because* the reward function isn’t implemented.\n\n> Even though one can find human annotators for evaluation, it is not practical to use them for training the agents.\n\nYes, being able to deal with such scenarios is the motivation for the development of this framework. We emphasise again that this one of the key underlying motivations for all IRL approaches, not just ours.\n\n> Am I right? \n\nWe think we are on the same page now!\n\nThe question we would put to you, if this addressed your concerns, is (perhaps upon re-reading the relevant sections) is there anything we could clarify to avoid this misunderstanding our ambiguity in portraying our method and its motivation? We have a clear picture of why and how our framework is used, and thus are biased against perceiving unhelpful ambiguities in our presentation, but we are ready to accept that they are there if an informed reader such as yourself had trouble with this aspect of the paper. We are confident that, if there’s anything we could tweak to improve the clarity of the paper, we can do so during the rebuttal period with your help as a byproduct of this discussion.\n", "As I said in previous replies, the recommendation of future work is not taken into account of the submission evaluation---I only mention it for the completeness of my review. You are of course free to decide what to include or not :-). 
", "\nThanks for the long and detailed response. \n\nI think we are now converging on some point and let's get it clear here. \nAfter carefully reading all your replies (part1---4), I summarize my understanding here:\n\nIn the GridLU-Arrangements task, there are many possible goal states for any given instruction, each of them should be associated with ground-truth reward 1 (that is used for success rate computation), and other states should give ground-truth reward 0. However, there may be a lot of ambiguity and uncertainty in the natural language instructions, and thus variabilities in the goal states. So there is no (ground-truth) function available that can identify if any given state is the goal or not---except for human annotator. Even though one can find human annotators for evaluation, it is not practical to use them for training the agents. \n\nTLDR: There is no (easy-to-be-)known ground-truth reward function. \n\nAm I right? \n\nRemarks: As I said, I think this is a good paper, and I would like a deep discussion with authors to give a fair evaluation. And I totally understand the authors are eager to convince me to re-evaluate the paper. But before we move on, we should first be on the same page about this above point. ", "See my reply to part 3 for details. ", "(see part 1 for the beginning of the response)\n\nBearing in mind our main argument that GAIL and AGILE have fundamental differences which can and will be presented in more detail in the revision we are drafting, with the help of your comments and questions, we hope that we have addressed the core concern that “AGILE to be a conditional [IRL, rather than GAIL] ] with reduced ability to represent intermediate or trajectory based rewards and a possibly slightly helpful reward discretization scheme” in that it presents a novel approach to learning inherently multi-task (language-conditional) policies from expert data, provides a set of experiments to prove the concept thereof, and while differentiating itself from related work, is reasonably clear about its self-imposed limitations and about where future research can make improvements.\n\nWe conclude our response to by replying to the rest of your comments:\n\n[B] Thank you for the suggestion to plot shaded variances, we will do so and update our submission as soon as possible.\n\n[C] In our early investigations the reward prediction objective did not help AGILE as much as it helped A3C. We find it unsurprising as reward prediction is especially necessary in cases when the reward is sparse, and AGILE’s reward is more dense than the groundtruth one. This may also have something to do with the fact that the AGILE rewards are constantly changing, and hence predicting them may be not as valuable of an objective. \n\n[D] “We thank the reviewer for posing this very interesting question. We trained the A3C-RP baseline with immovable red square in our preliminary investigations and it also did not achieve a perfect performance. We believe that learning the policy that can deal with the remaining corner cases may be the bottleneck in this case, however, we will rerun this experiment and update you on the exact numbers.“\n\n[E] The L2 weight clipping makes sure that incoming weights of a neuron have a total L2 norm of at most C, where C is a hyperparameter. This regularization is a hard alternative to weight decay proposed (to the best of our knowledge) in https://arxiv.org/pdf/1207.0580.pdf . We will make sure to add a description of this method in Appendix. 
\n\n[F] Thank for this interesting experiment suggestion. Sharing weights between the policy and the discriminator is indeed an option, which we have not tried mostly because of the stability concerns. We will definitely consider this in our future work. \n", "We thank reviewer 3 for their very detailed comments, which while critical of the paper, give us much to respond to, and much to consider with regard to improving the paper. We will be honest: if indeed the crucial failing of the paper is its failure to successfully position itself with regard to GAIL, in particular due to the placement of the paragraph discussing comparison, we find that a score of 4 is a little strict. However, we appreciate the reviewer has been very rigorous in explaining their points regarding this weakness. We hope that through discussion we will both be able to present cogent counter-arguments to these objections where applicable, and otherwise satisfy the reviewer that the positioning of the paper can be improved, to their satisfaction, with minor revisions to the paper.\n\nAs the reviewer correctly summarized and if we put the conditioning on instructions aside, a key key difference (item 2) of AGILE from GAIL is the fact that AGILE uses images of goal-states as the training data instead of state-action trajectories required for GAIL. We believe that this difference does make AGILE applicable to situations where GAIL would not be. For example, consider the case when the expert that provides demonstrations has effectors different from that of the trained agent (image a human teaching their household robot to arrange objects on a table: the robot won’t necessarily have 5 fingers). Besides, methods such as GAIL force the learner to imitate the training trajectories in their entirety, which can be a limitation when training trajectories are suboptimal (the above example with the human is applicable again, human may not use the shortest path to the goal when they perform their manipulations). Relying on goal-states only brings a certain extra flexibility that GAIL-like methods do not provide, admittedly, at the cost of restricting the applicability of AGILE to declarative instructions, i.e. those which can be verified by the final state only. We believe, however, that such instructions make a common and frequent case in instruction-following setups, and methods that are optimized for this case (such as AGILE ) are worth investigating.\n\nReviewer has also requested a clarification with regard to the difference between rewards provided by the discriminator in GAIL and AGILE. We admit that the paper may have been not clear enough on this point, as it stressed the discretization of the reward, whereas the real key difference is the switch from log D(...) as the reward in GAIL to (possibly discretized) D(...) in AGILE. The discriminator is trained to output arbitrary low values of log D(c, s) in AGILE for the non-goal-states, and using log D(c, s) as the reward would arbitrarily punish the policy for entering intermediate states that are clearly not goal-states but yet may be useful in the near future. We are currently performing an extra experiment to validate this claim, in which we use log D(c, s) as the reward, and so far we do not see the job getting above 70% success rate, confirming our intuition that GAIL-style log D(c, s) is not an appropriate reward for AGILE. We will update you on the progress of this additional investigation. 
\n\nNotwithstanding the above differences with GAIL, it’s true that GAIL and AGILE both can be broadly characterised as inverse RL methods, and as such AGILE could be presented as “just” extending IRL to the instruction-conditional case (and we believe we are reasonably up-front about this in the introduction). With all due respect, it is unfair to characterise this observation as a concern, or qualify it as an “obvious extension”: it is, to our knowledge, one of the first substantial studies of instruction-conditional IRL over pixel observations. Experiments show generalisation of the reward model to both new observed states and held-out instructions. The model and objective design underpinning these results are, while inspired by GAIL and related techniques, original insofar as new methods (e.g. discriminator rejection) had to be incorporated to make it work. We think that, on the strength of extending IRL techniques to multi-task contexts, where the task is specified by language, is new enough to warrant publication, and hope you will agree.\n\n(see part 2 for continuation)\n", "With apologies for the length of our response, we hope this further clarifies things and shows the positioning not just of our work, but of IRL-related methods in general (against which the objections you raise here would also be levelled) with regard to the sparse reward problem and reward shaping. These are fundamentally different methods, that address different and orthogonal problems against the goal of teaching agents to act appropriately within the context of a task. If this discussion of the distinction satisfies you, we hope you will consider revising your score. If not, please let us know where our definitions continue to diverge.\n\nWe would like to conclude on the following conciliatory note: while we believe that your suggestions are based on a conflation of the spare reward problem with the case where reward functions are not available, your remarks do seem to highlight that the methods used to model reward in the latter case might be applicable to the former. This is not an unreasonable research direction, and we have had the opportunity to reflect on it, inspired by our discussion. Notwithstanding its merit, this research direction is separate to both the framing and motivation of this paper, and is not explored within the experiments. We could mention this within the further work section of the paper, but it would feel like taking credit for an idea that came out of this discussion, so our preference would be to not do so. Let us know how you feel about this proposed analysis and resolution.\n", "> reward function is what we use to REINFORCE the RL agent---it sometimes is the ground-truth (esp. in cases where the ground-truth reward is rich and dense), but sometimes has to be specified (esp. in cases of sparsity.)\n\n*Reward* is what we use to optimise agents using RL algorithms. As stated previously/above, RL algorithms are intrinsically agnostic to where this reward comes from. When a reward function is implemented within the environment, the reward it provides can be used to train an agent. 
AGILE, GAIL, and other IRL methods are all about *learning* this reward function from auxiliary data, in the absence of a reward function implemented in the environment.\n\n> The ground truth reward is indeed 1-0 (depending on if goal state is reached) in all your cases because you eventually evaluate the RL agent on success rate---it is implicit, but it exists!\n\nWe refer you to our comments about the experimental set up above. The fact that a reward function exists in GridLU-Relations and is used for automated evaluation and the training of traditional RL baselines has not bearing on the nature of AGILE (or any other reward modelling approach). The reward signal it provides is not used, and for all intents and purposes does not exist. It’s sole purpose is to compare methods which do not rely on the provision of an environmental reward function to those which do, with the clear caveat that the latter are not applicable in as wide a selection of settings the former.\n\n> (In more details, when you write ``the agent made the correct arrangement in 58% of the episodes’’, you indeed implicitly assign 1 to the correct arrangement but 0 to others, right?)\n\nAt test time, we explicitly and manually assign 1 to the correct arrangement and 0 to others. This is only for evaluation. The parameters of the model are not updated on the basis of these runs. To be clear: the policy networks require millions of episodes to learn a decent policy (as is the case across DeepRL) and it is not realistic to solicit human evaluation for each of them.\n\n> But in your settings, there is *no (good) reward functions (except the ground-truth one)* available, and you do not want to manually craft one\n\nWith the risk of repeating ourselves with regard to the experimental protocol, for AGILE there is no ground truth reward function. Only for the purpose of evaluating “traditional RL” baselines, for comparison of the final policies (not the parameter inference procedure!), is there a “ground truth” reward function, but it is not used by or exposed to the reward model or policy within AGILE. As an aside, the need for reward models is also motivated by the case where you *can’t* write a reward function (e.g. imagine formally verifying the completion of ambiguous or underspecified instructions like “clean the room” or “set up the table” against a rich environment), not just the case where you don’t want to.\n\n> you design a method to automatically learn a pseudo/surrogate/fake-reward function.\n\nThink of it this way: we want agents to maximise the reward a human would give them if they were observing the agent all the time during training (let’s assume this is impossible, because the agent needs millions of episodes). Typically, reward functions are implemented by humans to “simulate” having this human present during training: they are designed to align with what human judgements would be. Now, if this reward function hasn’t been implemented, what can we do? IRL methods such as GAIL, AGILE, etc are techniques which, based on some evidence from an expert, attempt to “reverse engineer” what the reward function expressing alignment with human values would be. So it’s not about learning a “pseudo/surrogate/fake-reward function” so much as it is about trying to learn what the “true” reward function could be.\n\n> TLDR: Sparse ground truth, no (other) reward function. 
Does it make sense to you?\n\nUnfortunately not, as our experiments for AGILE are about “*No* ground truth, no (other) reward function” (except the one which we are learning).\n", "Now, let us turn specifically to your comments.\n\n> every time a *reward* is mentioned in the paper, it means different things---sometimes *ground truth reward* but sometimes *reward functions*.\n\nReward functions are functions which take states (and, in our case, also instructions) and return a scalar reward for the associated timestep where the state was observed. In an environment like Atari games or Go, it is defined within the game logic or the rules of the board game. In “no-reward” scenarios, it has not been implemented, and must be learned by exploiting some auxiliary data (trajectories for GAIL, human preferences in papers like arXiv:1706.03741, or <instruction, goal state example> pairs in AGILE). In contrast, “reward” means “whatever scalar value we are maximising the expected value thereof by changing the agent’s parameters”. It is a scalar value (possibly just 0) provided to the agent at each timestep. The RL algorithm we use to optimise the agent for both the RL baseline (which uses environment reward in experiments where it is available) and the agent in AGILE (which does not, and therefore is applicable in environments/tasks where no reward function exists) is agnostic to the source of the reward. We discuss this at the end of the 2nd paragraph of section 2 (“We note that Equation 1 differs from a traditional RL [...] the reward model to the environment.”).\n\n> Ground truth reward (term borrowed from your paper and rebuttal) is what we eventually evaluate your RL agent on\n\nThis is perhaps where the source of confusion comes from. In order to compare policies trained using AGILE (a framework meant to be used when there is *no reward* provided by the environment during training) to policies trained under the “idealised” case where an exact reward function exists in the environment (and provided reward), we implemented, in one of our tasks (GridLU-Relations) a reward function, but only use it to train the A3C and A3C-RP baselines, and pretend that it does not exist when training with AGILE. You are correct that we used this “ground-truth” reward function to automate evaluation in our experiments, but this is purely for convenience: this evaluation does not in anyway play a role in the updating of agent parameters, reward model parameters, or any other part of AGILE. For all intents and purposes, we could have done this evaluation manually, using a human, (as was done for AGILE-Arrangements) and the results would be exactly the same.\n\n> it always exists (I will shortly claim what ground truth reward I think is in your cases, including GridLU-Arrangements)\n\nThis is true of any environment: the objective reward “exists” in that, for a particular task, a human (or other expert) could observe the environments and provide judgements which, if sufficient in number, could be used to train an agent against an RL objective. As we discussed when contrasting “no reward” scenarios to “sparse reward” scenarios, the possibility of occasionally soliciting reward signal from humans and directly optimising the expected value is broadly intractable for all but the most trivial tasks, environments, and agents. 
When we talk about reward “not existing”, we simply mean that there is no reward function implemented as a program in the environment which would provide reward signal (sparse or otherwise) which would permit tractable optimisation of an agent.\n", "We thank you for your rapid response, and appreciate your being so willing to engage in the discussion. We say this with the deepest respect: there seems to be some fairly fundamental misunderstanding of AGILE (and reward modelling in general) at play here, but we are ready to accept this my in part be due to how we have explained things. We would like, here, to help clarify this misunderstanding, not only with the intent of convincing you that the comparison to related literature made in the paper is fair and (to the extent it can be with the space allowed) fairly complete, but also with the intent of tweaking the description of the method where necessary so that other readers may not arrive at the same conclusions as you.\n\nBefore responding to your latest comments, allow us to state what we understand to be the “sparse reward problem”: in some settings/tasks/environments, the set of states within an episode where the agent receive a scalar reward signal is a very small proportion of the total states experienced (e.g. just the last state). This makes training an agent difficult, since a particularly complex credit assignment problem, with possibly long-range and structured dependencies between rewards and actions, needs to be solved. To this end, techniques such as reward shaping or reward prediction have proposed to give a denser “signal” based on which the agent can effectively learn to maximise the expected reward, in spite of the sparsity of the “true” reward signal.\n\nNow consider the cases where there is, quite literally, no reward function implemented in the simulator or environment. This means you *cannot* normally train an RL agent. Of course, a human *could* observe an agent operating in such environments, and assign reward but this fairly obviously would not scale to the requirements of any non-trivial agent and environment (even for classical tabular RL approaches). So in that sense, the IRL literature, GAIL, or the present method (AGILE) are not typically seen as addressing a sparse reward problem, but rather exploiting a small amount of expert data in order to extrapolate the reward function that an expert could conceptually have provided to the environment, but didn’t because it was either impossible (because there is no formalisable verification function over the desired task due to ambiguity or underspecification), or intractable (because implementing such a function would be too onerous). Finally, and perhaps most crucially with regard to the orthogonality between no-reward scenarios and sparse-reward scenarios: the reward provided by a reward model in a no-reward scenario might itself be sparse (and therefore require reward shaping techniques to be applied for the agent to learn a policy).\n", "\nThanks to the authors for such a clear and detailed clarification---it indeed helps. \n\nBut before I make a decision on re-evaluation, we need to further clarify something important. \nI think the most misleading thing is: every time a *reward* is mentioned in the paper, it means different things---sometimes *ground truth reward* but sometimes *reward functions*. \n\nLet's first make a clear distinction between them (even though in many other cases we do not have to, I think it is particularly necessary in this paper). 
Ground truth reward (term borrowed from your paper and rebuttal) is what we eventually evaluate your RL agent on---so no matter how sparse it is, it always exists (I will shortly claim what ground truth reward I think is in your cases, including GridLU-Arrangements); reward function is what we use to REINFORCE the RL agent---it sometimes is the ground-truth (esp. in cases where the ground-truth reward is rich and dense), but sometimes has to be specified (esp. in cases of sparsity). \n\nWe describe your settings differently---I call it *sparse reward* and you call it *no-reward*. I think they are both correct---but in different senses. The ground truth reward is indeed 1-0 (depending on if goal state is reached) in all your cases because you eventually evaluate the RL agent on success rate---it is implicit, but it exists! (In more detail, when you write ``the agent made the correct arrangement in 58% of the episodes’’, you indeed implicitly assign 1 to the correct arrangement but 0 to others, right?) But in your settings, there are *no (good) reward functions (except the ground-truth one)* available, and you do not want to manually craft one---then you design a method to automatically learn a pseudo/surrogate/fake-reward function. TLDR: Sparse ground truth, no (other) reward function. Does it make sense to you? \n\nThe connection to reward-shaping might have been more obvious if we agree on the points above: 1) neither reward-shaping nor your method uses the ground-truth reward as the reward function; 2) both reward-shaping and your method reward agents early for reaching some states that are closer (in any appropriate sense) to the goal states. To elaborate 2), I quote Ng et al 1999 here: ``to encourage moving towards a goal, a shaping-reward function that one might choose is F(s,a,s’)=r whenever s’ is closer (in whatever appropriate sense) to the goal than s, and F(s,a,s’)=0 otherwise, where r is some positive reward’’. Does it sound similar to your definition of \hat{r} on page-2? In more detail, you learn a neural function to identify goal states, whose output will be > 0.5 when the current state (representation) is close/similar enough to the goal state, so you give the RL agent a positive reward at this step. Is this argument right? \n\nThere is indeed a difference between (potential-based) reward-shaping and your method. You may lose optimality (which means: using the pseudo/surrogate/fake-reward function, the optimal policy you obtain will still give you maximal ground-truth 1-0 reward, i.e. your success rate)---many guarantees are missing when people use flexible neural components, so I am not criticizing it. Your method empirically works well in practice---that is good. But the connection needs to be carefully discussed. Or maybe you can even carefully craft your learned reward function such that your method can preserve optimality---you do not have to do this in this paper and rebuttal, but it might be a future direction. \n\nDoes this response clarify my points? \n\nAfter all, your rebuttal is indeed helpful, because it (indirectly though) clarifies my concern about this paragraph ``D is only trained on the final states of episodes … only designed for 1/0-reward cases?’’ Let me answer my own question here: the authors consider the (language-instructed-)navigation-type tasks where the RL agent learns to achieve goal states (in whatever sense). 
In such cases, the equivalent ground truth reward is always 1-0, so the way the pseudo-reward is tied to the goal-state discriminator seems general enough. The goal-state discriminator is only trained on final states, but can be deployed on all states, because its job is to find the states ``that are close enough to the goal’’.\n\nIn the end, as I claim in my review, the work is well-motivated and neat. But it needs clarification on some seemingly subtle but important points, and it should be more tightly connected to past related work. ", "Finally, to answer your questions at the end of the review:\n\n1) The primary stability issues that emerge when training have to do with the reward model decaying when the policy starts performing well, which led to our false negative elimination method introduced in “Dealing with False Negatives”, which in turn led to stable training. Despite the “non-stationary” nature of the reward model, there was no need to manually change the learning rate during training (although note that we used RMSProp optimisers for both the policy and reward model, which adjust the learning rate over time as a function of historical gradient norm).\n\n2) In AGILE, as is the case for GANs and GAN-inspired methods such as GAIL, drawing the negative examples from the “generative model” (the policy) is crucial for obtaining a discriminator which is tailored to evaluating the policy against the reference goal states. For any non-trivial problem with a reasonably large state space, getting negative examples from a random policy would cover very little of the state space (e.g. it’s unlikely to provide examples of the agent holding an object), so coverage cannot be expected to be very good.\n\n3) We tuned rho via grid-search, as with other hyperparameters. While each instruction corresponds to a specific task, we used a global rho for the entire task set (e.g. GridLU-Relations). We do not have theoretical guarantees to offer, but empirical study shows that training is more stable for low values of rho (trading off longer training for better final results).\n\n4) When holding out 10% of the instructions, these were randomly chosen from the space of possible instructions.\n\nThank you for these questions. We will try to make sure the answers to them are clearer just from reading the paper, in our revised draft.\n", "We thank Reviewer 1 for their review and statement of support for the paper’s technical contributions. We hope, during this discussion period, to get a better understanding of your concerns and hopefully address them, making clarifications in the paper where needed.\n\nFirst, could you please clarify your first concern that “the proposed approach is a simple combination of A3C and the NMN architecture”? While this is an accurate portrayal of some of our key results, we stress that:\n\n1) we also report results for LSTM + FiLM-ConvNet architectures for both policy and reward model networks, to showcase performance within AGILE when the syntax of the language is not given (see “AGILE with Structure-Agnostic Models” in Section 3).\n\n2) use of A3C for our policy network is not an essential part of AGILE. AGILE is a general framework for jointly training policies and reward models, both conditioned on language instructions. Any RL method and network architecture can be used, since the only difference between “traditional” RL and AGILE is the source of the reward: in the former case, it comes from the environment, and in the latter, from the jointly learned reward model. 
As such, for both the baseline and the AGILE-based model, the fact that we used A3C is not important; all that is important is that we use the same RL algorithm and architecture for both the baseline and AGILE.\n\nWith this in mind, do you still believe this aspect of our evaluation is cause for concern?\n\nSecond, you suggest experiments which are more visually realistic with more complex relations. We agree that this is where this research should be going, and discuss such further work in section 5, along with the particular challenges we anticipate it will bring. We would be remiss not to point out two things, with regard to our current experiments:\n\n1) While simple, the environment and tasks are actually quite diverse and approached without the simplifying assumptions typically seen in grid worlds:\n a) There are over 1,000 instructions in GridLU-Relations, each specifying a different task with millions of different initial environment states to solve.\n b) The environment is observed, by both policy and reward model, at the pixel level, without predefined notions of what and where objects and their boundaries are, or “privileged features” indicating which predicates apply to which objects (or even which predicates exist).\n\n2) As we see in the left plot of Figure 3, the A3C baseline results show that, due to the two points made above, even when the reward is specified by the environment (for our baselines), this is quite a difficult multi-task RL problem for fairly modern agent architectures and RL algorithms. This is because it may take over a dozen steps to optimally obtain a goal state, the task is changing every episode, and reward is sparse. When you add to that the need to jointly infer the reward function from a limited number of expert examples, this constitutes a fairly significant machine learning problem.\n\nWhile we agree that there are no guarantees with regard to scalability to more realistic environments “out of the box”, we hope you will agree this makes a substantial contribution by showing that it is possible to obtain agents which align with expert notions of reward through the proxy of learned reward functions, from examples, and that it is reasonable to leave further developments which scale to more complex environments for further research.", "The paper presents an approach for simultaneously learning policies and reward functions for reaching goals that are described by an instruction providing spatial relations among objects. The proposed platform, called Adversarial Goal-Induced Learning from Examples (AGILE), is composed of an off-the-shelf RL module like A3C and a separate module for learning a reward function, implemented using the NMN paradigm. The RL module is trained using the reward function learned by the reward module. The reward module is trained to map a given <instruction, state> pair into a score between 0 and 1 depending on how well the provided state satisfies the instruction. The returned score is used as a reward function. The training of the reward function is performed by using a dataset of positive examples, and using the states visited by the agent while it's learning as negative examples. 
To account for the fact that the agent becomes better over time and its visited states can no longer be used as negative examples, the authors proposed a heuristic where the states visited by the agent are not all used as negative examples, but only those that have the lowest scores.\nThe paper also presents an empirical evaluation of the proposed approach on a synthetic task where the agent is tasked with moving blocks of different shapes and colors to a desired final configuration. The AGILE approach was compared to the baseline A3C algorithm where a sparse binary reward signal was used only whenever the agent reaches the goal state. AGILE is also compared to A3C with an auxiliary task of reward prediction. \nThe paper is clearly written and technically strong. However, I have two issues with this paper: 1) the proposed approach is a simple combination of A3C and the NMN architecture, 2) the experiments are performed on simple synthetic tasks that make learning spatial relations fairly easy; I would love to see more real images, as has been demonstrated in prior works on learning spatial relations. It is not clear from these experiments if the proposed approach will scale up to higher-dimensional inputs. Moreover, there are several stability issues that can be caused by the proposed approach. For instance, the reward function is changing over time; how does that affect the learning rate? Also, instead of using the learned policy itself to generate negative examples and run into non-IID data, instabilities, and increasingly good negative examples, why not use a fixed dataset of negative examples generated with a random policy? It would be interesting to perform an experiment where you compare to the classical reward learning setup where you simply provide labeled positive and negative examples and classify them offline, then use the learned reward function online for RL. \nHow did you tune the hyper-parameter \rho (percentage of negative examples to discard) for specific tasks? Do you have any guarantees for this approach?\nIn the generalization experiments, it is mentioned that 10% of the instructions are held out. Are these 10% randomized?", "We thank Reviewer 2 for their kind words and detailed review, and for clearly stating what they believe is the limitation of the paper they would like to see addressed. With all due respect, we would be happy to discuss the relation of this work to the sparse reward problem, and to that of learning shaped rewards, but we believe this recommendation stems from a misunderstanding which we hope to clarify through this discussion period.\n\nSimply put, the primary use case of frameworks like AGILE specifically, and of reward modelling / inverse reinforcement learning in general, is where there is *no* reward function implemented (or even obtainable), rather than a sparse one. In such cases, we must learn a reward function from expert-provided information (trajectories, goal states). 
In contrast, from our understanding of reward shaping in the context of the reward sparsity problem, the ground truth reward is accessible in order to update the policy (or Q/V functions), and a shaped reward function is learned or defined to give a more dense “fake” reward (while preserving optimality guarantees) and make credit assignment easier.\n\nWe understand that the source of this confusion may come from the fact that in one of our experiments (GridLU-Relations), the ground truth reward *is* implemented and *is* sparse, but this is only for the purpose of automated evaluation and the training of baselines for comparison. When training agents with AGILE, or in the other experiments (GridLU-Arrangements), there is no reward provided from the environment.\n\nPerhaps we have misunderstood the point the reviewer is making, in which case we hope they can clarify how reward shaping and the sparse reward problem relate to the no-reward scenario in which our work and other work in inverse RL is situated. However, on the assumption that the reviewer has misunderstood the motivation and problem setting for our approach, we would be grateful if they could re-evaluate their assessment in this light, and/or perhaps let us know where we could have been clearer so as to not potentially confuse future readers along similar lines.\n" ]
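To make the mechanism debated in this thread concrete, here is a minimal sketch of the discriminator-as-reward idea and the false-negative filtering heuristic described above. It is our own hedged illustration rather than the authors' code: the `discriminator` callable, the 0.5 threshold, and the exact form of the rho-based filtering are assumptions reconstructed from the discussion.

```python
import numpy as np

def pseudo_reward(discriminator, instruction, state, threshold=0.5):
    """Reward-model reward used in place of an environment reward: 1 if the
    learned discriminator judges `state` to satisfy `instruction`, else 0.
    `discriminator` is any callable returning a probability in [0, 1]."""
    return 1.0 if discriminator(instruction, state) > threshold else 0.0

def keep_negative_indices(scores, rho=0.25):
    """False-negative filtering: keep only the (1 - rho) lowest-scoring agent
    states as negatives for the discriminator update; the top-rho fraction is
    presumed to contain unlabelled goal states and is discarded."""
    scores = np.asarray(scores, dtype=float)
    keep = int(np.ceil((1.0 - rho) * len(scores)))
    return np.argsort(scores)[:keep]  # ascending order: lowest scores first
```

By contrast, a potential-based shaping term in the sense of Ng et al. (1999) has the form F(s, a, s') = gamma * phi(s') - phi(s) and is added on top of an existing environment reward; in the no-reward setting above there is no environment reward to shape, which is the distinction the authors draw.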
[ 7, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, -1 ]
[ 4, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1 ]
[ "iclr_2019_H1xsSjC9Ym", "BkgNO00FCm", "iclr_2019_H1xsSjC9Ym", "r1emqB4Gp7", "BygfD88v3Q", "H1gboYgu2Q", "SJlxrIwc6m", "SklRX9tw67", "HkgsOyrm67", "rylCieSMp7", "HkextZ7MaQ", "H1llub7G6m", "Bkehr-QM67", "BygfD88v3Q", "BygfD88v3Q", "H1lXVonl6m", "H1lXVonl6m", "H1lXVonl6m", "H1lXVonl6m", "rygJ2iPkpQ", "rklL8IgSn7", "rklL8IgSn7", "iclr_2019_H1xsSjC9Ym", "H1gboYgu2Q" ]
iclr_2019_H1xwNhCcYm
Do Deep Generative Models Know What They Don't Know?
A neural network deployed in the wild may be asked to make predictions for inputs that were drawn from a different distribution than that of the training data. A plethora of work has demonstrated that it is easy to find or synthesize inputs for which a neural network is highly confident yet wrong. Generative models are widely viewed to be robust to such mistaken confidence as modeling the density of the input features can be used to detect novel, out-of-distribution inputs. In this paper we challenge this assumption. We find that the density learned by flow-based models, VAEs, and PixelCNNs cannot distinguish images of common objects such as dogs, trucks, and horses (i.e. CIFAR-10) from those of house numbers (i.e. SVHN), assigning a higher likelihood to the latter when the model is trained on the former. Moreover, we find evidence of this phenomenon when pairing several popular image data sets: FashionMNIST vs MNIST, CelebA vs SVHN, ImageNet vs CIFAR-10 / CIFAR-100 / SVHN. To investigate this curious behavior, we focus analysis on flow-based generative models in particular since they are trained and evaluated via the exact marginal likelihood. We find such behavior persists even when we restrict the flows to constant-volume transformations. These transformations admit some theoretical analysis, and we show that the difference in likelihoods can be explained by the location and variances of the data and the model curvature. Our results caution against using the density estimates from deep generative models to identify inputs similar to the training distribution until their behavior for out-of-distribution inputs is better understood.
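Since the change-of-variables likelihood is central to the abstract above and to the reviews that follow, a generic textbook sketch may help. This is our own illustration, not the paper's implementation, and it assumes a standard Gaussian prior over the latent z.

```python
import numpy as np

def flow_log_prob(z, log_det_jacobian):
    """log p(x) = log p_z(f(x)) + log|det df/dx| for an invertible flow f,
    with a standard Gaussian prior p_z. The second term is the 'volume term';
    for constant-volume (CV) Glow it is the same for every input x."""
    d = z.shape[-1]
    log_pz = -0.5 * (np.sum(z ** 2, axis=-1) + d * np.log(2.0 * np.pi))
    return log_pz + log_det_jacobian
```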
accepted-poster-papers
This paper makes the intriguing observation that a density model trained on CIFAR10 has higher likelihood on SVHN than CIFAR10, i.e., it assigns higher probability to inputs that are out of the training distribution. This phenomenon is also shown to occur for several other dataset pairs. This finding is surprising and interesting, and the exposition is generally clear. The authors provide empirical and theoretical analysis, although based on rather strong assumptions. Overall, there's consensus among the reviewers that the paper would make a valuable contribution to the proceedings, and should therefore be accepted for publication.
train
[ "HJl0-fqLeV", "BklXQj-R1V", "rkxi_sW8kN", "B1eudmtWkN", "HJlpx-we1E", "rJxBdmxxyV", "HJxL_ysjC7", "BJgniGai0m", "HylTaY-jAm", "SyedvzM9Am", "rygFbvpYRX", "Skgkfk6tRm", "ByxeVRhY0m", "BygHqa2F0X", "SygxWa3Y0m", "ByloTnnFRQ", "HkgLWfveT7", "r1eWc6qjnX", "BJe__C5d3Q", "HJx8SCQEn7", "BkgNd7oT3m", "BJl6am8z37", "ByefSPJRsm" ]
[ "author", "public", "author", "author", "official_reviewer", "author", "public", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public", "public" ]
[ "Thanks for your comment and question.\n\nPer the reviewers' requests for more evidence of the phenomenon on additional data sets, we wanted to bolster the 'motivating observations' section with experiments that better exhibit the curious out-of-distribution behavior. We found that the FashionMNIST-vs-MNIST pair illustrated the phenomenon better (i.e. larger BPD gap) than the NotMNIST-vs-MNIST pair and hence we replaced those results in the main text. We will also add the corresponding plots to Appendix B showing the asymmetric behavior for this pair as well (due to time constraints, we couldn't update all of the figures in the Appendix during the rebuttal period). This was the only reason for the switch. If you think the NotMNIST-vs-MNIST experiment is more interesting for some other reason, please do let us know your thoughts. \n\nWe wouldn't claim the asymmetry \"solves\" the issue since (i) even for models trained on SVHN, there could be other datasets that lead to higher likelihood and (ii) it does not immediately reveal a procedure to correct the CIFAR10-vs-SVHN (or similar) issue. The second-order analysis in Section 5 is still our best explanation for the asymmetric behavior. That is, the interaction between the model curvature and the data set variance leads to the phenomenon, and when the sign of the difference in variances is switched (which occurs when the train and OOD sets are switched), then we expect the phenomenon behavior to flip as well. ", "could you please explain why the notmnist results were removed in the latest draft? I found these did illustrate well the issue this paper is trying to get across, albeit the asymmetric behaviour as reported in the appendix -- also, while on this, I'm also surprised that the official reviewers didn't ask more about this. Could you provide some thoughts on why reversing the train/test roles of data sets solves the pathological high test-likelihood issue? Thanks!", "(Apologies for late response, we missed this earlier)\n\nThanks for pointing us to your work. We will incorporate it into our discussion of related work.\n\n\n", "Thank you for these suggestions, Reviewer #3. We probably won't be able to add them in the next week---as many of us authors are traveling to / attending NeurIPS---but we will add them to the next iteration of the draft.", "And thank you for revising the text. My main concerns are addressed, and the issue #5 is pretty minor given the other assumption made in the analysis.\n\nI am not a statistics expert, if one wants to test whether two univariate Gaussians have different means or not, a student-t test can be used. In this case of multivariate Gaussians, a brief search suggests using its generalization, \"Hotelling's two-sample t-squared statistics/test\". In the end, one wants to compare the distance (considering different dimensions have different correlations, the Mahalanobis distance is better) between the two means, and compare its scale to the covariance matrices of both Gaussians.\n\nA rougher test is see if one Gaussian's mean lies inside the confidence interval of the other Gaussian. See multivariate normal distribution's confidence interval.\n\nIn the case that the tests fail, one can see how much the test statistics are larger than e.g. the 95% quantile of the corresponding test distributions.", "Thank you for your responses and continuing the discussion, Reviewer #3. Our replies are below.\n\n2. 
\"All I am asking is that the paper warns its readers of this shortcoming at the beginning of the analysis.\":\n\nFair point. We will add a sentence at the beginning of Section 5 to make explicit that these expressions are approximations. \n\n\n4. We perfectly agree with your 'better description': \"one of the terms encourages the sensitivity....But we tried and it's not working.\" This is exactly what we wanted to convey in the draft, and we thought we clarified this point in our rebuttal by saying \"Our point is made in the context of volume term which is only one of the terms in the change-of-variable objective.\" We'll revise the draft to further emphasize our remarks pertain to the volume term only.\n\n\n5. \"...making it 150 which is huge (actual value is probably smaller)\"\n\nThe difference is certainly much smaller. It would be 150 only if the histograms were perfectly separated to each end of the x-axis in Figure 6 (a) of the original draft, which is not the case at all. What metric / plot would convince you? Some statistic of the dimension-wise means? ", "> 1. (Also AREA CHAIR NOTE): Another parallel submission to ICLR titled “Generative Ensembles for Robust Anomaly Detection” makes similar observations and seemed to suggest that ensembling can help counter the observed CIFAR/SVHN phenomena unlike what we see in Figure 10. \n\nThe parallel submission called Deep Anomaly Detection with Outlier Exposure also makes the observation that SVHN examples have higher likelihood than CIFAR-10 examples, and they also propose a way to correct this behavior. This is in Section 4.4 of https://openreview.net/pdf?id=HyxCxhRcY7\nThe results also suggest that SVHN results are one of the worst-cases for density estimators; density estimators are not as bad on many other datasets.", "Thank you for your response. The extra results are promising, which makes the paper quite stronger. Other questions are addressed well. Now I am mainly focused on these three issues:\n\n2. Second order analysis, but only on the *sign* of the *difference* of two pdfs\n\nI would think that since x is an image, it would be hard to approximate a distribution with a mixture of a thousand Gaussians, let alone one Gaussian. Even if you are taking the difference of two pdfs, and taking the sign of the difference, a Gaussian would give you a hypersphere, not a large amounts of irregular-shaped blobs scattered through the image space.\n\nIt IS indeed inevitable that when theoretically analyzing deep networks, we have to start somewhere easy, and log-quadratic pdfs are a valid starting point. All I am asking is that the paper warns its readers of this shortcoming at the beginning of the analysis.\n\n4. Loss actively increasing volume term unlike prior work\n\nIt does seem that way, but by the same argument I can claim that any loss function function has a L2 component in it: if your loss is f(theta), then you just write f(theta) = g(theta) + |theta|_2^2, where g(theta) = f(theta) - L2. My bold claim only makes sense if in fact all terms in g(theta) collectively does not do much on the L2. Unfortunately this is not the case in this paper. \n\nSpecifically in this paper, the latent density term is the happiest if you make f nearly degenerate (everything maps to a tiny proximity of argmax_z{ p(z) }, for example), making the volume term nearly zero. And the volume term is needed to change this into something meaningful. The two terms strike a balance. 
So it is not right to claim f(x) encourages sensitivity if one term encourages it and another discourages it. -- Especially considering the experiment fixing the volume term did not make SVHN and CIFAR closer. A better way to describe this story can be along the lines of \"one of the terms encourages the sensitivity (but the other discourages it), and that term makes SVHN likelihood pretty high, so one may think this is the issue. But we tried and it's not working\".\n\n5. Are SVHN and CIFAR centers close?\n\n*Individually*, each dimension of the means is quite close, but remember that two mean vectors are close only if *everything* is close. This is, I assume, a 32x32=1024-dimensional feature space, so you would amplify the estimated 0.15 by 1024, making it ~150, which is huge (the actual value is probably smaller). Since this is used for the difference of two distributions approximated by log-quadratics, one should see the drop in the approximated density function when you move as far as the mean of the other distribution. I am not convinced that it is small.\n\n", "No, the BPD never becomes lower for CIFAR-10 than for SVHN under any setting of the training time, optimization strategy, regularization type / strength, and model size that we tried. It depends on what you mean by ’not complex enough.’ We achieve sampling and BPD numbers on par with SOTA so we don’t think that the explanation is simply to use a bigger model. In fact, the Glow model trained by the authors of “Generative Ensembles for Robust Anomaly Detection” (https://openreview.net/forum?id=B1e8CsRctX) is as large as Kingma & Dhariwal's (2018), and they report the same phenomenon. If by ’not complex enough’ you mean that Glow could possibly be generally improved to better represent the training density, then sure, perhaps some innovation applied to Glow could make the model richer and fix the issue. We do not believe such an innovation is trivial though, given how persistent the phenomenon is across hyperparameters and when ensembling (Appendix F).", "\nFrom Figure 4 d), we see that, due to the inductive bias of the model, SVHN has lower bpd. \nIf the model were trained further, would the bpd of the training set ever become lower than SVHN test? \n\nIf yes, then doesn't this indicate that, due to early stopping, the models are underfitting the CIFAR test set? In other words, generalizing density estimation from the CIFAR training set to the CIFAR test set is challenging and thus the models underfit the CIFAR test set, resulting in the simpler dataset (SVHN) having higher likelihood due to the inductive bias of the model. So possibly, given more data or a better inductive bias, this problem would go away? \nIf no, then it seems that the model is not complex enough since it is unable to obtain a lower bpd on CIFAR train compared to SVHN. \n\nHave you tested this? What are your thoughts? ", "We have uploaded a revised draft in which we have attempted to incorporate the reviewers' suggestions. In particular, the new draft includes the following significant revisions:\n\n1. Additional Data Sets: In Section 3 we now report results for Glow trained and tested on the following data sets (in addition to CIFAR-10 vs SVHN): FashionMNIST (train) vs MNIST (test), CelebA (train) vs SVHN (test), ImageNet (train) vs CIFAR10/CIFAR100/SVHN (test). The phenomenon of interest (i.e. higher likelihood on out-of-distribution test data) is observed for all of these new pairs. 
Furthermore, we include the empirical means and variances of these data sets in the analysis in Section 5 and show that they agree with our original draft’s conclusions.\n\n2. Related Work: We discuss the Škvára et al. (2018) work (and other concurrent work) in Section 6, as suggested by Reviewer #3. \n\n3. Equation Spacing: We fix the spacing issue mentioned by Reviewer #1.\n\n4. Revised Plot of Empirical Means: Reviewer #3 had doubts about to what degree the data set means overlap. We believe this doubt was due to the range of the x-axis in what was formerly Figure 6 (a)---now Figure 5 (a). We have revised the figure to have range 0-255 (normalized to 0-1) and added the additional data sets. \n\n5. Removal of NotMNIST results: We have removed from the main text the NotMNIST vs MNIST experiment that was reported in the original draft. However, the Appendix (most crucially Figures 8 and 13) still contains NotMNIST results and has not yet been updated with the new data sets. We will fix this in the next draft. ", "Thanks again, Reviewer #3, for your thought-provoking critique. We respond to your other comments below. \n\n1. “In particular, Section 4 is a series of empirical analyses, based on one dataset pair….However, only 1 dataset pair is experimented -- there should be more to ensure the findings generalize, since Sections 3 and 4 rely completely on empirical analysis.” \n\nSee general responses #1 and #3.\n\n\n2. “It is good that Section 5 has some theoretical analysis. But I personally find it very disturbing to base it on a 2nd order approximation of a probability density function of images when modeling something as intricate as models that generate images. At least this limitation should be pointed out in the paper….Section 5 is based on a 2nd order expansion on the $log p(x)$ given by a deep network -- I shouldn't be the judge of this, but from a realistic perspective this does not mean much.”\n\nSee general response #2. We emphasize that we are not trying to approximate the density function, only approximate the difference and characterize its sign. Moreover, the special structure of CV-Glow makes these derivative-based approximations better behaved and more tractable than an expansion of a generic deep neural network.\n\n\n3. “Some parts of the paper feel long-winded and aimless….In general, the paper is clear and easy to understand given enough reading time, but feels at times long-winded. Section 2 background takes too much space. Section 3 too much redundancy -- it just explains that SVHN has a higher likelihood when trained on CIFAR, and a few variations of the same experiment.”\n\nWe will attempt to make the writing more concise. But we believe that most, if not all, of Section 2 is necessary in order to make the paper self-contained and accessible to someone who has never before seen invertible generative models. While we are fastidious in our experimental description in Section 3, we think it is necessary since this is the foundational section of the paper.\n\n\n4. “I don't think Glow necessarily is encouraged to increase sensitivity to perturbations. The bijection needs to map training images to a high-density region of the Gaussian, and that aspect would make the model think twice before making the volume term too large.”\n\nWe are not saying that the model will totally disregard the latent density and attempt to scale the input to very large or infinite values. 
Our point is made in the context of the volume term which is only one of the terms in the change-of-variable objective. The log volume term in the change-of-variable objective is maximizing the very quantity (the Jacobian’s diagonal terms) that the cited work on derivative-based regularization penalties has sought to minimize. The maximization of the derivatives in the objective directly implies increased sensitivity to perturbations.\n\n\n5. “Figure 6(a) [Figure 5(a) in revised draft] clearly suggests that the data means for SVHN and CIFAR are very different, instead of similar.”\n\nWe are not sure how you are drawing this conclusion; perhaps from the scale of the x-axis? The histogram in Figure 6 (a) (original draft) has an x-axis covering the interval [0.4, 0.55], meaning the maximal difference between a mean in *any pair of dimensions* is 0.15. Scaling back to pixel units, 0.15 * 255 = 38.25, meaning the maximum difference in means is 38.25 pixel-intensity units. While this is not a difference of zero, we don’t see how you could say this “clearly suggests” that the means are “very different.” In the latest draft, this figure---now Fig 5 (a)---has an x-axis that spans from 0-255. Hopefully the overlap in the means is now conspicuous. \n\n\n6. “However, there are papers empirically analyzing novelty detection using generative models -- should analyze or at least cite: Vít Škvára et al. Are generative deep models for novelty detection truly better? at first glance, their AUROC is never under 0.5, indicating that this phenomenon did not appear in their experiments although a lot of inlier-novelty pairs are tried.”\n\nThank you for pointing us to this work. We cite it in the revised draft. It looks like they test on UCI data sets of dimensionality less than 200, and therefore their results speak to a much different data regime than the one we are studying.\n\n\n7. “A part of the paper’s contribution (section 5 conclusion) seems to overlap with others’ work. The section concludes that if the second dataset has small variances, it will get higher likelihood. But this is too similar to the cited findings on page 6 (models assign high likelihood to constant images).”\n\nWhile we do also analyze constant images, we believe that our results for multiple data set pairs (FashionMNIST-MNIST, CIFAR10-SVHN, CelebA-SVHN, ImageNet-CIFAR10/CIFAR100/SVHN) and for multiple deep generative models (flow-based models, VAE, PixelCNN) are novel. Our conclusions are arrived at through focused experimentation and a novel analytical expression applied to CV-Glow. ", "Thanks again, Reviewer #2, for your insightful feedback. We respond to your other comments below. \n\n1. “Why investigate a component specific to just flow-based models (the volume term)? It seems reasonable to suspect that the phenomenon may be due to a common cause in all three model types.” \n\nSee general response #3.\n\n\n2. “For instance, the experiments seem to indicate that generalizing density estimation from CIFAR training set to CIFAR test set is likely challenging and thus the models underfit the true data distribution, resulting in the simpler dataset (SVHN) having higher likelihood.”\n\nWe do not believe our models are necessarily underfit. In fact, we found that Glow had a tendency to *overfit,* and that one must carefully set Glow’s l2 penalty and choose its scale parametrization (exp vs sigmoid, see Appendix D) in order to prevent it from doing so. 
We thought this overfitting to the training data could be a reason for the phenomenon and therefore we tuned our implementations to have reasonable generalization. \n\n\n3. “It would have been nice if this paper explored more than just MNIST vs NotMNIST and SVHN vs CIFAR10, so that the readers can gain a better feel for when generative models will be able to detect outliers. For instance, a scenario where the data statistics (pixel means and variances) are nearly equivalent for both datasets would be interesting.”\n\nSee general response #1 in regard to data sets and additional results. Thank you for the suggestion of looking at data sets with similar statistics. We do this, in a way, with our second order analysis and the ‘gray-ing’ experiment in Figure 5 (b) (formerly Figure 6 (b) in the original draft). Gray CIFAR-10 (blue dotted line) nearly overlaps with original SVHN (red solid line) in terms of their log p(x) evaluations. Figure 12 (formerly Figure 13) then shows the latent (empirical) distribution of the gray images, and we see that the gray CIFAR-10 latent variables nearly overlap with the SVHN latent variables. This is to be expected though, given the overlapping p(x) histograms, since the probability assigned by CV-Glow (in comparison to other inputs) is fully determined by the position in latent space.\n\n4. “The second order analysis is good but it seems to come down to just a measure of the empirical variances of the datasets.” \n\nSee general response #2.", "Thanks again, Reviewer #1, for your thoughtful comments. We respond to your other comments below. \n\n1. “It seems like one could detect most SVHN samples just by the virtue that their likelihoods are much higher than even the max threshold determined by the CIFAR-train histogram?”\n\nThis is an interesting idea, but we are not sure it is applicable. If one looks closely at Figure 2 (b), there are still blue and black histogram bars (denoting CIFAR-10 train and test instances) covering the entirety of SVHN’s support (red bars). \n\n\n2. “[The constant input]’s mean (=0 trivially) is clearly different from the means of the CIFAR-10 images (Figure 6a) so the second order analysis of Section 5 doesn’t seem applicable.”\n\nSee general response #2.\n\n\n3. “How much of this phenomenon do you think is characteristic for images specifically? Would be interesting to test anomaly detection using deep generative models trained on modalities other than images.”\n\nWe have not tested non-image data, since images are the primary focus of work on generative models, but this is an interesting area for future work. \n\n\n4. “Samples from a CIFAR model look nothing like SVHN. This seems to call the validity of the anomalous likelihoods into question. Curious what the authors have to say about this.”\n\nThis is a very good point. See our response to Shengyang Sun’s comment below. We think this phenomenon has to do with concentration of measure and typical sets, but we do not yet have a rigorous explanation. \n\n\n5. “There seems to be some space crunching going on via Latex margin and spacing hacks that the authors should ideally avoid :)”\n\nWe have fixed the spacing in the latest draft :)", "3. Purpose / Direction of Section 4 [R2, R3]: R2 asks “Why investigate a component specific to just flow-based models (the volume term)? 
It seems reasonable to suspect that the phenomenon may be due to a common cause in all three model types.” While the phenomenon is common to multiple deep generative model classes, as Figure 3 shows, we found it very hard to analyze all three models simultaneously, on equal footing, due to their different structures and inference requirements. For instance, how can we compare VAEs and PixelCNNs while controlling for the former’s approximate inference requirements? How do we know any problems with densities / outlier detection aren’t due to a sub-optimal inference model or the variational approximation? We thought we would make more headway by restricting the analysis to invertible models since they (i) admit exact likelihood calculations and (ii) have nice analytical properties stemming from the bijection constraint. Having made this decision, we then thought the next natural step was to look at both terms in the change-of-variables objective---the density under p(z) and the volume term---to see if one of these in particular was the cause. After seeing Figure 4 (c, d) (Figure 4 (a, b) in revised draft), we thought that the volume term was the culprit, which then led to the examination of constant-volume Glow (CV-Glow) (i.e. ‘constant volume’ across all inputs) as described on page 6. While the volume term was a bit of a red herring, we thought the progression from {VAE, PixelCNN, NVP-Glow} → {NVP-Glow} → {CV-Glow} was a logical way to further examine the problem for an increasingly tractable model class. \n\nRelatedly, R3 writes of Section 4: “Section 4 is a series of empirical analyses, based on one dataset pair….However, only 1 dataset pair is experimented -- there should be more to ensure the findings generalize, since Sections 3 and 4 rely completely on empirical analysis….Section 4 seems to lack a high-level idea of what it wants to prove -- the hypothesis around the volume term is dismissed shortly after, and it ultimately proves that we do not know what is the reason behind the high SVHN likelihood, making it look like a distracting side-experiment.” The purpose of focusing on just CIFAR-10 vs SVHN in Section 4 is to drill down and isolate why the phenomenon is happening in this one particular case. We think this is an appropriate approach, as we didn’t want to introduce too many experimental variables, as explained above. Furthermore, the presence of this phenomenon for SVHN vs CIFAR-10 alone warrants investigation since those data sets are extremely popular in the ML community. Yet, we have since added additional data sets (see general response #1) and hope the reviewer is now satisfied with this additional evidence of the phenomenon's prevalence.", "Thank you, reviewers, for your fair and helpful comments. We’ve provided a general response below that addresses concerns common to multiple reviewers. We’ll also respond to reviewers individually regarding issues particular to their review.\n\n1. Limited Number of Data Sets [R2, R3]: We have now added additional results to Section 3 (Figures 1 and 2) showing that the phenomenon (higher likelihood on non-train data) occurs for FashionMNIST (train) vs MNIST (test), CelebA (train) vs SVHN (test), ImageNet (train) vs CIFAR10/CIFAR100/SVHN (test). Furthermore, we have included these data sets into our plot of the empirical means and variances (Section 5), showing that our second-order analysis and ‘sitting inside of’-conclusion agrees with these additional observations. \n\n2. 
Accuracy / Generality of Second-Order Analysis [R1, R2, R3]: All reviewers bring up questions about the second-order analysis. Starting with R1, they question how Equation 5 applies / can be interpreted for constant images. To slightly correct R1’s statement, the constant image with high likelihood under the SVHN-trained model is x=128. Normalizing by the number of pixel values, i.e. 128/256=0.5, places this constant image almost in the exact center of the means plot in Figure 6 (a)---thus, the second-order analysis does apply. Then turning to Equation 5 and plugging in the variance Var[\delta(128)]=0, we have:\n\nE_q [log p(x)] - E_p* [log p(x)] \approx ½ * (negative number for CV-Glow) * (0 - Sigma_p*) >= 0.\n\nHence the second-order analysis still holds for the delta function located at 128 and agrees with the empirical result. We will add this derivation to the appendix. \n\nMoving on to R2, they state that the second-order analysis reduces to “just a measure of the empirical variances of the datasets.” This is true and was done so purposefully. CV-Glow is the only generative model that we know of that (i) has high capacity and (ii) is amenable to the second-order analysis. For all other models mentioned (VAE, PixelCNN, NVP-Glow), the second-order equation depends on the second derivatives of the neural network w.r.t. its input. It’s hard, if not impossible, to say anything general about how these second derivatives behave across the input space, let alone across re-fittings of the model. CV-Glow uniquely has second derivatives that simplify to a function of (i) the log-convexity of the latent distribution and (ii) the square of the 1x1 convolutional kernel’s parameters. Since both of these terms have a constant sign, the interesting part of the equation does indeed boil down to “a measure of the empirical variances of the datasets.” The complications introduced by the model have been taken out and what’s left is a function of the data statistics, which does allow for some general conclusions. We will try to clarify this reasoning / motivation in the paper, as space permits. Furthermore, our second-order analysis led us to, and agrees with, the additional experiments (see general response #1) and the gray-ing attack (Figure 5 (b), formerly Figure 6 (b) in the original draft); we see this as evidence of its validity.\n\nLastly, we address R3’s comments that they “find it very disturbing to base [analysis] on a 2nd order approximation of a probability density function.” We agree that trying to approximate a neural-network-based density with only a second-order representation is a tall order. But this is not precisely what we are doing. Rather, we are approximating *the difference* in density functions, and therefore we only care about *the sign* of the expression. We believe the second-order expression is a useful representation for this. Moreover, if we assume the data distributions have no cross-moments, then from Equation 11 we notice that the diagonal derivatives are zero for second-order and beyond, thus making the second-order expansion exact. For these two reasons, we don’t believe our approximation is “disturbing.” And since we are working with deep generative models, any analytical statements will require rather strong assumptions. ", "Thank you for your comment, Shengyang. This is a good point and something we were a bit puzzled by as well. Our current hypothesis is that the SVHN samples do not fall within the model’s typical set. 
To elaborate, in high dimensions samples at or very near to the mode are unlikely. See the high-dimensional Gaussian example discussed here: https://www.inference.vc/high-dimensional-gaussian-distributions-are-soap-bubble/ While you are correct in that the variances in data space are not drastically different, the variances of each data set’s latent variables (Figure 12, top column, middle) are well separated, with SVHN’s variance being much smaller. Thus the distribution in latent space may be a better way to characterize the model’s typical set, as samples are first drawn in latent space and then passed to the inverse function. \n", "I really enjoyed reading the paper! The exposition is clear with interesting observations, and most importantly, the authors walk the extra mile in doing a theoretical analysis of the observed phenomena.\n\nQuestions for the authors:\n1. (Also AREA CHAIR NOTE): Another parallel submission to ICLR titled “Generative Ensembles for Robust Anomaly Detection” makes similar observations and seemed to suggest that ensembling can help counter the observed CIFAR/SVHN phenomena unlike what we see in Figure 10. Their criterion also accounts for the variance in model log-likelihoods and is hence slightly different.\n2. Even though Figure 2b shows that SVHN test likelihoods are higher than CIFAR test likelihoods, the overlap in the histograms of CIFAR-train and CIFAR-test is much higher than the overlap in CIFAR-train and SVHN-test. If we define both maximum and minimum thresholds based on the CIFAR-train histogram, it seems like one could detect most SVHN samples just by the virtue that their likelihoods are much higher than even the max threshold determined by the CIFAR-train histogram?\n3. Why does the constant image (all zeros) in Figure 9 (appendix) have such a high likelihood? Its mean (=0 trivially) is clearly different from the means of the CIFAR-10 images (Figure 6a) so the second order analysis of Section 5 doesn’t seem applicable.\n4. How much of this phenomenon do you think is characteristic for images specifically? Would be interesting to test anomaly detection using deep generative models trained on modalities other than images.\n5. One of the anonymous comments on OpenReview is very interesting: samples from a CIFAR model look nothing like SVHN. This seems to call the validity of the anomalous likelihoods into question. Curious what the authors have to say about this.\n\nMinor nitpick: There seems to be some space crunching going on via Latex margin and spacing hacks that the authors should ideally avoid :)", "\nThis paper displays an occurrence of density models assigning higher likelihood to out-of-distribution inputs compared to the training distribution. Specifically, density models trained on CIFAR10 have higher likelihood on SVHN than CIFAR10. This is an interesting observation because the prevailing assumption is that density models can distinguish inliers from outliers. However, this phenomenon is not encountered when comparing MNIST and NotMNIST. The SVHN/CIFAR10 phenomenon has also been shown in concurrent work [1].\n\nGiven that you observed that SVHN has higher likelihood on all three model types (PixelCNN, VAE, Glow), why investigate a component specific to just flow-based models (the volume term)? It seems reasonable to suspect that the phenomenon may be due to a common cause in all three model types. 
For instance, the experiments seem to indicate that generalizing density estimation from CIFAR training set to CIFAR test set is likely challenging and thus the models underfit the true data distribution, resulting in the simpler dataset (SVHN) having higher likelihood. \n\nGiven the title of the paper, it would have been nice if this paper explored more than just MNIST vs NotMNIST and SVHN vs CIFAR10, so that the readers can gain a better feel for when generative models will be able to detect outliers. For instance, a scenario where the data statistics (pixel means and variances) are nearly equivalent for both datasets would be interesting. The second order analysis is good but it seems to come down to just a measure of the empirical variances of the datasets. \n\nThis paper is well written. I think the presentation of this density modelling shortcoming is a good contribution but leaves a bit to be desired. \n\n[1] Choi, H. and Jang, E. Generative Ensembles for Robust Anomaly Detection. https://arxiv.org/abs/1810.01392\n\n\nPros:\n- Interesting observation of a density modelling shortcoming \n- Clear presentation\n\nCons:\n- Lack of a strong explanation for the results or a solution to the problem \n- Lack of an extensive exploration of datasets\n", "Pros:\n- The finding that SVHN has larger likelihood than CIFAR according to networks is interesting. \n- The empirical and theoretical analyses are clear, seem thorough, and make sense.\n- Section 5 can provide some insight when the model is too rigid and too log-concave (e.g. Gaussian).\nCons:\n- The premises of the analyses are not very convincing, limiting the significance of the paper.\n- In particular, Section 4 is a series of empirical analyses, based on one dataset pair. In 3/4 of the pairs the authors tried, this phenomenon is not there. Whether the findings generalize to other situations where the phenomenon appears is uncertain. \n- It is good that Section 5 has some theoretical analysis. But I personally find it very disturbing to base it on a 2nd order approximation of a probability density function of images when modeling something as intricate as models that generate images. At least this limitation should be pointed out in the paper.\n- Some parts of the paper feel long-winded and aimless.\n\n[Quality]\nSee above pros and cons.\nA few less important disagreements I have with the paper:\n- I don't think Glow necessarily is encouraged to increase sensitivity to perturbations. The bijection needs to map training images to a high-density region of the Gaussian, and that aspect would make the model think twice before making the volume term too large.\n- Figure 6(a) clearly suggests that the data means for SVHN and CIFAR are very different, instead of similar.\n\n[Clarity]\nIn general, the paper is clear and easy to understand given enough reading time, but feels at times long-winded.\nSection 2 background takes too much space.\nSection 3 too much redundancy -- it just explains that SVHN has a higher likelihood when trained on CIFAR, and a few variations of the same experiment.\nSection 4 seems to lack a high-level idea of what it wants to prove -- the hypothesis around the volume term is dismissed shortly after, and it ultimately proves that we do not know what is the reason behind the high SVHN likelihood, making it look like a distracting side-experiment.\nA few editorial issues:\n- On page 4 footnote 2, as far as I know the paper did not define BPD.\n- There are two lines of text between Fig. 4 and Fig. 
5, which is confusing.\n\n[Originality]\nI am not an expert in this specific field (analyzing generative models), but I believe this analysis is novel.\nHowever, there are papers empirically analyzing novelty detection using generative models -- should analyze or at least cite:\n Vít Škvára et al. Are generative deep models for novelty detection truly better? \n ^ at first glance, their AUROC is never under 0.5, indicating that this phenomenon did not appear in their experiments although a lot of inlier-novelty pairs are tried.\nA part of the paper's contribution (section 5 conclusion) seems to overlap with others' work. The section concludes that if the second dataset has small variances, it will get higher likelihood. But this is too similar to the cited findings on page 6 (models assign high likelihood to constant images).\n\n[Significance] \nThe paper has a very interesting finding; pointing out negative results and analyzing them in depth should benefit the community greatly.\nHowever, only 1 dataset pair is experimented -- there should be more to ensure the findings generalize, since Sections 3 and 4 rely completely on empirical analysis. According to the conclusions of the paper, such dataset pairs should be easy to find -- just find a dataset that \"lies within\" another. Did you try e.g. CIFAR-100 train and CIFAR-10 test?\nSection 5 is based on a 2nd order expansion on the $log p(x)$ given by a deep network -- I shouldn't be the judge of this, but from a realistic perspective this does not mean much.\n", "Thanks for your questions, comments, and compliments. As for considering other divergences / discrepancies, indeed using these for either parameter estimation or evaluation could lead to different results. It is an area of future work. Given the prevalence of fitting models via maximum likelihood (KLD[p_empirical || p_model]), we thought reporting the result for just this divergence a worthy contribution. \n\nAs for your second question, we're not certain we completely understand your point. Can you clarify a bit more, please?", "Thank you for this interesting work. \n\nIt is astonishing that a well-trained CIFAR10 model assigns larger log-likelihood to the SVHN dataset. \n\nWhat confuses me is why samples from such models won't generate SVHN-like images. According to your derivation, the SVHN variances are only marginally smaller than the CIFAR10 variances, therefore it is probably not because SVHN-like images live in a much smaller subspace that is unlikely to be sampled from. ", "Thanks very much for the excellent work. It is very interesting to see the distribution from this perspective. I took a look at the paper Theis2016; it seems that besides BPD, the KLD, MMD, and JSD are considered. Is it possible that CIFAR10 and SVHN can be distinguished based on these three measurements?\n\nThis also reminds me of the domain shift problem, which aims to align p(x,y). Can I understand it this way: although in data space CIFAR and SVHN are similar (in terms of the BPD number), at the semantic level (y) there is still a large gap between the two?\n\nThanks again for the excellent work~~" ]
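Two quantities recur throughout the exchange above: the per-dimension means and variances of the datasets (for the second-order sign argument and the "are the centers close?" debate) and the typical-set behavior of the latent prior (for the question of why SVHN-like samples never appear). A hedged NumPy sketch of both follows; it is our own illustration, not the authors' code, with `x_q` and `x_p` assumed to be (n, d) arrays of flattened images scaled to [0, 1] and `c` standing in for the negative model-dependent constant of CV-Glow.

```python
import numpy as np

def second_order_gap_sign(x_q, x_p, c=-1.0):
    """Sign of E_q[log p(x)] - E_p[log p(x)] ~= 0.5 * c * sum(var_q - var_p);
    c < 0 for CV-Glow, so the lower-variance data gets the higher likelihood.
    For a constant image, var_q = 0 and the gap is >= 0, as in the rebuttal."""
    return np.sign(0.5 * c * np.sum(x_q.var(axis=0) - x_p.var(axis=0)))

def mean_distance(x_q, x_p, eps=1e-6):
    """Rough diagonal-covariance stand-in for the Mahalanobis distance between
    the two empirical means (cf. the Hotelling-style test suggested above)."""
    pooled = 0.5 * (x_q.var(axis=0) + x_p.var(axis=0)) + eps
    diff = x_q.mean(axis=0) - x_p.mean(axis=0)
    return float(np.sqrt(np.sum(diff ** 2 / pooled)))

# Typical-set check: draws from a d-dimensional standard Gaussian concentrate
# on a shell of radius ~sqrt(d), so the high-density mode is never sampled.
d = 3 * 32 * 32
norms = np.linalg.norm(np.random.default_rng(0).standard_normal((10000, d)), axis=1)
print(np.sqrt(d), norms.mean(), norms.std())  # mean ~55.4, std well under 1
```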
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 7, -1, -1, -1 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, -1, -1, -1 ]
[ "BklXQj-R1V", "rygFbvpYRX", "HJxL_ysjC7", "HJlpx-we1E", "rJxBdmxxyV", "BJgniGai0m", "r1eWc6qjnX", "Skgkfk6tRm", "SyedvzM9Am", "ByxeVRhY0m", "iclr_2019_H1xwNhCcYm", "HJx8SCQEn7", "BJe__C5d3Q", "r1eWc6qjnX", "ByloTnnFRQ", "iclr_2019_H1xwNhCcYm", "BJl6am8z37", "iclr_2019_H1xwNhCcYm", "iclr_2019_H1xwNhCcYm", "iclr_2019_H1xwNhCcYm", "ByefSPJRsm", "iclr_2019_H1xwNhCcYm", "iclr_2019_H1xwNhCcYm" ]
iclr_2019_H1z-PsR5KX
Identifying and Controlling Important Neurons in Neural Machine Translation
Neural machine translation (NMT) models learn representations containing substantial linguistic information. However, it is not clear if such information is fully distributed or if some of it can be attributed to individual neurons. We develop unsupervised methods for discovering important neurons in NMT models. Our methods rely on the intuition that different models learn similar properties, and do not require any costly external supervision. We show experimentally that translation quality depends on the discovered neurons, and find that many of them capture common linguistic phenomena. Finally, we show how to control NMT translations in predictable ways, by modifying activations of individual neurons.
accepted-poster-papers
Strong points: -- Interesting, fairly systematic and novel analyses of recurrent NMT models, revealing individual neurons responsible for specific types of information (e.g., verb tense or gender) -- Interesting experiments showing how these neurons can be used to manipulate translations in specific ways (e.g., specifying the gender for a pronoun when the source sentence does not reveal it) -- The paper is well written Weak points -- Nothing serious (e.g., it may be interesting to test across multiple runs how stable these findings are). There is a consensus among the reviewers that this is a strong paper and should be accepted.
train
[ "BygWvPj7CQ", "rkeX0uhx0Q", "ByeWhune0X", "BklYGw3lRQ", "B1gWh83lCQ", "Byl0raGq2X", "rklAXqtrhX", "r1l4JK8ijQ" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We have uploaded a revised version incorporating the reviewers's helpful comments. This is the list of changes:\n\n1. Added appendix A.4 with results with different model checkpoints and a footnote referring to the appendix in Section 4. \n2. Added to the Conclusion a potential future work on controlling translations by modifying the decoder. \n3. Mentioned the tense neuron’s mistake on \"Spreads\" in Section 5.3, tense paragraph. \n4. Slightly modified sections 5.2 and 5.3 to emphasize that position is captured by top neurons in some of the unsupervised methods, but not in all. \n5. Clarified that the supervised experiments are used for verifying interpretations, rather than constraining the analysis to a given property that may not be known a priori (Section 3.2, supervised verification). \n6. Added more details on the GMM setup in Section 3.2, supervised verification. \n7. Improved the motivation for using translation control for mitigating model bias in the beginning of Section 6. \n8. Fixed a number of typos and formatting issues. \n", "\n7. “what insight was gained from the SVCCA analyses” and “the pros and cons of each of the methods” \nThank you for the feedback. We will improve the discussion in the next revision accordingly. \nThe different methods aim to analyze representations at different levels of localism/distributivity. In particular, while MaxCorr and MinCorr target pairwise neuron correlations, LinReg searches for information that’s distributed in the whole representation in one network, but localized in a different network. SVCCA tries to find a middle ground by projecting representations to a lower-dimensional space and then computing correlations. \nSome of the insights are discussed in Section 5.2, where we observe that the more distributed methods (LinReg and SVCCA) give much importance to identifying specific tokens. This means that information about token identity is distributed among many neurons. The fact that MinCorr cares a lot about position suggests that this kind of information is captured in multiple models in a similar way. Table 10 in the appendix shows that in addition to detecting specific tokens, SVCCA directions may also capture some classes like adjectives and verbs.", "Thank you for your useful feedback and helpful comments. We are glad that you found our methods promising with good intuitions. We would like to clarify a few points based on your comments.\n\n1. “Most of the activations of the important neurons can be explained using sentence position” \nIt is true that many top ranked neurons capture sentence position, especially in the MinCorr method (Table 1). However, other methods reveal important neurons that do not capture position: only 3/10 top ranked neurons by MaxCorr and LinReg are position neurons, and only one top ranked SVCCA direction captures position. These top neurons often capture linguistic properties like morphological features, punctuations, word classes, etc., as analyzed in Section 5.3 and in appendices A and C. We will make this point clearer in the next revision. \n\n2. “The methods may be able to address the question of *how* localist the representations are (though no numerical measure of localism is proposed)”\nWe also believe that our methods can shed light on the question of how localist the representations are. We would love to try out any suggestions for numerical measures of localism. 
\n\n3. “why the neurons that track particular properties couldn't be identified using a supervised classifier to begin with” and “most of the insight in the paper seems to be derived from supervised experiments”\nThe advantage of unsupervised methods is that they are not constrained by available supervision in the form of linguistic annotations. Our working process involved visualizations of the important neurons, which led to forming hypotheses about their function. In order to validate our interpretations, we designed supervised experiments whenever we could. While this may give the impression that most insights are derived from the supervised experiments, in practice it would have been difficult to choose specific properties to target without the unsupervised methods + visualizations. \nIn addition, we have found properties that do not correspond to plausible a priori hypotheses. The neuron detecting item numbers, which you mention, is one such case. We also found a neuron that activates positively on the first word in a noun phrase and negatively in the rest of the phrase (Figure 5). Other properties that may not be expected to emerge include year and month neurons (Figure 6), a neuron that activates on verbs and their surrounding words (Table 7), and neurons that capture both punctuation and conjunctions (Tables 6+7; note that this would not be captured by standard part-of-speech tag sets). \nWe will improve our presentation according to your feedback. \n\n4. “are there neurons that track syntactic dependencies, for example?” \nIn this work we focused mainly on word-level properties. However, we did investigate parentheses, which require longer-range context, and also noun phrases. Still, we agree that it would be interesting to consider more compositional properties such as dependencies or phrase structure. \n\n5. “how the GMMs … were set up” \nThe GMMs were set up to predict a property from a neuron activation. The number of mixture components was chosen as the number of different classes in the prediction task. For instance, for finding parenthesis neurons we used two classes (inside or outside of parentheses/quotes/brackets). We estimated the parameters of the GMMs using the mean and variance of the neuron activation conditioned on each class. We tested the resulting model to see how well it predicts the tag from the neuron activation by computing the posterior probability of each class given an activation using Bayes' rule, and taking the argmax. We will provide more details on the GMM in the next revision. \n\n6. “argument that this technique could be used to reduce gender bias in MT” \nOur reasoning in the control experiments is that some information about sensitive attributes like gender may be available from other sources, such as metadata. If we know that an entity has a specific gender (say, feminine), but that gender is unmarked in the English language (as in the word “doctor”), then we may encourage the system to output a translation with the correct gender by modifying gender neurons. This is a kind of soft constraint that we may add to the system. We will improve the motivation in the next revision. ", "Thank you for your very positive review. We are glad that you found the choice of methods justified, and the experiments and analysis thorough and well executed. ", "Thank you for your positive and constructive feedback. We are happy that you find our analysis interesting and valuable for understanding the behavior of neural MT models. We answer specific comments below. \n\n1. 
“Controlling neurons in the decoder”\nWe are also interested in expanding the controlling experiments, both to other properties and to the decoder side, and intend to do so in the future. We will mention this as potential future work. \n\n2. “use different checkpoints from a single model” \nThank you for bringing this point up. We have compared all checkpoints from a couple of our models and found highly correlated neurons, especially when correlating later checkpoints. This makes sense as the model converges to a solution. We verified that these top correlated neurons are important for the model performance via an erasure experiment similar to Section 5.1. \nMoreover, the top ranked neurons when comparing the last checkpoint to earlier ones are very similar to the ones found when comparing this last checkpoint to different models, including models trained with different target languages. In particular, for the English-Spanish model, we found that 8 out of 10 and 34 out of 50 top ranked neurons are the same in these two rankings. For the English-Arabic model, we found a similar behavior (7 out of 10 and 33 out of 50 top ranked neurons are the same). This indicates that our method may be applied to different checkpoints as well. We will add these results in the next revision. \n\n3. “The findings in this paper do not lead to immediate translation performance improvements” \nThis is correct. Beyond the scientific value in illuminating how NMT models work, we would also like to mention several potential ideas for improving the systems. First, our experiments for controlling specific characteristics may help mitigate model bias by identifying neurons responsible for sensitive attributes such as gender or politeness. For instance, we might have external knowledge of the gender of a person mentioned by an ambiguous profession or title in the source language (e.g., doctor), and may want to encourage the translation to be of the correct gender in the target language (as the Turkish example in Section 6 illustrates). More generally, by identifying neurons that are responsible for common mistakes we may be able to improve the system through similar control experiments. Other directions for improving NMT systems include model compression by removing unimportant neurons and guiding neural architecture search by tracking important neurons. \n\n4. “Table 4a, two results” \nThank you. We have fixed this typo.\n\n5. Tense neuron activating on “Spreads”\nIndeed, this is a “mistake” of the neuron. We will mention this.\n\n6. \"Our supervised methods\" → \"Our unsupervised methods\"\nFixed. Thank you. \n\n7. “Could SVCCA directions be manipulated”\nThis is difficult to do, as it requires changing all the dimensions, rather than a small number of dimensions, and we’ve noticed that modifying more dimensions leads to performance degradation. Moreover, SVCCA directions mostly detect specific tokens rather than a linguistic property (see Table 1), so controlling is not very intuitive in this case. \n\n8. Missing or misplaced parentheses \nWe have fixed those. Thank you. \n", "This paper presents unsupervised approaches to discover important neurons in\nneural machine translation systems. 
Some linguistic properties controlled by the\ndiscovered neurons are discussed and analyzed.\n\nStrengths:\n\nThe paper is well-written and provides valuable information to understand the\nbehaviour of neural machine translation models.\n\nThe ability to control characteristics (such as gender) without training\nspecialized models is promising, even if the results are not good enough for\nimmediate use. It would be interesting to see whether controlling neurons\nin the decoder would be more effective.\n\nWeaknesses:\n\nMultiple NMT systems are necessary to discover important neurons. The authors\nmention that it would be possible to use different checkpoints from a single\nmodel, but don't evaluate how well this would work.\n\nThe findings in this paper do not lead to immediate translation performance\nimprovements.\n\nQuestions and other remarks:\n\nIn Table 4a, why are there 2 results for \"-0.25, -0.125, 0\"?\n\nIn section 4.3 (Tense), it may be worthwhile to mention that the neuron is\nhighly activated on the word \"Spreads\", even if it acts as a noun in this\nspecific sentence.\n\nBottom of p. 6: \"Our supervised methods\" -> \"Our unsupervised methods\"\n\nTo control properties, could SVCCA directions or coefficients be manipulated?\n\nSome parentheses around citations are missing or misplaced.\n", "Strengths:\n- even though the methods for detecting important neurons are not novel (as also stated in the paper), their application to MT is novel\n- the presentation is very clear\n- the choice of methods is well argued and justified\n- the experiments are well executed and analysed\n- thorough and varied analysis of the experimental findings \n\nI recommend this paper for the best paper award.", "The authors propose a number of methods to identify individual important neurons in a machine translation system. The crucial assumption, drawn from the computer vision literature, is that important neurons are going to be correlated across related models (e.g. models that are trained on different subsets of the data). This hypothesis is validated to some extent: erasing the neurons that scored highly on these measures reduced BLEU score substantially. However, it turns out that most of the activation of the important neurons can be explained using sentence position. Supervised classification experiments on the important neurons revealed neurons that tracked properties such as the span of parentheses or word classes (e.g., auxiliary verbs, plural nouns, etc).\n\nStrengths:\n* The paper is very well written and provides solid intuitions for the methods proposed.\n* The methods seem promising, and the degree of localist representation is striking.\n* The methods may be able to address the question of *how* localist the representations are (though no numerical measure of localism is proposed).\n* There is a correlation between the neuron importance metrics proposed in the paper and the effect on BLEU score of erasing those neurons from the network (of course, it’s not clear what particular linguistic properties are affected by this erasure - the decrease BLEU may reflect inability to track specific word tokens more than any higher-level linguistic property).\n\nWeaknesses:\n* It wasn't clear to me why the neurons that track particular properties (e.g., being inside a parentheses) couldn't be identified using a supervised classifier to begin with, without first identifying \"important\" neurons using the unsupervised methods proposed in the paper. 
The unsupervised methods do show their strength in the more exploratory visualization-based analyses -- as the authors point out (bottom of p. 6), the neuron that activates on numbers but only at the beginning of the sentence does not correspond to a plausible a-priori hypothesis. Still, most of the insight in the paper seems to be derived from the supervised experiments.\n* The particular linguistic properties that are being investigated in the classification experiments are fairly limited. Are there neurons that track syntactic dependencies, for example?\n* I wasn't sure how the GMMs (Gaussian mixture models) for predicting linguistic properties from neuron activations were set up.\n* It's nice to see that individual neurons function as knobs that can change the gender or tense of the output (with varying accuracy). At the same time, I was unable to follow the authors' argument that this technique could be used to reduce gender bias in MT.\n* I wasn't sure what insight was gained from the SVCCA analyses -- this method seems to be a bit of a distraction given the general focus on localist vs. distributed representation. In general, I didn’t come away with an understanding of the pros and cons of each of the methods." ]
[ -1, -1, -1, -1, -1, 7, 10, 6 ]
[ -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "iclr_2019_H1z-PsR5KX", "r1l4JK8ijQ", "r1l4JK8ijQ", "rklAXqtrhX", "Byl0raGq2X", "iclr_2019_H1z-PsR5KX", "iclr_2019_H1z-PsR5KX", "iclr_2019_H1z-PsR5KX" ]
iclr_2019_H1zeHnA9KX
Representing Formal Languages: A Comparison Between Finite Automata and Recurrent Neural Networks
We investigate the internal representations that a recurrent neural network (RNN) uses while learning to recognize a regular formal language. Specifically, we train an RNN on positive and negative examples from a regular language, and ask if there is a simple decoding function that maps states of this RNN to states of the minimal deterministic finite automaton (MDFA) for the language. Our experiments show that such a decoding function indeed exists, and that it maps states of the RNN not to MDFA states, but to states of an {\em abstraction} obtained by clustering small sets of MDFA states into ``superstates''. A qualitative analysis reveals that the abstraction often has a simple interpretation. Overall, the results suggest a strong structural relationship between internal representations used by RNNs and finite automata, and explain the well-known ability of RNNs to recognize formal grammatical structure.
accepted-poster-papers
This paper presents experiments showing that a linear mapping exists between the hidden states of RNNs trained to recognise (rather than model) formal languages, in the hope of at least partially elucidating the sort of representations this class of network architectures learns. This is important and timely work, fitting into a research programme begun by CL Giles in '92. Despite its relatively low overall score, I am concurring with the assessment made by reviewer 1, whose expertise in the topic I am aware of and respect. But more importantly, I feel the review process has failed the authors here: reviewers 2 and 3 had as chief concern that there were issues with the clarity of some aspects of the paper. The authors made a substantial and bona fide attempt in their response to address the points of concern raised by these reviewers. This is precisely what the discussion period of ICLR is for, and one would expect that clarity issues can be successfully remedied during this period. I am disappointed to have seen little timely engagement from these reviewers, or willingness to explain why they stand by their assessment if not revisiting it. As far as I am concerned, the authors have done an appropriate job of addressing these concerns, and given reviewer 1's support for the paper, I am happy to add mine as well.
train
[ "BygbNmtukV", "B1gaaOqVk4", "BkeSqOcN14", "HJg-yN9V1N", "BkewlgpRAQ", "HylORAYn0Q", "ByxyXEWcA7", "H1lkffW5R7", "SygggMZ90X", "S1eeAxb9Am", "B1xVWwh3hm", "HJllpRIqn7", "Bkl2Tb__nX", "S1eW-odEn7" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "public" ]
[ "To summarize my understanding of the author's rebuttal, they're saying that the key result isn't that linear decoders achieve high accuracy in decoding the abstract DFA states, but is instead that the abstract DFAs that are recovered from the \"hierarchical clustering\" process bear some kind of resemblance to the original DFA. Three points about this\n\n1.) If this interpretability of the \"clusters\" is the real crux of the paper, instead of the decodeability referred to in the title, then the title and introduction of the paper really should reflect this.\n2.) I'm not sure what the integers and percentages inside the DFA state diagrams in figure 6 are (I asked about this in my original review but I didn't see an answer unfortunately). As a result, I don't know how the authors mean to interpret the dendrograms built on top of the state diagrams.\n3.) Without knowing exactly what interpretation the authors intend to draw from those dendrograms I don't want to be too categorical about this, but I will say that whatever the interpretation is, its seems very likely to be subject to the cherry-picking issue that R2 brought up. It seems to me like drawing any useful general conclusion from these two examples would be challenging.\n\nTo summarize, (1) the authors responses to my and R2s questions/criticisms suggest that main text of the paper obscures the basic logic of the work, and (2) that basic logic seems to rest entirely on the interpretation of just two examples. \n\nBoth of these points seem quite problematic, so at this time my score remains below the acceptance recommendation threshold.\n\n", "Thank you for pointing us in the direction of these recent works. Both of them encompass the general connection between RNNs and Automata in some way which we believe is a fruitful area of research relating to interpretable models. One extension of our work would be to utilize other RNN frameworks, such as Giles 2nd order RNN which we would hypothesis would likely encode an automata with more accuracy than vanilla RNNs because of it's original intention to do so. We believe that our work can be thought of a parallel to the work in [1]. The work in [2] seems equally important and relevant. Although our work is not focused on extracting Automata from RNNs but rather relating the underlying representations, we believe our work resonates with this paper as well. We will cite both of these papers in our related works section. ", "Reviewer 3, we have addressed many of your concerns in our response to reviewer 2 above. We would like to emphasize that there are 2 significant misunderstanding about the core of our logical conclusions of our paper. We have clarified them in our response to reviewer 2 above. We ask that reviewer 2, reviewer 3, and the area chair please consider these clarifications as we believe that they will significantly affect the evaluation of our work. ", "We believe there are two significant misunderstandings here. First, Reviewer 3 states “if we find the output classes the decoder is most often confused between, then merge them into one class, the decoder's performance increases -- trivially.” The word “trivially” is the problem here, as merging two classes that are easily confused by a highly trained classifier can actually be quite informative. Consider the example of a trained face recognition classifier that easily confuses identical twins. 
If we merge Twin1 and Twin2 into a single new superclass Twins = {Twin1, Twin2} then the resulting classifier will certainly perform better and for good reason: the twins are highly related and thus have similar looks. Iterating this kind of confusion-based merging is a valid form of hierarchical clustering (e.g. merging plants together, and then animals together, etc. to learn a taxonomy). In short, the increase in prediction accuracy after merging is “trivial”, but the interpretation for why an increase occurs is certainly not: finding classes that are easily confused is important information about the similarity metric learned by the classifier.\n\nThe second misunderstanding involves our earlier response to Reviewer 3 where we state “our paper’s intention was to not make a logical connection between RNNs and automata (…).” This statement has been taken out of context. The critical part of that sentence is in the “(...)”, namely “based on this observation...”. Without that context, it seems like we are negating the core conclusion of our paper -- that there is indeed a connection between the hidden state space of the RNN and that of the MDFA. However, that was not our intent. We were just trying to convey that our conclusion is not based on that particular observation (“It is true that in a classification problem, if you merge the most confused classes together, classification accuracy increases...”); instead, it is based on our experimental results, namely, that merging the most confusable MDFA states yields dendrograms that really tell us important information about how the RNN hidden states are organized. We show two of these dendrograms in the paper (EMAILS and DATES). We emphasize strongly that these are not in any way cherry-picked examples. As stated in our response to Reviewer 1 above: “Our intention behind showing the EMAILS and DATES regular expressions that were formed outside of the aforementioned framework was to show how a typical, easily interpretable recognition algorithm is encoded by the RNN. We didn’t want the reader to be distracted by the regular expression itself but rather shed light on the interpretation of the dendrograms in section 4.5.” In order to alleviate any concerns, we will include figures of the randomly sampled MDFAs and corresponding dendrograms in the Appendix in the final version of the paper.\n\nAs to recognition accuracy, 81% of the RNNs in the linear decoding experiments met the minimum language recognition test accuracy of 95%. If we reduce the threshold to 90%, the fraction increases to 89%. \n", "Reviewer 3, thank you for your review. Does the author response (above) address your major concern? If not, please take the remaining few days to follow up in this discussion. If you are in a position to reconsider your assessment, please do so, and if you stand by your score, please provide a short explanation as to where the rebuttal falls short.", "As to the definition of “low”, the fact that it’s lower than a random baseline doesn’t mean that it’s absolutely *low* (this term is not even well-defined). I am not sure if this is that important, but it seems to be a major claim of this paper which is repeated several times and feels at the very least inaccurate. More importantly, this relates to AnonReviewer3’s concern: there is a simpler explanation for these results which the authors are not addressing. 
I have seen the authors’ response to this concern and was not convinced: if (to cite the authors’ response), “our paper’s intention was to not make a logical connection between RNNs and automata (…)”, then significant parts of the paper need to be re-written. Based on the response, the contribution of this paper largely relies on two cherry-picked examples.\n\nAs to the response: “For a recognizer RNN to be included in the decoding experiments, we required a minimum classification test accuracy of 95%.”: what proportion of the cases meets this threshold?", "We thank the reviewer for a careful and thorough review of our paper. \n \nIt is true that in a classification problem, if you merge the most confused classes together, classification accuracy increases trivially. However, our paper's intention was to not make a logical connection between RNNs and automata based on this observation, but rather to show that on a per-example basis, the most confused states that are merged reveal geometric interpretations behind how the RNN encodes the MDFA. By analyzing the accuracy vs. coarseness curves (Figure 7) alongside the dendrograms (Figure 6) for two regular expressions that have a real-world interpretation, we gain a novel interpretation of the similarity between the internal representation of the RNN and the MDFA. We consistently find that MDFA states that are linearly inseparable by the decoder often refer to the same pattern in the original regular expression. Our abstraction method provides an interpretable relationship between these two states as evidenced by our dendrograms. We provide two dendrogram results specifically for regular expressions with clear meaning to showcase these patterns. We will provide more dendrograms in the final version to show how consistent the patterns are.\n\n-Why is the definition of the “accuracy” measurement \rho more complicated than expected at first glance? \nThe accuracy measure is a quantitative measure predicated on \delta, f(h_t) and f(h_{t+1}), because we need these mappings to capture the structural similarities between RNNs and the abstraction of the MDFA. The accuracy is an average of averages where we calculate the average over a dataset of strings D, with strings of varying lengths. For each individual string we are interested in the number of alphabet symbols for which the decoding f(.) respects the transition \delta in the MDFA, when transitioning from h_t to h_{t+1}.\n\nWe agree with all of the minor comments and clarity concerns that the reviewer has and will address them in the final version of our paper.\n", "-The regular expression in Figure 6 is incorrect.\nWe thank the reviewer for finding this error. We will replace it with the correct regular expression “[a-d]+@[a-d]+.[v-z]{2,3}” in the final version.\n \n-How come Figure 3a goes up to 1.1? Isn’t it bounded by 1?\nYou are correct that the decoding accuracy mean chart in Figure 3a is bounded by 1. The reason for the unbounded nature is that the error bars represent one standard deviation above and below the estimate of the mean accuracy, which doesn’t necessarily respect the bound as we modeled it as a Gaussian random variable. We agree with the reviewer that the top error bars should be bounded by 1 and will fix this in the final version by using a more appropriate representation such as interquartile ranges. \n\n \n-It is not clear how the shuffling of the characters is considered an independent distribution. 
The negative sampling procedure should appear in the main text.\nWe believe the reviewer is referring to the term “independent” used in the Appendix under the “Data Generation” section, which is unclear. We did not intend to evoke the statistical meaning, but rather to explain how the two sampling procedures are different. In the camera-ready version of the paper we will replace the word in the sentence with “much different” to clarify.", "We thank the reviewer for the in-depth questions and comments, and look forward to any follow-up questions or concerns.\n \n-The authors claim that the RNN states map to FSA states with *low* coarseness, but Figure 3b (which is never referred to in text…) shows that in most cases the ratio of coarseness is at least 1/3, and in some cases > 1/2.\nWe define coarseness to be “low” when the number of abstractions needed to reach 90% decoding accuracy, as in Figure 3b, is low relative to the number of abstractions needed to reach such a decoding accuracy when abstractions are formed randomly, as opposed to our greedy method of abstracting states. In Figure 4a, the area under each plotted curve will be higher if the decoder is able to reach higher accuracies with a smaller number of abstractions (“lower coarseness”). Following this logic, we have plotted the average area under the curve (AUC) for our strategy, along with the strategy of randomly abstracting states, in the appendix of our paper. The added benefit of our method over the random strategy can be seen in the increase in average AUC for each collection of MDFAs with M states. We show that the AUC is highest when employing our greedy strategy, indicating that the coarseness is indeed “low” with respect to other abstraction strategies.\n\n-What is the conceptual difference between the two accuracy definitions?\nDecoding accuracy and transitional accuracy correspond to two levels of abstraction in viewing the map \hat{f}. Decoding accuracy asks how well \hat{f} can map the RNN state to the abstracted NFA state, which is essentially asking a membership query, while preserving the MDFA transitions. Transitional accuracy asks if the mapping accurately preserves the transitions from state s_t to s_{t+1} on the given input a_t in the abstracted NFA. The decoding accuracy requires that the transitions of the MDFA are preserved by the mapping \hat{f}, while the transitional accuracy considers the transitions in the abstraction.\n \n-Which RNN was used? Which model? Which parameters? Which training regime?\nWe performed an extensive hyperparameter search, varying the number of hidden units and layers, mini-batch size, dropout rates, learning rates, and max number of training epochs. The best performing architecture -- one that is able to achieve high validation accuracies across the wide range of regular languages used in our framework -- is a 2-layer, 50-hidden-unit vanilla RNN, trained via SGD for 100 epochs with a mini-batch size of 30, dropout probability of 0.4, and learning rate of 0.0003. The model was optimized to predict a binary variable under a cross-entropy loss. We will include these details in the final paper.\n \n-How were the regular expressions sampled?\nWe randomly sample expressions using a probabilistic context-free grammar based on the specification in the dk.brics.automaton Java documentation (http://www.brics.dk/automaton/doc/dk/brics/automaton/RegExp.html). Two examples of the expressions sampled by our framework are shown in the appendix. 
Our intention behind showing the EMAILS and DATES regular expressions that were formed outside of the aforementioned framework was to show how a typical, easily interpretable recognition algorithm is encoded by the RNN. We didn’t want the reader to be distracted by the regular expression itself but rather shed light on the interpretation of the dendrograms in section 4.5.\n \nFor transparency and reproducibility, we will release the source code for our framework.\n \n-What is the basic accuracy of the RNN Recognizer?\nFor a recognizer RNN to be included in the decoding experiments, we required a minimum classification test accuracy of 95%. We will add this detail in the final version of the paper. \n\n\n", "We appreciate the reviewers' comments and suggestions. If the reviewer has any additional follow-up comments or questions, we welcome them.\n\n-Why are the testing accuracies not generally proportional to the complexity of the MDFA? The most complex MDFA of 14 nodes does not have the lowest testing accuracies.\nIn Figure 4, the testing accuracies are not proportional to the complexity of the MDFA due to our method of generating MDFAs in our experiments. Regular expressions are randomly generated by our pipeline and the resulting MDFA is created from the regular expression. We choose to sample in the space of regular expressions as opposed to the space of DFAs because sampling in regular expression space is more meaningful; that is, a valid regular expression that is generated is guaranteed to result in a DFA with desired behavior. If we were to sample in DFA space, it is possible that the resulting DFAs may have had unreachable states and other undesirable behavior. There is, however, no straightforward relationship in terms of complexity between MDFAs and their corresponding regular expressions, leading to the slight differences in proportionality seen in Figures 4 and 5.\n\n-Why not use a simple CFG or PCFG to generate training sequences?\nWe choose regular expressions to generate training sequences for their simplicity, as they allow us to interpret the hidden state of the RNN in terms of the clearly defined states that constitute a regular expression's corresponding DFA. There is a substantial amount of literature on the relationship between RNNs and DFAs, but given the little literature surrounding complex regular expressions and DFAs, we want to rigorously explore this space before moving to grammars further up the Chomsky Hierarchy, such as CFGs. Using a CFG or PCFG is a logical next step for our work and is indeed a motivating example.\n\n-Is it possible to generate a regular expression randomly to feed into the RNN?\nYes, it is possible to randomly generate the regular expressions. In our paper, we have developed a framework (Figure 1) for randomly generating regular expressions. At the bottom of section 4.1, we mention that the experiments and results we present utilize a dataset of ~500 randomly generated regular expressions in order to get the statistically significant results required in sections 4.2, 4.3, and 4.4. \n\n-It would be nice to provide more examples?\nWe agree with this suggestion. Due to space constraints, we did not include more in the main text. We will add more examples to the appendix of the final version of the paper.\n", "This paper investigates the internal workings of an RNN, by mapping its hidden states\nto the nodes of the minimal DFAs that generated the training inputs and their \nabstractions. 
The authors found that in fact such a mapping exists, and a linear\ndecoder suffices for the purpose. \nInspecting some of the minimal DFAs that correspond to regular expressions, \ninduced state abstractions are intuitive and interpretable from a viewpoint of\ntraining RNNs by training sequences.\n\nThis paper is interesting, and the central idea of using formal languages to\ngenerate feeding inputs is good (in fact, I am also doing different research\nthat also leverages a formal grammar with an RNN).\n\nMost of the paper is clear, so I have only a few minor comments:\n\n- In Figures 4 and 5, the most complex MDFA of 14 nodes does not have the\n lowest testing accuracies. In other words, testing accuracies are not\n generally proportional to the complexity of the MDFA. Why does this happen?\n\n- As noted in the footnote on page 5, state abstraction is driven by the idea\n of hierarchical grammars. Then, as briefly noted in the conclusion, why not\n use a simple CFG or PCFG to generate training sequences? \n In this case, state abstractions are clear by definition, and it is curious\n to see if the RNN actually learns abstract states (such as NP and VP in natural\n language) through mapping from hidden states to abstracted states.\n\n- Because this paper is exploratory, I would like to see more examples\n beyond only the two in Figure 6. Is it possible to generate a regular \n expression itself randomly to feed into RNN?\n", "This paper aims to show that an RNN trained to recognize regular languages effectively focuses on a more abstract representation of the FSA of the corresponding language. \n\nUnderstanding the type of information encoded in the hidden states of RNNs is an important research question. Recent results have shown connections between existing RNN architectures and both weighted (e.g., Chen et al., NAACL 2018, Peng et al., EMNLP 2018) and unweighted (Weiss et al., ACL 2018) FSAs. This paper asks a simple question: when trained to recognize regular languages, do RNNs converge on the same states as the corresponding FSA? While exploring solutions to this question is potentially interesting, there are significant clarity issues in this paper which make it hard to understand. Also, the main claim of the paper — that the RNN is focusing on a low-level abstraction of the FSA — is not backed up by the results.\n\nComments:\n\n— The authors claim that the RNN states map to FSA states with *low* coarseness, but Figure 3b (which is never referred to in text…) shows that in most cases the ratio of coarseness is at least 1/3, and in some cases > 1/2. \n\n— Clarity:\nWhile the introduction is relatively clear, starting from the middle of section 3 there are multiple clarity issues in this paper. In the current state of affairs it is hard for me to evaluate the full contribution of the paper.\n\n- The definitions in section 3 were somewhat confusing. What is the conceptual difference between the two accuracy definitions? \n\n- When combining two states, does the new FSA accept most of the strings in the original FSAs? some of them? can you quantify that? Also, figure 6 (which kind of addresses this question) would be much more helpful if it used simple expressions, and demonstrated how the new FSA looks after the merge.\n\n- section 4 leaves many important questions unanswered:\n1. Which RNN was used? which model? which parameters? which training regime? etc.\n2. How were the expressions sampled? 
the authors mention that they were randomly sampled, so how come they talk about DATE and EMAIL expressions?\n3. What is the basic accuracy of the RNN classifier (before decoding)? is it able to learn to recognize the language? to what accuracy? \n\n- Many of the tables and figures are never referred to in text (Figure 3b, Figure 5)\n\n- In Figure 6, there is a mismatch between the regular expression (e.g., [0-9]{3}….) and the transitions on the FSA (a-d, @).\n\n- How come Figure 3a goes up to 1.1? isn’t it bounded by 1? (100%?)\n\n- The negative sampling procedure should be described in the main text, not the appendix. Also, it is not clear how come shuffling the characters is considered an independent distribution.\n\n", "Paper Summary -\nThe authors trained RNNs to recognize formal languages defined by random regular expressions, then measured the accuracy of decoders that predict states of the minimal deterministic finite automata (MDFA) from the RNN hidden states. They then perform a greedy search over partitions of the set of MDFA states to find the groups of states which, when merged into a single decoder target, maximize prediction accuracy. For both the MDFA and the merged classes prediction problems, linear decoders perform as well as non-linear decoders.\nClarity - The paper is very clear, both in its prose and maths.\nOriginality - I don't know of any prior work that approaches the relationship between RNNs and automata in quite this way.\nQuality/Significance - I have one major concern about the interpretation of the experiments in this paper.\n\nThe paper seems to express the following logic:\n1 - linear (and non-linear) decoders aren't so good at predicting MDFA states from RNN hidden states\n2 - if we make an \"abstract\" finite automaton (FA) by merging states of the MDFA to optimize decoder performance, the linear (and non-linear) decoders are much better at predicting this new, smaller FA's states.\n3 - thus, trained RNNs implement something like an abstract FA to recognize formal languages.\n\nHowever, a more appropriate interpretation of these experiments seems to be:\n1 - (same)\n2 - if we find the output classes the decoder is most often confused between, then merge them into one class, the decoder's performance increases -- trivially. in other words, you just removed the hardest parts of the classification problem, so performance increased. note: performance also increases because there are fewer classes in the merged-state FA prediction problem (e.g., chance accuracy is higher).\n3 - thus, from these experiments it's hard to say much about the relationship between trained RNNs and finite automata.\n\nI see that the \"accuracy\" measurement for the merged-state FA prediction problem, \rho, is somewhat more complicated than I would have expected; e.g., it takes into account \delta and f(h_t) as well as f(h_{t+1}). Ultimately, this formulation still asks whether any state in the merged state-set that contains f(h) transitions under the MDFA to any state in the merged state-set that contains f(h_{t+1}). 
As a result, as far as I can tell the basic logic of the interpretation I laid out still applies.\n\nPerhaps I've missed something -- I'll look forward to the author response which may alleviate my concern.\n\nPros - very clearly written, understanding trained RNNs is an important topic\nCons - the basic logic of the conclusion may be flawed (will await author response)\n\nMinor -\nThe regular expression in Figure 6 (Top) is for phone numbers instead of emails.\n\"Average linear decoding accuracy as a function of M in the MDFA\" -- I don't think \"M\" was ever defined. From context it looks like it's the number of nodes in the MDFA.\n\"Average ratio of coarseness\" -- It would be nice to be explicit about what the \"ratio of coarseness\" is. I'm guessing it's (number of nodes in MDFA)/(number of nodes in abstracted DFA).\nWhat are the integers and percentages inside the circles in Figure 6?\nFigures 4 and 5 are difficult to interpret because the same (or at least very similar) colors are used multiple times.\nI don't see \"a\" (as in a_t in the equations on page 3) defined anywhere. I think it's meant to indicate a symbol in the alphabet \Sigma. Maybe I missed it.", "This is a nice piece of work, well-written, on a hot topic, providing an interesting novel approach and some important insights.\n\nI would like to point out 2 recent works on the matter that could be interesting to discuss in the paper if accepted:\n\n- In [1], the authors prove the equivalence between linear 2nd-order RNNs and weighted automata. The linearity restriction clearly echoes the one in this paper.\n\n- In [2], the authors show that non-linear RNNs can be efficiently approximated by weighted automata, suggesting a strong link between the states of the automata and the inner representations of RNNs, as in this paper.\n\n[1] Connecting Weighted Automata and Recurrent Neural Networks through Spectral Learning, Guillaume Rabusseau, Tianyu Li, Doina Precup, https://arxiv.org/abs/1807.01406\n\n[2] Explaining Black Boxes on Sequential Data using Weighted Automata, Stephane Ayache, Remi Eyraud, Noe Goudian, https://arxiv.org/abs/1810.05741" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 5, -1 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3, -1 ]
[ "HJg-yN9V1N", "S1eW-odEn7", "BkewlgpRAQ", "HylORAYn0Q", "ByxyXEWcA7", "SygggMZ90X", "Bkl2Tb__nX", "HJllpRIqn7", "HJllpRIqn7", "B1xVWwh3hm", "iclr_2019_H1zeHnA9KX", "iclr_2019_H1zeHnA9KX", "iclr_2019_H1zeHnA9KX", "iclr_2019_H1zeHnA9KX" ]
iclr_2019_H1ziPjC5Fm
Visual Explanation by Interpretation: Improving Visual Feedback Capabilities of Deep Neural Networks
Visual interpretation and explanation of deep models are critical for the wide adoption of systems that rely on them. In this paper, we propose a novel scheme for both interpretation as well as explanation in which, given a pretrained model, we automatically identify internal features relevant for the set of classes considered by the model, without relying on additional annotations. We interpret the model through average visualizations of this reduced set of features. Then, at test time, we explain the network prediction by accompanying the predicted class label with supporting visualizations derived from the identified features. In addition, we propose a method to address the artifacts introduced by strided operations in deconvNet-based visualizations. Moreover, we introduce an8Flower, a dataset specifically designed for objective quantitative evaluation of methods for visual explanation. Experiments on the MNIST, ILSVRC 12, Fashion 144k and an8Flower datasets show that our method produces detailed explanations with good coverage of relevant features of the classes of interest.
accepted-poster-papers
This was a difficult decision to converge to. R2 strongly champions this work, R1 is strongly critical, and R3 did not participate in the discussions (or take a stand). On the one hand, the AC can sympathize with R1's concerns -- insights developed on synthetic datasets may fail to generalize and, fundamentally, the burden is not on a reviewer to provide the authors with a realistic dataset for the paper to experiment on. Having said that, a carefully constructed synthetic dataset is often *exactly* what the community needs as the first step to studying a difficult problem. Moreover, it is better for the proceedings to include works that generate vigorous discussions than the routine bland incremental works that typically dominate. Welcome to ICLR19.
val
[ "Byx6M0cC2Q", "HJgSnw53nQ", "Bkg5s3Op2m", "rJxqomaeCX", "rJeWC7TgAX", "SJx10L6lRQ", "Sye4lwalA7", "B1eE1fpxCm" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "Pros:\n\nThis paper\n - Proposes a method for producing visual explanations for deep neural network outputs,\n - Improves quality of the guided backprop approach for strided layers by converting stride 2 layers to stride 1 and resampling inputs (improving on a longstanding difficulty with such approaches),\n - Shows fairly rigorous experimentation demonstrating the applicability and properties of the proposed approach, and\n - Releases a new synthetic dataset and benchmark for visual explanation methods.\n\nAlthough producing visual explanations is a task fraught with difficulty for many reasons, including that explanations for complex decisions may not necessarily be communicable via one or a small number of saliency maps over the image pixels, this paper strives valiantly in this admittedly difficult direction.\n\nThe experimentation is fairly rigorous, which is a welcome departure from and improvement on the norm for this type of paper. I hope such more quantitative evaluation will become more common in papers evaluating visual explanations.\n\nCons:\n\nWhat about features that are very important but not linearly predictive on their own? This approach (and many others) would not work in that case; recognizing this, extending the an8Flower dataset to include such images and labels may be motivating for the field. For example, flowers where the class is determined not by a specific single color or feature (thorns or spots) but by the combination. In these cases, it’s not clear what the right answer would even be in the form of a saliency map, so the first task for researchers would be to determine in what format the answer should even be provided! So: less a benchmark than a motivating open question.\n\n\nSmaller notes:\n\nI found the presentation of the stride 1 resampling approach a little confusing. When performing the backward pass through the network from, say, layer 20, is the approach followed at every stride 2 layer on the way back? If so, I don’t think I saw this mentioned. If not, wouldn’t artifacts be introduced and compounded at any stride 2 layer during the backward pass?\n\n\n====== Update 12/12/18 ======\n\nThanks for your notes in reply. I'll just add that if the dataset can be extended to slightly greater complexity either for this version or for submission to a subsequent venue, it would be impactful. Simple extensions could include scenes with multiple flowers and classes where the explanatory factor is tricker to uncover. For example, a dataset could be created with scenes of three flowers: two of one color and one of another color, with the class determined by the color of the lone flower. The correct explanation (the color of the lone flower) is still clear, and it would be great to see if the proposed LASSO approach (or a future approach) could correctly identify those pixels.", "Summary: the paper proposes a method for Deep Neural Networks (DNN) that identifies automatically relevant features of the set of the classes, enriching the predictions made with the visual features that contributed to that class, supporting, thus, interpretation (understanding what the model has learned) and explanation (justification of the predictions/classifications made by the model). 
This scheme does not rely on additional annotations, unlike earlier techniques.\n\nThe contributions of this paper are relevant to, I would say, a large segment of the AI community, since interpretability and explainability of AI (XAI) is the focus of many current works in the area, and there are still many unresolved issues. I consider this paper suitable for ICLR 2019; in particular, it fits the call for papers topic “visualization or interpretation of learned representations”.\n\nThe authors also present a new dataset (an8Flower) that can be used by the community for future evaluations of explanation methods for DNN. From my point of view, this is a significant contribution, since there is a lack of datasets that can be used for evaluation.\n\nThe authors properly motivate the need for this research/study, addressing the main weakness of the two most common strategies for interpreting DNN, (1) manually inspecting visualizations of every single filter or (2) comparing the internal activations produced by a given model w.r.t. a dataset with pixel-wise annotations of possibly relevant concepts.\n\nI would encourage the authors to write the limitations and weaknesses of their proposal w.r.t. similar approaches they reviewed. I am aware that the space is limited, but in p.8, section 4.3, when Table 1 is introduced and the authors confirm that their proposal has higher IoU than other methods, the authors could explain, in brief, what the weaknesses of their method are w.r.t. the other approaches analyzed.\n\nAnother clarification concerns the initialization of input parameters, such as sparsity; e.g., p.6 sparsity is initialized with 10 for all datasets, why? How has this value been selected and how sensitive is the performance to variations of this value?\n\nOnce again, I know that the space is limited, but I would like to be able to see some of the figures better (since this is an essential part of the paper). The additional material complements very well the paper and shows larger figures, but I think that the paper itself should be self-sufficient, and figures like Fig. 5 should be enlarged so it is easier to see some details.\n\nJust a concern or something that I did not quite understand about one of the arguments the authors use to justify the evaluation carried out: the authors claim that they want to avoid the subjectivity introduced by humans (citing Gonzalez-Garcia et al. 2017), and prefer to avoid user studies, presenting a more objective approach in their evaluation. Ok, but then, the analysis presented in, for example, page 7, is based mainly on their interpretation of the results, a qualitative analysis of the images (we can see fur patterns, this and that, etc.). So aren’t they interpreting the results obtained as users? So after all, aren’t the visual explanations and feedback intended for users? Why should we claim that we want to avoid the subjectivity introduced by humans in the evaluation when the method proposed here is actually going to be used by users – with their inherent subjectivity? I do not mean that the evaluation carried out is not interesting per se, but it could be motivated differently, or it could be complemented later on with future user studies (that would make an interesting addition to the paper). Moreover, I also wonder whom the authors see as intended users for the proposed scheme.\n\nSmall comments:\nP.1 “useful insights on the internal representations” → insights into the internal representations.\nP. 
2: space needed in “back-propagation methods.Third,”\nP. 3: Remove “s” in verb (plural authors): “Similarly, Bach et al. (2015) decomposes the classification” → decompose or decomposed\nP.3: n needed “Chattopadhyay et al. (2018) exteded” → extended\nP.3: “This saliency-based protocol assume that” → protocol assumes\nP.3: “highlighted by the the explanations” → remove one “the”\nP. 5: “space. As as result we get” → remove one “as”\nP. 5: “and compensate this change” → compensate for this change\nP. 6: “In this experiment we verify” → In this experiment, we verify\nP. 6: “To this end, given a set of identified features we” → To this end, given a set of identified features, we\nP. 6: “Note that the OnlyConv method, makes the assumption” → remove “,” after method\nP. 7: “In order to get a qualitative insight of the type of” → insight into the\nP. 7: I would write siamese and persian cat with capital “S” and “P” (Siamese, Persian)\nP. 7: others/ upper “Some focus on legs, covered and uncovered, while other focus on the upped body part.” → while others focus on the upper body part\nP. 7: “These visualizations answers the question” → answer\nP. 7: “In this section we assess” → In this section, we\nP. 7: Plural “We show these visualization for different” → these visualizations\nP. 7: In “Here our method reaches a mean difference on prediction confidence” → difference in prediction …\nP. 7: “This suggest that our method is able” → This suggests that\nP. 8: state-of-the-art\nP. 8: “has higher mean IoU” → has a higher mean IoU\nWhole document: when using “i.e.” add “,” after: i.e.,\n\nReferences: Some of the references in the list have very little information to be able to find them/provide a proper academic citation, e.g. , Yosinski et al. 2015; Vedaldi and Lenc, 2015:\n\nJason Yosinski, Jeff Clune, Anh Mai Nguyen, Thomas J. Fuchs, and Hod Lipson. Understanding neural networks through deep visualization. 2015.\n\nA. Vedaldi and K. Lenc. Matconvnet: Convolutional neural networks for matlab. In MM, 2015.\n\nRef Doersch et al.: What makes paris look like paris? → Paris\n", "In this paper, the authors proposed a novel scheme to interpret deep neural networks’ predictions by identifying the most important neurons/activations for each category using a Lasso algorithm.\n\nFirstly, the authors produce a 1-dimensional descriptor for each filter in each convolutional layer for each image. Then these descriptors are concatenated as a new feature vector for this image. A feature selection algorithm (u-Lasso) is then trained to minimize the classification loss between the prediction from the new feature vector and the original prediction from the DNN (formula (1)). Finally, the importance of each filter is identified by the weights of the lasso for each category.\n\nThe authors also improved the visual feedback quality over the deconvolution+guided back-propagation methods, and release a new synthetic dataset for benchmarking model explanation.\n\nThe paper is well-written; however, I have several concerns about this paper:\n\n1. How to verify the importance of the identified relevant features is a problem. In the experiments, the authors removed features in the network by setting their corresponding layer/filter to zero. The authors only compared their method with randomly removing features. And in Fig 4, the differences seem small for ImageNet. The results are not convincing enough to me. It is a bit baffling that randomly removing features did almost as well as the proposed approach.\n\n2. 
I don't think one should get away with only showing some results from the synthetic dataset without showing any quantitative results on any real datasets. I like the idea of having a synthetic dataset where all the parameters are controllable. However, in this case it is very simple and maybe lacking enough distracting features that can really test the capability of the algorithm. I would believe quantitative results on a realistic dataset are still necessary for the publication of this paper.\n\n3. Recently several papers pointed out some significant issues in Guided BP:\n\nXie et al. A Theoretical Explanation for Perplexing Behaviors of Backpropagation-based Visualizations. ICML 2018\nAdebayo et al. Sanity Checks for Saliency Maps. NIPS 2018\nKindermans et al. The (Un)reliability of saliency methods. NIPS workshop 2017\n\nCan the authors comment on that? Based on those papers, I don't seem to think Guided BP is actually doing anything that is relevant to the classification, but is just finding prominent gradients. This, unfortunately, would lead to reasonably good behavior on the synthetic dataset created by the authors. ", "\nWe consider the following as potential users of our method:\n\n- For dataset debugging\nOur method can assist other researchers in verifying whether a top-performing model has indeed learned a general representation from the data or whether this top performance is caused by any bias in the data itself, e.g., dataset bias.\n\n- For accountability\nIt can help students, researchers, and other individuals working with deep models identify less apparent causes for the high performance of a given model. For instance, our method has provided a set of good insights into why the models trained in Wang et al., WACV'18 have such a good performance when compared with human subjects.\n\n- To enforce fairness\nVery related to the previous case, it can serve individuals tasked with assessing the \"fairness\" of models making decisions about other individuals (e.g., Gender Shades, Buolamwini et al., PMLR'18). Our method can help to verify whether these models have any bias related to gender, ethnicity, etc.\n\n- Alternatively, our visual explanations can indicate regions of interest that can serve as input to other automatic systems/methods aiming at distilling information from the model being explained, with the goal of producing lighter models.\n\n\nWe thank the reviewer for the time invested in providing the detailed feedback at the end of the review (small comments and references). Likewise, we will invest time to meticulously integrate this feedback into a revised version of our manuscript.\n\nWe have revised the manuscript in order to integrate the provided feedback. \n(New content is coloured in green)", "Thanks for the feedback.\n\nWe appreciate that the reviewer recognizes the relevance that our work can have in the field in general.\nWe agree with the reviewer on the significance of the contribution given by the proposed an8Flower dataset.\nAlthough very recently a few works (Adebayo et al., NIPS'18; Nie et al., ICML'18) have proposed means to assess the sanity/reliability of visual explanation methods, no method has been proposed to objectively evaluate the generated explanations themselves. \nMoreover, the proposed an8Flower dataset can be further extended to evaluate different settings of interest, e.g., 
occurrence of distracting objects, object classes driven by contextual information, fine-grained differences between classes, etc., and can be used as a sanity check itself to verify whether a proposed explanation method can accurately explain a specific setting of interest.\n\nRegarding the suggestion of providing a discussion covering the limitations/weaknesses of the proposed method w.r.t. similar compared methods, e.g., those from Table 1:\nIndeed, there are space limitations in place, as was pointed out by the reviewer. \nHowever, this is a good suggestion, and we believe that adding such a discussion would provide further insights into the proposed method and strengthen the manuscript at the same time.\nHere is a summary of the limitations that our method has w.r.t. similar compared methods:\n- Our method requires an additional process, i.e., feature selection via u-lasso, at training time (Sec. 3.1).\n- There is the need to define an additional parameter, i.e., \\mu, for the feature selection process (Sec. 3.1).\n\nRegarding the sparsity parameter (\\mu) used for the feature selection process (u-lasso): increasing the sparsity value \\mu in the u-lasso formulation will increase the number of selected features.\nThis will allow the selected filters to focus on more specific/specialized features that can help to better handle outlier/rare instances of the classes of interest. Please see Sec. 8 and Fig. 9 of the supplementary material for an extra analysis of the effect that the \\mu value has on the capability of the selected features to serve as indicators of the classes of interest.\nWe decided to start from a relatively low value, i.e., \\mu=10, in order to focus on a small set of relevant features that can generalize to the classes of interest while, at the same time, keeping the computational cost of the u-lasso optimization low.\n\nRegarding the size of the figures (Fig. 5 especially), we totally agree with the reviewer that figures like Fig. 5 should be enlarged so it is easier to see some details. Despite the space limitations, we are aware that the 8-page length for the manuscript (content only) is not strict, and that authors are allowed to go up to 10 pages. Having said this, if reviewers and ACs agree on the need for larger figures, we would like to cross the 8-page length and include larger versions of some of the figures that are currently too small to visualize details.\n\nRegarding the question of whether human inspection (or user studies) is necessary for model interpretation/explanation:\nWe agree with the observation made by the reviewer regarding the fact that our method still requires some level of human intervention. Furthermore, we agree that, since the proposed method is meant to be used by users, an indication of how \"understandable\" an explanation is for end users is required. Having said this, the main goal of our method is to reduce the load on the user side, which can introduce bias and noise. By reducing (and separating) the number of visualizations (i.e., the number of features in the explanation visualizations and the relevant set of features learned by the model [interpretation]) to be inspected, we aim at reducing the exhaustive inspections that are used in previous works to achieve model interpretation/explanation.\nWe admit that in our manuscript, the need for human inspection is understated. 
Moreover, we agree that our objective evaluation should be complemented with relatively simpler user studies in order to ensure that the produced explanations are meaningful to the individuals they aim to serve. We will update the motivation behind our method in order to further emphasize the need for reduced user inspection and the complementarity between our evaluation and user studies.\n
If the other reviewers and ACs agree, we would like to add the discussion above in a revised version of the manuscript. Additionally, depending on the time (and space) constraints, we will try to add some of the tests presented in the papers referred to above in a revised version of our submission.\n\nWe have revised the manuscript in order to integrate the provided feedback. \n(New content is coloured in green)", "Thanks for the feedback.\n\nRegarding (1), the ablation of features labeled as \"random\" refers to settings where features were removed by setting to zero the response of randomly selected filters from layers that were indicated to host important features by the u-lasso optimization. \nAs such, these features (filter responses) are not 100% random per se. To verify this aspect, we have conducted an experiment on the full ImageNet dataset where we ablated completely randomly selected features (i.e., both layers and filter locations). We computed the mean performance after 5 runs and obtained a classification accuracy of 0.33, which is 10% higher than that obtained when the selected relevant features are ablated (0.23).\nIn addition, unlike the other datasets tested with a VGG-based method, the full ImageNet setting has the highest ratio between classes of interest and features. At this higher ratio, features internally modeled by the network are more likely to be shared between the classes. As such, ablating one feature may have a side effect on another class as well.\n\n\nRegarding (2), we respectfully disagree. Our synthetic dataset may look simple and artificial, but that's on purpose, to make it clear beyond discussion what elements are crucial to explain a decision. To the best of our knowledge, there isn't any realistic dataset with such annotation and, in fact, we have no idea how one would go about creating one. Nor is there any other unbiased quantitative evaluation setup using realistic data, as far as we know. For instance, using semantic labels as done in (Zhang et al., CVPR'18 / arXiv:1710.00935) ignores the validity of any context cues that fall outside of the object boundaries. In our synthetic dataset, the regions to be highlighted are controlled by design, therefore providing an objective means of evaluation. If, however, the reviewer can point us towards a realistic dataset with such a level of annotation, we would be happy to try it out to further strengthen our manuscript.\n\nRegarding the presence of distracting features mentioned in (2), we are conducting experiments on the classification task of Pascal VOC'07. In this dataset, there are several distracting instances/objects per image. Initial results show that, despite the presence of these distracting objects/elements, our method is able to highlight image regions related to the prediction made by the model. If requested by the reviewers, we will revise the manuscript in order to include some of these new results.\nIn addition, adding distracting objects/features could be an interesting way to extend our current synthetic dataset. We will work towards having an additional variant of our dataset that includes distracting elements by the time of its official release.", "\nWe thank you for the motivating feedback. 
As you mentioned, visual explanation is a task fraught with difficulty for many reasons, and indeed here we try to push this task forward, with important-neuron selection, better visualization and, especially, new complementary objective evaluation protocols.\n\nRegarding the comment on the features that are very important but not linearly predictive: we tried to cover that scenario to some extent with the an8flower-double-12c variant of our dataset (please see Fig. 10 from the supplementary material). There, the classes are not just defined by the color, but by the part/location where those colors are applied. Yet, this is just one scenario; there are many others that might be interesting to investigate. In its current form, an8Flower is just an initial step towards more objective evaluation. Taking into account the feedback from reviewers and from the community, we hope to turn it into a fully developed benchmark for visual explanations.\n\nRegarding the comment \"...so the first task for researchers would be to determine in what format the answer should even be provided!\": indeed, that is a very good point and an interesting research question. We hope to be able to tackle such questions in future work.\n\nRegarding the smaller note on the stride-2 resampling approach: yes, you are right, this adjustment is applied to every layer during the backward pass. Otherwise, as accurately noted, artifacts produced at top layers would propagate towards the lower ones. \nWe thank the reviewer for pointing this out. We will revise the manuscript to make sure this aspect is clear.\n\nWe have revised the manuscript in order to integrate the provided feedback. \n(New content is coloured in green)" ]
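A minimal sketch of the lasso-based filter selection discussed in this record, for readers who want to prototype it. The per-filter descriptors, the surrogate targets, and the use of scikit-learn's plain Lasso in place of the paper's u-lasso (whose sparsity knob \mu is only loosely emulated here by alpha) are all assumptions for illustration, not the authors' implementation.

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_images, n_filters, n_classes = 512, 256, 10

# One 1-D descriptor per filter per image (e.g., a pooled filter response).
X = rng.standard_normal((n_images, n_filters))
# Stand-in for the DNN's own class scores that the selection should reproduce.
logits = rng.standard_normal((n_images, n_classes))

relevant = {}
for c in range(n_classes):
    # Smaller alpha keeps more filters, playing the role of a larger sparsity value.
    model = Lasso(alpha=0.1).fit(X, logits[:, c])
    relevant[c] = np.flatnonzero(model.coef_)  # indices of class-relevant filters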
[ 8, 5, 4, -1, -1, -1, -1, -1 ]
[ 4, 3, 5, -1, -1, -1, -1, -1 ]
[ "iclr_2019_H1ziPjC5Fm", "iclr_2019_H1ziPjC5Fm", "iclr_2019_H1ziPjC5Fm", "HJgSnw53nQ", "HJgSnw53nQ", "Bkg5s3Op2m", "Bkg5s3Op2m", "Byx6M0cC2Q" ]
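The feature-ablation protocol debated in the record above (zeroing chosen filter responses and re-measuring accuracy, for selected vs. randomly chosen filters) can be prototyped with a forward hook, as sketched below. The network, layer index, and filter indices are placeholders, not the authors' setup.

import torch
import torchvision

model = torchvision.models.vgg16(weights=None).eval()   # untrained stand-in network
layer = model.features[28]                              # an arbitrary late conv layer
ablate_idx = torch.tensor([3, 17, 42])                  # filters flagged as relevant (or random)

def zero_channels(module, inputs, output):
    output[:, ablate_idx] = 0.0                         # kill the chosen feature maps
    return output

handle = layer.register_forward_hook(zero_channels)
with torch.no_grad():
    ablated_logits = model(torch.randn(4, 3, 224, 224)) # stand-in batch
handle.remove()                                         # restore the intact model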
iclr_2019_HJE6X305Fm
Don't let your Discriminator be fooled
Generative Adversarial Networks are one of the leading tools in generative modeling, image editing and content creation. However, they are hard to train as they require a delicate balancing act between two deep networks fighting a never-ending duel. Some of the most promising adversarial models today minimize a Wasserstein objective. It is smoother and more stable to optimize. In this paper, we show that the Wasserstein distance is just one out of a large family of objective functions that yield these properties. By making the discriminator of a GAN robust to adversarial attacks, we can turn any GAN objective into a smooth and stable loss. We experimentally show that any GAN objective, including Wasserstein GANs, benefits from adversarial robustness both quantitatively and qualitatively. The training additionally becomes more robust to suboptimal choices of hyperparameters, model architectures, or objective functions.
accepted-poster-papers
The paper provides a simple method for regularising and robustifying GAN training. An always-appreciated contribution to GANs. :-)
train
[ "SJlbkwLcRm", "rklFwIBK2Q", "BkggGxgM6X", "H1eRJegGaQ", "HJlipkeGp7", "SygU5RN927", "Hkg6Ixk93Q" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thanks for the detailed feedback. Some of my issues are addressed in the feedback, and it would be better to clarify them in the revised paper. Now I change my rating to 6. The reason why I cannot give 7 is the missing analysis of a robust D leading to a better G.", "## Overview\n\nThis paper proposes a new way to stabilize the training process of GANs by regularizing the Discriminator to be robust to adversarial examples. Specifically, this paper proves that a discriminator which is robust to adversarial attacks also leads to a robust minimax objective. The authors provide theoretical analysis of how the robustness of the Discriminator affects the properties of the objective function, and the proposed regularization term provides an efficient and effective way to regularize the discriminator to be robust. However, it does not build a connection between the robustness of the Discriminator and why it can provide meaningful gradients to the Generator. Experimental results demonstrate the effectiveness of the proposed method. This paper is easy to understand.\n\n\n## Drawbacks\nThere are some problems in this paper. First, this paper is not highly motivated and lacks intuition. I can hardly understand why the robustness can stabilize the training of GANs. Will it solve the gradient vanishing problem or speed up the convergence of GANs? The toy example in Sec. 4.2 shows that it can regularize the Discriminator to provide a meaningful gradient to the Generator, but no theoretical analysis is provided. The main gap between them is that the smoothness of D around the generated data points does not imply the effectiveness of gradients. Second, the theoretical analysis is inconsistent with the experimental settings. Theorem 4.3 holds true when f is non-positive, but WGAN’s loss function can be positive, and this paper does not give any details about this part. Third, in Sec. 4.2, I can hardly distinguish the difference between robust loss, robust discriminator and regularized objectives.\n\nBesides, there are lots of typos in this paper. In Sec. 3, Generative Adversarial Networks part, the notations of x and z are quite confusing. In Definition 3.2, d, which measures the distance between network outputs, does not appear above.\n\n## Summarization\nGenerally, this paper provides a novel way to stabilize the training of GANs. However, it does not illustrate its motivation clearly and no insight is provided.\n\n## After rebuttal\nSome of the issues are addressed. So I change my rating to 6.\n", "We thank the reviewer for the insightful feedback. We’re glad the reviewer liked the paper. We will make the code public upon acceptance.", "We thank the reviewer for the feedback. We are glad you liked the paper.\n", "We thank the reviewer for her/his time and the constructive feedback. We are glad that the reviewer sees our contribution as a novel idea. We address the main concerns below.\n\n> this paper is not highly motivated and lacks intuition. I can hardly understand why the robustness can stabilize the training of GANs.\n\nAs the reviewer rightly pointed out, the connection between smoothness of the objective and ease (or stability) of training is an empirical one.\n“The toy example in Sec. 4.2 shows that it can regularize the Discriminator to provide a meaningful gradient to the Generator, but no theoretical analysis is provided”\nWe are not the first paper to establish this empirical connection; WGAN and follow-up work already established it on a wide range of generative tasks. 
However, we are the first to point out that this smoothness/robustness is not a property of WGAN, but rather of the regularization used to optimize the discriminator in any GAN.\n\nIn addition, we show empirically in Table 2 that robustness leads to much more stable training (and better generation performance) than theoretically motivated stability results such as Instance Noise. However, the reviewer is right that there is no theoretical connection, which is an important avenue of future work, but beyond the scope of this paper.\n\n\n> the theoretical analysis is inconsistent with the experimental settings. Theorem 4.3 holds true when f is non-positive, but WGAN’s loss function can be positive, and this paper does not give any details about this part.\n\nThe reviewer is right that Theorem 4.3 only applies to the JS (GAN) and LS (LSGAN) objectives, for which the regularization works best. Theorem 4.3 does not say anything about linear (WGAN) objectives. For WGAN-style objectives, the original WGAN paper showed robustness results under slightly different conditions. We tried to extend our results to linear objectives, but did not yet succeed (we could neither prove nor disprove Theorem 4.3 for linear objectives).\nWe still included WGAN results in the experiments to establish the empirical connection between regularization and robustness. While we cannot prove robustness for linear objectives, it still holds in practice.\nHowever, if the reviewer finds this distracting and confusing, we are happy to edit or remove parts of the experimental section to make it more consistent.\n\n> in Sec. 4.2, I can hardly distinguish the difference between robust loss, robust discriminator and regularized objectives.\n\nBoth the robust loss and the robust discriminator pose a hard constraint on the discriminator (either before or after the loss function). These hard constraints are difficult to optimize (see WGAN), but easy to analyze. The regularized objective is easy to optimize as a regularization (soft constraint) between two generative distributions (original and perturbed). Theorem 4.3 shows that the regularized objective can be reduced to hard constraints for the JS and LS objectives, and thus benefits from all the analysis of the hard constraints.\n\nWe will update the paper to better highlight this difference.\n\n> Typos and notation\nWe thank the reviewer for pointing out the typos and notational inconsistencies, and will fix them in the next iteration.\n", "The paper proposed a systematic way of training GANs with robustness regularization terms. Using the proposed method, training GANs is smoother and more stable.\n\npros\n- The paper is solving an important problem of training GANs in a robust manner. The idea of designing regularization terms is also explored in other domains of computer vision research, and it's nice to see its power in training GANs.\n- The paper provides detailed proofs and analysis of the approach, and visualizations of the regularization term help people to understand the ideas.\n- The presentation of the approach makes sense, and experimental results using several different GAN methods and competing regularization methods are extensive and good in general.\n\ncons\n- I didn't find major issues with the paper. 
I think the code in the paper should be made public, as it could potentially be very useful for training GANs in general.", "The main idea that this paper presents is that, by making a discriminator robust to adversarial perturbations, the GAN objective can be made smooth, which results in better results both visually and in terms of FID. In addition to the proposed adversarial regularisation, the authors also propose a much stronger regularisation called robust feature matching, which uses the features of the second-to-last layer of the discriminator. I find the ideas presented in this paper interesting and novel.\nThe authors' claims are supported by sufficient theory and several experiments that prove their claims. The presented results show consistent improvements in terms of FID, and some of the improvements reported are actually impressive." ]
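As a rough illustration of the regularized objective discussed in the rebuttal above (a soft constraint tying the discriminator's output on a sample to its output on a perturbed copy), the sketch below penalizes the change in D under a one-step signed-gradient perturbation. The toy discriminator, step size, and squared penalty are illustrative assumptions, not the paper's exact formulation.

import torch
import torch.nn as nn

D = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 1))

def robustness_penalty(D, x, eps=1e-2):
    delta = torch.zeros_like(x, requires_grad=True)
    # One ascent step on D's output gives a small adversarial perturbation.
    grad, = torch.autograd.grad(D(x + delta).mean(), delta)
    x_adv = x + eps * grad.sign()
    # Penalize how much the perturbation moves the discriminator's output.
    return ((D(x_adv) - D(x)) ** 2).mean()

x = torch.randn(8, 1, 28, 28)
penalty = robustness_penalty(D, x)  # would be added to the usual loss for D
penalty.backward()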
[ -1, 6, -1, -1, -1, 7, 7 ]
[ -1, 4, -1, -1, -1, 3, 3 ]
[ "HJlipkeGp7", "iclr_2019_HJE6X305Fm", "SygU5RN927", "Hkg6Ixk93Q", "rklFwIBK2Q", "iclr_2019_HJE6X305Fm", "iclr_2019_HJE6X305Fm" ]
iclr_2019_HJGciiR5Y7
Latent Convolutional Models
We present a new latent model of natural images that can be learned on large-scale datasets. The learning process provides a latent embedding for every image in the training dataset, as well as a deep convolutional network that maps the latent space to the image space. After training, the new model provides a strong and universal image prior for a variety of image restoration tasks such as large-hole inpainting, superresolution, and colorization. To model high-resolution natural images, our approach uses latent spaces of very high dimensionality (one to two orders of magnitude higher than previous latent image models). To tackle this high dimensionality, we use latent spaces with a special manifold structure (convolutional manifolds) parameterized by a ConvNet of a certain architecture. In the experiments, we compare the learned latent models with latent models learned by autoencoders, advanced variants of generative adversarial networks, and a strong baseline system using simpler parameterization of the latent space. Our model outperforms the competing approaches over a range of restoration tasks.
accepted-poster-papers
The reviewers are in general impressed by the results and like the idea, but they also express some uncertainty about how the proposed model is actually set up. The authors have made a good attempt to address the reviewers' concerns.
train
[ "r1xLI0153X", "B1xopr2HCQ", "S1x8opbLaX", "BJeIwpbITQ", "Skls46b8TQ", "B1et32fpnX", "r1gXKJZ6nX" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "[Summary]\n- This work proposes a new complex latent space described by a convolutional manifold, and this manifold can map the image in a more robust manner (when some parts of the image are to be restored).\n\n[Pros]\n- The results show that the latent variable mapped to the image represents the image well, and it will be helpful for the image restoration problem.\n- It seems novel to adapt the idea of DIP for defining a complex latent space.\n\n[Cons]\n- The main concern is that there is no guarantee that the defined latent space is continuous. \nIt means that it is difficult to judge whether the interpolated point (phi_in, s_in) between two points, (phi_1, s_1) and (phi_2, s_2), will be matched to the image distribution. \nEquation 2 in the paper seems to just fit the generator parameters theta to map the phi_i to the x_i and memorize the mapping between the training images and the given latent convolutional variables. \nIf the proposed algorithm just memorizes the training images and maps them onto the given latent convolutional variables, the result cannot justify the claim that the authors propose a new latent space.\n\n[Summary]\n- This work proposes an interesting idea of defining a complex latent space, but it is doubtful whether this work just memorized the mapping between the training images and the latent convolutional parameters.\n- I want to see the (latent space) interpolation test for the proposed latent convolutional space. If the authors provide a profound explanation of the problem, I would consider changing the rating.\n\n--------------------------\nSee the additional comment for the changed rating\n", "I mostly agree with the author's comments on my concerns.\n\n(1) From the interpolated images, a point in the latent space seems to be matched to a corresponding image in the image distribution, which means that it does not simply memorize the images.\n\n(2) Judging from Figure 8, I think this work can also be tested on the image generation task. In the final version, I strongly want to see pure image generation results. \n\nBased on the comments, I changed my previous rating.\n", "Thank you for the careful review. Fortunately, your main concern, though very grave, is due to a very simple misunderstanding. We hope that once the misunderstanding is resolved, the rating may be reconsidered.\n\n\"Equation 2 in the paper seems to just fit the generator parameters theta to map the phi_i to the x_i and memorize the mapping between the training images and the given latent convolutional variables. \nIf the proposed algorithm just memorizes the training images and maps them onto the given latent convolutional variables, the result cannot justify the claim that the authors propose a new latent space.\"\n\nWe want to stress that all evaluations and qualitative examples are produced on the _hold-out_ test sets that were not in any way used to train the parameters theta of the generator network. So, we can very confidently say that the reason why the approach works is not memorization of the training set within theta. \n\n\"I want to see the (latent space) interpolation test for the proposed latent convolutional space.\"\n\nWe have added latent space interpolations to Appendix G (Figure 12) at the end of the paper. These interpolations were again done on a _hold-out_ set of images. The examples were ``cherry-picked'' for distinctiveness. In more detail, in our (biased) view, LCM was always at least as good as other methods, but in some cases, e.g. 
for pairs of aligned, perfectly frontal faces all interpolations look more or less the same, so we picked cases with clear differences between methods. Thank you for suggesting this comparison; it nicely illustrates the effect of the convolutional manifold constraint. If possible, please use zoom-in/a large screen to view these results.\n\n\"..the interpolated point (phi_in, s_in) between two points: (phi_1, s_1) and (phi_2, s_2)..\"\n\nActually, the s vector is always fixed to some random noise value. I.e., it is not instance-specific and is not modified by learning (one can add optimization over s, but in practice this does not change much).\n", "Thank you very much for the review.\nWe would like to point out that there is no encoder network in our approach (although one can possibly discuss ways to add it). Also, note that our contribution is not that we only increase the resolution of the latent space, but that we suggest a specific regularization of the latent space (the convolutional manifold) that significantly improves the generalizability of the resulting latent model.\n\n\"Are test images included in the training of the convolutional networks?\"\nAll results (qualitative, quantitative, user study) are computed on hold-out sets that were not used to train the parameters of the decoder (i.e., theta). The only exception is the progressive GAN baseline, for which there is a mix of training and test sets (since for the comparison we just reuse author-provided models trained on complete sets). This gives an advantage to the pGAN baseline (admittedly not a very big one, since GANs struggle to fit the training sets). To reiterate, all results of OUR method (LCM) are computed strictly on the hold-out test sets.\n\nTo train our model, we use the Laplacian-L1 loss along with an MSE term with a weight of 1.0. We noticed that the MSE term speeds up convergence without affecting the results by much. The optimization is carried out using stochastic gradient descent with a learning rate of 1.0. We note that the code for the paper and the experiments will be released for reproducibility.\n", "Thank you for the careful review. Here are the responses.\n\n\"Did you try other standard restoration tasks, such as image denoising or deblurring? If not, do you think they would work equally well?\"\nWe have tried denoising (with synthetic noise), where the relative performance is similar. We have not tried deblurring, although we expect the relative performance to be similar as well. \n\n\"- A limitation (at least as presented) is that the corruption process has to be known analytically (as a likelihood objective) and must be differentiable for gradient-based inference.\"\nWhile technically we do assume that the corruption process is known, it is still possible to apply our approach with a simplified (inaccurate) likelihood function. To show this, we have added Appendix H (Figure 13), which shows how restoration from heavy JPEG artifacts can be done using a simple quadratic likelihood functional. The second limitation (the need for optimization at test time) is indeed important. We can partially remedy it by adding an encoder that would take a corrupted image and output a good starting point in the latent space. We have added a discussion/acknowledgement of these limitations to the end of the conclusion section.\n\n\"- How dependent is the restoration result with respect to the initialization? For example, when starting gradient descent with the degraded image vs. 
a random image.\"\nOur approach cannot start with the degraded image, since we do not know the corresponding latent space initialization. So we always start with a random latent vector. Generally, we found that initializing the latent networks using the same parameters as when the training started worked best (so we always use the same random vector). Different starting points lead to results with very slightly worse visual quality (the perceptual loss increases by about 0.0006), which are still better than those of competing methods. Note that we experimented with different initializations for all the models and chose the one that worked best for each (to give the baselines a fair treatment).\n\n\"Roughly, how many iterations and runtime is needed for inference?\"\nFor a batch of 50 images, it takes about 1000-2000 iterations, which takes between 6-12 minutes. Tasks like super-resolution can be done in about 1000 iterations or so, and inpainting can take up to 1500-2000 iterations.\n\n\"- Did you try different optimizers, such as L-BFGS?\"\nYes, we have tried L-BFGS for inference. We had to use a lower learning rate and were able to produce results similar to those of SGD. Generally, L-BFGS did not offer any significant advantages over SGD. ", "This paper proposes to increase the latent space dimensionality of images by stacking the latent representation vectors as a tensor. Then convolutional decoder and encoder networks are used to map the original data to the latent space and vice versa. The learned latent representations can then be used in a universal framework for multiple tasks such as image inpainting, superresolution and colorization.\n\nThe idea of increasing the dimensionality of the latent space, although not sophisticated, seems to be performing very well. Indeed, in some of the qualitative experiments, the results are surprising. The authors should clarify in more detail how the training procedure is performed. Are test images included in the training of the convolutional networks?", "# Summary\nThe paper proposes to embed natural images in a latent convolutional space of high dimensionality to obtain a universal image prior. Concretely, each image is embedded as a custom parameter vector of a CNN, which turns random noise into the input of a universal generator network to restore the image in pixel space.\nInference for image restoration is performed by minimizing the energy of a likelihood objective while constraining the latent representation of the restored image to be part of the learned latent space. Experiments for inpainting, super-resolution, and colorization are performed to evaluate the proposed method.\n\n# Positive\nAs mentioned in the paper, I agree that the idea of learning a universal image prior is appealing, since it can be applied to (m)any image restoration tasks without adjustment.\nI am not very familiar with the related work, but if I understood correctly, the paper seems to combine deep latent modeling (GLO, Bojanowski et al., 2018) and deep image priors (Ulyanov et al., 2018). The experiments show good results which qualitatively appear better than those of related methods. A user study also shows that people mostly prefer the results of the proposed method.\nDid you try other standard restoration tasks, such as image denoising or deblurring? 
If not, do you think they would work equally well?\n\n# Limitations\nWhile I agree that a universal image prior is valuable, the paper should (briefly) mention what the disadvantages of the proposed approach are:\n- A limitation (at least as presented) is that the corruption process has to be known analytically (as a likelihood objective) and must be differentiable for gradient-based inference.\n- Furthermore, the disadvantage of the universal prior as presented in the paper is that restoring an image requires optimization (e.g. gradient descent). In contrast, corruption-specific neural nets typically just need a forward pass to restore the image and are thus easier and faster to use.\n\n# Restoration inference\n- How dependent is the restoration result with respect to the initialization? For example, when starting gradient descent with the degraded image vs. a random image.\n- Roughly, how many iterations and runtime is needed for inference?\n- Did you try different optimizers, such as L-BFGS?" ]
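The restoration-by-optimization inference described in this record (freeze the generator, fit a per-image latent ConvNet by gradient descent on a likelihood term for the known corruption) can be sketched as below. All architectures, sizes, and the inpainting mask are toy placeholders, not the paper's networks.

import torch
import torch.nn as nn

g = nn.Sequential(nn.ConvTranspose2d(8, 3, 4, stride=4), nn.Sigmoid())  # frozen decoder stand-in
for p in g.parameters():
    p.requires_grad_(False)

f_phi = nn.Conv2d(4, 8, 3, padding=1)      # per-image latent ConvNet being fit
s = torch.randn(1, 4, 16, 16)              # fixed random noise input
y = torch.rand(1, 3, 64, 64)               # observed corrupted image
mask = (torch.rand_like(y) > 0.5).float()  # known corruption, here an inpainting mask

opt = torch.optim.SGD(f_phi.parameters(), lr=1.0)
for _ in range(200):                       # the responses above report ~1000-2000 steps
    opt.zero_grad()
    loss = ((mask * (g(f_phi(s)) - y)) ** 2).mean()  # likelihood of the known corruption
    loss.backward()
    opt.step()

restored = g(f_phi(s))                     # masked-out regions are filled in by the prior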
[ 7, -1, -1, -1, -1, 6, 7 ]
[ 4, -1, -1, -1, -1, 3, 2 ]
[ "iclr_2019_HJGciiR5Y7", "S1x8opbLaX", "r1xLI0153X", "B1et32fpnX", "r1gXKJZ6nX", "iclr_2019_HJGciiR5Y7", "iclr_2019_HJGciiR5Y7" ]
iclr_2019_HJGkisCcKm
A Universal Music Translation Network
We present a method for translating music across musical instruments and styles. This method is based on unsupervised training of a multi-domain wavenet autoencoder, with a shared encoder and a domain-independent latent space that is trained end-to-end on waveforms. Employing a diverse training dataset and large net capacity, the single encoder allows us to translate also from musical domains that were not seen during training. We evaluate our method on a dataset collected from professional musicians, and achieve convincing translations. We also study the properties of the obtained translation and demonstrate translating even from a whistle, potentially enabling the creation of instrumental music by untrained humans.
accepted-poster-papers
The paper describes a method which, given a music waveform, generates another recording of the same music which should sound as if it was performed by different instruments. The model is an auto-encoder with a WaveNet-like domain-specific decoder and a shared encoder, trained with an adversarial "domain confusion loss". Even though the method is constructed mostly from existing components, the reviewers found the results interesting and convincing, and recommended the paper for acceptance.
test
[ "r1gmLjY3hX", "HygwBMhu2m", "SJxSg-L2C7", "Skl9-YkXAm", "HklwF6N107", "HyxyWaNyRX", "Bkx-JhEJAQ", "HklhboVJRX", "H1lDy33g6m", "BJeNgRt7c7", "r1ecG0vl57" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "public" ]
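A minimal sketch of the domain confusion training described in the abstract above: a classifier C is trained to recover the domain from the shared encoder's latent code, while the encoder is trained adversarially to defeat it. The toy 1-D conv networks, sizes, and plain cross-entropy sign flip are assumptions for illustration, not the paper's exact architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

enc = nn.Conv1d(1, 64, 9, padding=4)   # shared encoder stand-in
clf = nn.Conv1d(64, 6, 1)              # domain classification network C, 6 domains

wav = torch.randn(8, 1, 4096)          # batch of raw waveform excerpts
dom = torch.randint(0, 6, (8,))        # domain label of each excerpt

z = enc(wav)
# C learns to recover the domain from the latent code (encoder held fixed here) ...
c_loss = F.cross_entropy(clf(z.detach()).mean(dim=2), dom)
# ... while the encoder is trained to fool it: the domain confusion term.
e_conf = -F.cross_entropy(clf(z).mean(dim=2), dom)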
[ "\nThe paper proposes a multi-domain music translation method. The model presents a WaveNet autoencoder setting with a single (domain-independent) encoder and multiple (domain-specific) decoders. From the model perspective, the paper builds upon several exciting ideas such as WaveNet and autoencoder-based translation models that can perform the domain conversion without relying on parallel datasets. The two main modifications are the use of data augmentation, the use of multiple decoders (rather than a single decoder conditioned on the output domain identity) and the use of a domain confusion loss to prevent the latent space from encoding domain-specific information. This last idea has also been used in prior work.\n\nTo my knowledge, this is the first autoencoder-based music translation method. While this problem is very similar to that of speaker conversion, modeling musical audio signals (with many instruments) is clearly more challenging. \n\nSummarizing, I think that the contributions in terms of methods are limited, but the results are very interesting. The paper gives an affirmative answer to the question of whether existing models could be adapted to handle the case of music translation, which is of value. The paper would be stronger, in my view, if stronger baselines were included. This would show that the technical contributions are better than alternative methods. Please read below some further comments and questions.\n\nThe authors perform two ablation studies: eliminating data augmentation and the domain confusion network. In both cases, the model without this add-on fails to train. However, it seems to me that different studies are important. \n\nThe paper seems to be missing baselines. The authors could compare their work with that of VQ-VAE. The authors claim that they could not make VQ-VAE work on this problem. The cited work by Dieleman et al. provides some improvements to adapt VQ-VAE to be better suited to the music domain. Did you also evaluate autoregressive discrete autoencoders?\n\nThe proposed method uses an individual decoder per domain. This is unlike other conversion methods (such as the speech conversion studied in VQ-VAE). This modification is very costly and provides a very large capacity. Have you tried having a single decoder which is also conditioned on a one-hot vector indicating the domain? Is it reasonable to expect some transfer between domains, or are they too different? Maybe this is the motivation behind using many decoders. It would be good to clarify. \n\nI understand that the emphasis of this work is on music translation; however, the model doesn't have anything specific to music. In that regard, maybe a way to compare to VQ-VAE is to run the proposed method on the voice conversion task of the VQ-VAE.\n\nHave you tried producing samples using the decoder in an unconditional setting?\n\nThe authors claim that the learned representation is disentangled. Why is this the case? Normally a representation is said to be disentangled if different properties are represented in different (disjoint) coordinates. I might not be understanding what is meant here.\n\nThe loss used by the authors encourages the latent representation to not have domain-specific information. The authors should cite the work [A], which has a very similar motivation. It would be interesting to report the classification accuracy of the classifier to see how much of the domain information is left in the latent codes. 
Is it reduced to chance?\n\nIn Section 3.1, the authors describe some modifications to nv-wavenet. I imagine that this is because it leads to better performance or faster training. It would be good to give some more information. Did you perform ablation studies for these?\n\nRegarding the human lineup experiment (Figure 2 b, c and d): while the listeners fail to select the correct source, many of the domains are never chosen. This could suggest that some translations are consistently poorer than others, or that the translations themselves are poor. This cannot be deduced from this experiment. Have you evaluated this? Maybe it would be better to present pairs of audio clips with a reconstruction and a translation. \n\nWhile I consider the results quite good, I tend to agree with the posted public comment. It is very hard to claim that the model is effectively transferring styles. A perceptual test should include the question: is this piece in this given style? As the authors mentioned, it is clearly very difficult to evaluate generative models. But maybe the claims could be toned down.\n\n[A] Louizos, Christos, et al. \"The variational fair autoencoder.\" arXiv preprint arXiv:1511.00830 (2015). ", "This paper talks about music translation using a WaveNet-based autoencoder architecture. The models are trained on diverse training sets and evaluated under multiple settings. What is reported in this paper seems to be interesting, and the performance sounds good. However, I have the following comments/concerns. \n\n1. The paper is not clearly written. Its exposition needs significant improvement. There are numerous inconsistent definitions and vague descriptions that make the reading somewhat difficult. \n a) It would be very helpful if the authors could put up a figure for the description of the WaveNet autoencoder instead of just using words in Section 3.1.\n b) The paper itself should be self-contained instead of referring readers to other references for the details of model architectures.\n c) The math symbols are poorly defined. What is the definition of C in Section 3.3? It is defined or referred to as \"domain classification network\" and also \"domain confusion network\" but is nowhere to be found in Fig. 1.\n d) \"C is minimizes\" -> \"minimizes\"\n e) In Section 4, it says that \"Each batch is first used to train the adversarial discriminator\". Which adversarial discriminator? Where can it be found in Fig. 1, as it is the only description of the network architecture? \n\n2. The authors mentioned a couple of observations that were left unanswered. \n a) I am surprised to see that without data augmentation, the training does not even converge. \n b) The conversion from unseen domains is more successful than from the learned domains.\n c) The decoder starts to be creative when the size of the latent space is reduced. \n I sense that these observations seem to point to some (serious) generalization issues of the proposed model. I would like to hear explanations from the authors. \n\n\nAfter reading the rebuttal:\nThe authors have addressed my major concerns with regard to this paper. I have lifted my score. Thanks for the nice response.", "Thanks to the authors for the response, which is nicely written with some follow-up experiments and explanations that clear up my major concerns regarding this paper. I am willing to raise my score. ", "I'm happy to see the additional experiments on voice conversion, and I think it is commendable that the authors went through the trouble of adding these. 
They sometimes demonstrate some of the claims made in the paper more clearly, so this is a valuable addition (as is the comparison with a VQ-VAE baseline). I'm also happy to hear that the code will be released in full!\n\nI hope the other reviewers will take these extensive additions into account and consider revising their reviews accordingly.", "We are sorry for the inconsistencies that were identified in our writing. In the revised version, we made the necessary corrections and addressed all requests for elucidation. \n\nReferring to the issues marked by (a)--(e) in the review:\n(a) We added a figure depicting the architecture of the wavenet autoencoder. \n(b) We added architectural details on the wavenet autoencoder.\n(c) We now term the network C “the domain classification network” and the loss term “the domain confusion term”.\n(d) Fixed.\n(e) By the adversarial discriminator, we meant the network C. The figure is updated.\n\nThe review points to three generalization-related phenomena that were mentioned in the paper, which seem unintuitive to the reviewer, who asks for clarifications. \n\n(a) Training without data augmentation: While the idea of a denoising autoencoder is not new, this might be the first time in which it is demonstrated that without the added noise, training would fail. To show that this is indeed the case and not an issue with the experiments, we have trained with and without this term for the problem of voice conversion. As noted by R1, this problem is easier than music conversion, and it is, therefore, a convenient testbed.\n\nThe details are provided in the new Appendix A, where we also report the MOS score of each model, as obtained with the CrowdMOS package. As can be seen, our method, trained with data augmentation, is better at converting a previously unseen speaker.\n\nIn music, we verified again that training without augmentation is unstable. Specifically, without it, we were unable to balance between the domain classification network C and the encoder E, since E is very strongly inclined to memorize the data.\n\n(b) Unseen domains translated better than seen ones: While the training domains used are all classical music, the out-of-domain examples are more exotic (Swing Jazz, metal guitar riffs, and instrumental Chinese music). Translation, in this case, is often less expected and seems to be more impressive to humans. We hypothesize that this is the reason why, under this scenario, human raters tend to provide a higher translation-success MOS for our output.\n\n(c) Bottleneck effect on the latent space: As requested by the reviewer, we performed an ablation study on the latent code size. Samples are available as a new supplementary, S6, in which we convert a simple MIDI clip to the Beethoven domain and the Bach domain. \n\nAs can be heard, a latent dimensionality of 64 tends to reconstruct the input (unwanted memorization). A model with a latent space of 8 (used in S2) performs well. A model with a latent dimensionality of 4 is more creative, less related to the input MIDI, and also suffers from a reduction in quality.\n", "Thank you for your very constructive comments.\n\nIn the revised version, we directly compare our method to the VQ-VAE baseline on the related task of voice conversion, which, as you noted, is somewhat easier than music conversion and is also where VQ-VAE was applied. It is, therefore, a convenient test bed when comparing to the VQ-VAE method, which did not perform well in our music-based experiments. 
As can be seen in Tab. 2 of Appendix A, samples generated by our method are of higher quality than those of VQ-VAE.\n\nThe idea of using a single decoder, which is conditioned on the output domain, is very attractive and has not escaped us. In addition to training fewer networks, it would also support performing domain arithmetic, i.e., creating new domains by combining the conditioning parameters. We started with such experiments as soon as the multi-decoder architecture started to produce results and invested a considerable effort into this. \n\nUnfortunately, our single-decoder network was never shown to be viable. We attempted multiple conditioning methods, gradually adding to the amount of conditioning from the last layers to earlier and earlier layers. In all cases, the network ignored the domain-conditioning parameters. Our hypothesis is that when trained using teacher forcing, the information on the domain is easily obtained from the previous time frames, causing this neglect. We tried to overcome this by introducing additional loss terms but were not yet successful.\n\nDisentanglement: We thank the reviewer for making this point, and this is now clarified in the revised version. We meant to say that the representation disentangles (makes independent, detaches) the domain information from the other modeling aspects.\n\nWe now cite [A], which indeed addresses a similar motivation. Their solution is based on the Maximum Mean Discrepancy (MMD), which we do not use.\n\nThe paper was updated with Fig. 4, depicting the discriminator confusion and accuracy. As can be seen in the confusion matrices, the domain classification network does not do considerably better than chance when the networks converge.\n\nModifications to nv-wavenet: the NVIDIA kernels implement the architecture of BAIDU, which performs very well for speech. However, in our experiments (in PyTorch), it did not perform as well as the NSynth architecture, which we ended up using. We did not find a way to fix this without changing the architecture. Since we did not continue training with the BAIDU architecture, we feel uncomfortable executing a full-blown MOS experiment on this. We are releasing our modified kernels in order to provide the additional option to other researchers.\n\nLineup experiments: The domain for which the MOS is highest among the six domains is the Mozart symphony domain, which is often selected when the origin is falsely identified. However, audio fidelity does not fully explain the preference of one domain over another in our lineup experiments. The next-best domain, Bach organ (#4), achieved a very similar MOS to Mozart symphonies (0.06 difference), yet it was chosen less often than Bach cantatas (#3). #3 achieved the second-lowest MOS. We hypothesize that there are many biases at play. One reason might be the relative complexity: due to the way computer-generated music is perceived, people tend to assume that “simpler” domains are the outcome of the translation. Additionally, when we output to domains that contain singing, it is sometimes possible to notice that the singing is not comprised of words. \n\nReviewer: While I consider the results quite good, I tend to agree with the posted public comment. It is very hard to claim that the model is effectively transferring styles. A perceptual test should include the question: is this piece in this given style? As the authors mentioned, it is clearly very difficult to evaluate generative models. 
But maybe the claims could be toned down.\n\nAuthors: Since our training data came from a limited number of styles (especially now that we moved to publicly available data), the question of which domain the generated audio belongs to would be answered with perfect accuracy. This, however, should be credited to WaveNet more than to us: the decoders are able to generate distinguishable music in the given domains.\n\nIf the reviewer means that we should ask “does this piece constitute a perfect, artifact-free sample from this domain”, then this is similar to what we try to answer with the quality MOS experiments. \n", "We thank the reviewer for the thorough review and kind comments.\n\nWe have corrected the writing inconsistency of autoregressive and the “out off tune” typo.\n\nSince a single decoder, in addition to training fewer networks, would also support the creation of new domains by changing the domain-specific parameters, we invested quite a lot in this direction. Unfortunately, our single-decoder network was never shown to be viable. We attempted multiple conditioning methods, gradually adding to the amount of conditioning from the last layers to earlier and earlier layers. In all cases, the network ignored the domain-conditioning parameters. Our hypothesis is that when trained using teacher forcing, the information about the domain is easily obtained from the previous time frames, causing the observed neglect of the domain conditioning. We tried to overcome this by introducing additional loss terms but were not yet successful.\n\nIn the revised version, we directly compare our method to the VQ-VAE baseline on the related task of voice conversion. As noted by R1, voice conversion, which is where VQ-VAE was applied before, is not as challenging as music conversion of the type we perform. It is, therefore, a convenient test bed when comparing to the VQ-VAE method, which did not perform well in our music-based experiments.\nThe reviewer also mentioned training without augmentation, which, as reported, failed in music. Here, too, voice conversion is a good testbed since it is somewhat easier. We were able to successfully train our network on this task with and without data augmentation.\nBoth voice conversion experiments are reported in Appendix A of the revised manuscript. The advantage of using pitch augmentation is clear, and so is the performance gain in comparison to VQ-VAE.\n \nOur entire code, including the modified nv-wavenet inference kernels, will be released in full.\n\nAmazon Turk workers were biased to always pick the same domain as the original source. We have made this clearer in the revision.\n\nBlending experiments: We updated our blending experiments following your review. Following the comment, we added to S5 blending done in the WAV domain. Indeed, as you anticipated, for music conversion the differences are small. Still, latent blending probably adds the ability to blend inputs from two different domains (untested).\n\nIn order to emphasize the difference between WAV-domain and latent-domain blending, we added blending experiments on the voice conversion task. The blended voice samples show a clear difference between the two. Samples blended in WAV space exhibit a “cross-fading” effect, i.e., a dominant speaker and a quiet speaker are heard simultaneously. In contrast, blending in the latent space creates the effect of natural-sounding mumbling by a single speaker.\n", "\nWe thank our colleagues for the detailed comments and for their support. 
All reviewers seem to agree that the reported results are interesting and performance is good. \n\nVQ-VAE\n=======\nWhile we report unsuccessful efforts at employing the VQ-VAE method, both R1 and R3 would like to understand this more. As noted by R1, voice conversion, which is where VQ-VAE was applied before, is not as challenging as music conversion of the type we perform. It is, therefore, a convenient testbed when comparing to the VQ-VAE method, which did not perform well in our music-based experiments, and which was shown by the authors to work on voice conversion.\n\nThe experiments in which we compared our method to VQ-VAE were performed on three publicly available datasets: “Nancy” from Blizzard 2011, Blizzard 2013, and the LJ dataset. We used out-of-domain source samples, created by converting the Google Cloud TTS robot to these three voices (which effectively creates a TTS pipeline for these voices). The models are evaluated by their quality using the Mean Opinion Score. As can be seen in the table below, samples generated by our method are of higher quality than those of VQ-VAE.\n\nThe reviewers were also curious about training without augmentation, which, as reported, failed in music. Here, too, voice conversion is a good testbed since it is somewhat easier. For this task, we were able to successfully train our network with and without data augmentation. As can be seen in the table, our method -- trained with data augmentation -- is better at converting a previously unseen speaker.\n\n+-------------------------------+--------------+--------------+--------------+\n|                               | Blizzard2013 | Nancy        | LJ           |\n+-------------------------------+--------------+--------------+--------------+\n| Our method                    | 3.16+-0.79   | 3.85+-0.84   | 3.40+-0.77   |\n| Our method - w/o augmentation | 3.07+-0.79   | 3.87+-0.85   | 2.85+-0.92   |\n| VQ-VAE                        | 2.53+-0.95   | 2.92+-0.92   | 2.22+-0.96   |\n+-------------------------------+--------------+--------------+--------------+\n\nAdditionally, following the reviews, we updated our supplementary (MusicTranslation.github.io) with:\n1. An ablation study on the size of the latent code, as requested by R2 (added as S6).\n2. Samples generated via Unconditional Generation, as requested by R2 (added as S7).\n3. Blending samples for voice and blending in the WAV domain (added to S5), following a comment by R3.", "A method is presented to modify a music recording so that it sounds like it was performed by a different (set of) instrument(s). This task is referred to as \"music translation\". To this end, an autoencoder model is constructed, where the decoder is autoregressive (WaveNet-style) and domain-specific, and the encoder is shared across all domains and trained with an adversarial \"domain confusion loss\". The latter helps the encoder to produce a domain-agnostic intermediate representation of the audio.\n\nBased on the provided samples, the translation is often imperfect: the original timbre often \"leaks\" into the output. This is most clearly audible when translating piano to strings: the percussive onsets of the piano (due to the hammers hitting the strings) are also present in the translated audio, even though instruments like the violin and the cello are not supposed to produce percussive onsets. 
This gives the result an unusual sound, which can be interesting from an artistic point of view, but it is undesirable in the context of the original goal of the paper.\n\nNevertheless, the results are quite impressive and for some combinations of instruments/styles it works surprisingly well. The question of whether the approach is equivalent to pitch estimation followed by rendering with a different instrument is also addressed in the paper, which I appreciate.\n\nThe paper is well written and the related work section is comprehensive. The experimental evaluation is thorough and extensive as well (although a few potentially interesting experiments seemingly didn't make the cut, see other comments). I also like that the authors went through the trouble of doing some experiments on a publicly available dataset, to facilitate reproduction and future comparison experiments.\n\n\nOther comments:\n\n* \"autoregressive\" should be one word everywhere\n\n* In section 2 it is stated that attempts to use a unified decoder with style/instrument conditioning all failed. I'm curious about what was tried specifically; it would be nice to discuss this.\n\n* The same goes for experiments based on VQ-VAE: the paper simply states that they were not able to get this working, but not what experiments were run to come to this conclusion.\n\n* The authors went through the trouble of modifying the nv-wavenet inference kernels to support their modified architecture, which I appreciate -- will the modified kernels be made available as well?\n\n* The audio augmentation by pitch shifting is a surprising ingredient (but according to the authors it is also crucial). Some more insight as to why this is so important (rather than simply stating that it is important) would be a welcome addition.\n\n* Section 3.2: \"out off tune\" should read \"out of tune\".\n\n* The formulation on p.7, 2nd paragraph is a bit confusing: \"AMT freelancers tended to choose the same domain as the source, regardless of the real source and the presentation order.\" Does that mean they got it right every time? I suspect that is not what it means, but that is how I read it initially.\n\n* I don't quite understand the point of the semantic blending experiments. As a baseline, the same kind of blending in the raw audio space should be done; I suspect it would probably be hard to hear the difference. This is how cross-fading is already done in practice, and it isn't clear to me why this method would yield better results in that respect. The paper is strong enough without them so these could probably be left out.", "Thank you very much for your detailed comment!\n\n1. Augmentation code snippet is below.\n\nimport random\n\nimport librosa\nimport numpy as np\n\nclass WavFrequencyAugmentation:\n    # Randomly pitch-shifts one contiguous segment of the input waveform.\n    def __init__(self, wav_freq, magnitude=0.5):\n        self.magnitude = magnitude  # maximum pitch shift, in half-steps\n        self.wav_freq = wav_freq    # sample rate of the waveforms\n\n    def __call__(self, wav):\n        length = wav.shape[0]\n        # Pick a random segment covering between 1/4 and 1/2 of the clip.\n        perturb_length = random.randint(length // 4, length // 2)\n        perturb_start = random.randint(0, length // 2)\n        perturb_end = perturb_start + perturb_length\n        # Draw a shift uniformly from [-magnitude, magnitude] half-steps.\n        pitch_perturb = (np.random.rand() - 0.5) * 2 * self.magnitude\n\n        ret = np.concatenate([wav[:perturb_start],\n                              librosa.effects.pitch_shift(wav[perturb_start:perturb_end],\n                              self.wav_freq, pitch_perturb),\n                              wav[perturb_end:]])\n\n        return ret\n\n2. We picked classical music because we feel that it is more straightforward to define and evaluate translating one domain to another within classical music. 
\n\nWe do not focus on single instruments and, in our experience, multi-instrument domains do not pose a challenge. Our second network is trained on more single-instrument domains than our previously trained network, and that is because these were the domains found in the open MusicNet. \n\nIn particular, our first network was trained on three multi-instrument domains out of six total:\n(i) Mozart's symphonies conducted by Karl B\"ohm, (ii) Haydn's string quartets, performed by the Amadeus Quartet, and (iii) J.S Bach's cantatas for orchestra, chorus and soloists. The second network was trained on two (out of six) domains that are multi-instrument: (iii) Cambini's Wind Quintet, (vi) Beethoven's string quartet.\n\nIt would be interesting to train on Pop CDs; however, it requires proper screening in order to separate into relatively homogeneous styles. It would also be interesting to train on non-classical instrumental music, such as Jazz.\n\n3. A. Human lineup experiment -- The goal in this experiment is to evaluate the believability of our translations. With perfect conversions, it would be impossible to identify which is the source material and which is the translated output.\n3. B. NSynth -- The correlation visualization serves to show that our encoder produces semantic embedding vectors when given out-of-training music data as input. A similar text exists in https://arxiv.org/abs/1711.00937 section 4.3 bottom, where an extremely simple phoneme classification scheme is built on top of the latent encodings, and its accuracy is measured. Our visualization shows that musical content is more prominent within the latent encodings than domain-specific content, at least as far as the correlation operator is able to detect.\n\n4. It is true that our current system does not seem to change note timing. However, this does not mean that it does no more than timbral transfer. Changing only the timbre means keeping the pitches as they are; a system that modifies the pitches, e.g. by adding more musical notes, cannot be said to do timbral transfer only. We show many samples where there are more notes in the output than in the input. See sample #3, a cello converted to violin and piano. The cello music is converted to piano, and violin notes are added. In sample #9, piano to cello, the right-hand piano part is discarded, and the remainder is converted to cello. Since our system can be observed to add and remove musical notes, it cannot be said to only be doing timbral transfer.\n\nOf course, there are limits to our ability to mimic the output domains exactly, particularly in out-of-training-distribution domains such as the opera sample, since there are no singers in MusicNet.\n\n5. We tried to be very clear in the paper about what we mean by universality, as is also reflected in the comment. Informally, it means that the network can take any audio input and return an output that is both highly relevant to the input and is in the desired output domain. We demonstrate this ability on extreme inputs. \n\nThe question raised in the comment is how to evaluate this ability, which is a major concern in all generative work, not just in music, and especially in perceptual domains such as vision, voice, etc. We could answer this using MOS scores (let us know if needed, we'd be happy to collect), but without access to good baselines to compare against, we are confident enough with the results to simply let the readers decide for themselves.\n", "Thanks for the nice work. It seems an obvious improvement to me. 
I also appreciate the detailed subjective evaluation. I have several questions and comments, below.\n\n1. In Section 3.2, what does \"and modulate its pitch by a random number between -0.5 and 0.5 of half-steps\" mean precisely?\n\n2. In Section 4. Experiments, I would like to hear why all the domains are classical music. Most of each domain consists of a single instrument, which would be nice to mention in the paper, because it is probably easier to learn a certain instrument than to learn the styles of composers; this is worth elaborating on. \n\n3. In Section 4.1 Lineup experiment -- \"we evaluate the ability of persons to identify the source musical segment from the conversions.\" and also the NSynth pitch experiments → What are the goals of these evaluations? What would be good/bad? What is expected/hoped? \n\n4. In Section 4.2 Are we doing more than timbral transfer? -- \"or can it capture stylistic musical elements\" → Unlike the former phrase (\"pitch estimation followed by rendering...\"), this is quite unclear; 'more than timbral transfer' can mean a wide variety of changes; therefore a clear description would be helpful here. \nAn apparent 'style' transfer that is beyond timbral transfer would involve some changes over time, e.g. rhythmic change. However, the observed cases mentioned in the same paragraph sound like they are still limited to changes along the frequency axis, which would not be too wrong to (somewhat roughly) describe as timbral transfer (which is still not bad at all). \n\nActually, regarding \"There are many samples where it is clear that more than timbral transfer is happening.\", there are also samples where the network is merely achieving the \"pitch estimation followed by rendering...\". For example, in S1 - Opera-to-Bach_solo_piano, the network fails to capture the onsets of notes (there are many more notes in the transferred example). More importantly, it does not sound like \"Bach piano\", it only sounds like \"piano\". I think this is a major difference and should be mentioned somewhere in the paper.\n\n5. In Section 4.2 Universality -- Without further analysis, I don't think we can claim the network has universality in transferring from an unseen instrument (or sound, e.g. clapping); that we have some 'result' from an unseen instrument doesn't make it universal. It would be very hard to define what is desired in the output when the input is something unseen (and clapping could be an extreme case which we might not even need to consider); so how could we interpret it as a proof of universality? The experiment was done with somewhat *universal input types*, which is really interesting. But the result doesn't seem to justify any universality of the transfer.\n" ]
[ 7, 6, -1, -1, -1, -1, -1, -1, 8, -1, -1 ]
[ 4, 4, -1, -1, -1, -1, -1, -1, 4, -1, -1 ]
[ "iclr_2019_HJGkisCcKm", "iclr_2019_HJGkisCcKm", "HklwF6N107", "Bkx-JhEJAQ", "HygwBMhu2m", "r1gmLjY3hX", "H1lDy33g6m", "iclr_2019_HJGkisCcKm", "iclr_2019_HJGkisCcKm", "r1ecG0vl57", "iclr_2019_HJGkisCcKm" ]
iclr_2019_HJGven05Y7
How to train your MAML
The field of few-shot learning has recently seen substantial advancements. Most of these advancements came from casting few-shot learning as a meta-learning problem. Model Agnostic Meta Learning or MAML is currently one of the best approaches for few-shot learning via meta-learning. MAML is simple, elegant and very powerful; however, it has a variety of issues, such as being very sensitive to neural network architectures, often leading to instability during training, requiring arduous hyperparameter searches to stabilize training and achieve high generalization, and being very computationally expensive at both training and inference times. In this paper, we propose various modifications to MAML that not only stabilize the system, but also substantially improve the generalization performance, convergence speed and computational overhead of MAML, which we call MAML++.
accepted-poster-papers
This paper proposes several improvements for the MAML algorithm that improve its stability and performance. Strengths: The improvements are useful for future researchers building upon the MAML algorithm. The results demonstrate a significant improvement over MAML. The authors revised the paper to address concerns about overstatements. Weaknesses: The paper does not present a major conceptual advance. It would also be very helpful to present a more careful ablation study of the six individual techniques. Overall, the significance of the results outweighs the weaknesses. However, the authors are strongly encouraged to perform and include a more detailed ablation study in the final paper. I recommend accept.
train
[ "Byg5xz2rCX", "Hkg2spUSRQ", "H1eHVoEaaX", "H1ewhME6TQ", "BklkMRGpam", "Skg14bX92X", "rJg-0D-927", "rJxOlkMPh7", "HJgieSXZ5X", "rJlF4Lf-97", "HkgRQ8EAtQ", "rke46mAatQ" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public", "author", "public" ]
[ "Thanks for your prompt response. I think your point about the _automation_ of things is correct. I will amend the paper to be more precise in that claim as per your request. Regarding the automation of additional parts of the system, I am currently working on that, but it felt like it exceeded the scope of this paper hence breaking it into smaller easier to digest papers, that tackle one thing at a time. In my experience, papers that try to do too many things at once are often incredibly hard to write, and even harder to read. \n\nI will modify the particular claim shortly. Thanks for your time.", "< The alpha also includes a sign. >\nOk that makes sense. It might be worth adding a sentence that says this (if there isn't one already).\n\n< Thus, random initialization suffices for that aspect, which does reduce the need for explicitly choosing a learning rate. >\nOk, so you're saying that one of maml's hyperparameters is now a set of less-sensitive hyperparameters. That sounds useful, but it's very different from from the claim you make in the paper that maml++ gives \"automatic learning for most of the system’s hyperparameters\". There are two problems with this claim\n\n1.) As far as I see, the only thing you've _automated_ is the setting of the inner loop learning rate, and in so doing you added more hyperparameters that need to be set. It's good they're not so sensitive, but they still have to be set. It's also good that your settings make the system overall easier to optimize, but that's not the same as automation.\n2.) You haven't gotten rid of \"most\" of the hyperparameters. There's still the outer loop learning rate and the other optimizer hyperparameters (e.g. \\beta_1 and \\beta_2 in Adam). In the most generous interpretation, you've made half of the hyperparameters less sensitive. Additionally, all of the architecture hyperparameters e.g. number of layers, number of units per layer, etc etc still need to be set by the user.\n\nOverall, this seems to me like a significant over-claiming issue. Replacing the language about \"automating most hyperparameters\" with something about \"reducing inner loop hyperparameter sensitivity\" would be sufficient.", "Thank you for taking the time to review our paper. Before I start delving into the technical aspects of this response. To address your concerns, I will use an enumeration that matches the indexes of your concerns.\nThe paper is indeed targeted towards a particular class of algorithms. That class being end-to-end differentiable gradient-based meta-learning. MAML and Meta-learner LSTM [1] are two instances of that particular class of algorithms. Our proposed techniques can be applied to any algorithm of that class, given that they utilize inner-loop optimization processes as part of their learning. So, even though this work is indeed targeted towards a particular class of models, that class is general enough and applicable to enough domains that we felt that an investigation of the type presented in this paper was necessary. In fact, the work in this paper was the result of the first author’s attempts to build systems that learn various other components (i.e. instead of just learning a highly adaptable parameter initialization, he was attempting to learn loss functions/update functions and dynamic generation of parameter initializations given a task among others). 
What he realized, however, was that MAML was really hard to actually work with, being very inflexible to architecture configuration, causing gradient degradation problems, instability in training and requiring lots of manual inner loop learning rate tuning. In attempting to fix those problems, so he could build more complicated systems on top, this paper came to be. \nIn MAML, the resulting inference model is effectively an unrolled 5-layer network over N steps. If N=5, then the resulting model has a depth of effectively 25 layers. In standard deep networks, gradient degradation can be greatly reduced or altogether removed via the usage of skip-connections. Since in MAML we can't really apply skip-connections from a subsequent model to a previous one (because that would further complicate the gradients), we decided that the best way to inject clean/stable gradients into all iterations of the network would be to use 2 losses for each step-wise network. One loss, providing an implicit gradient, coming from subsequent iterations of the network (i.e. the original MAML loss), and another per-step loss, providing an explicit gradient, coming directly from evaluating the model on the target set. This way, every network iteration receives stable gradients, which keep the network stable during early epoch training. Eventually, the importance of earlier steps becomes 0, which means that the original MAML loss is used instead. However, since the network has already learned a stable parameterization, the stability remains throughout training (we empirically confirmed this).\nWe conducted an ablation study on 20-way 1-shot Omniglot, as shown in table 2. We did want to conduct even more exhaustive ablation studies across all Omniglot and Mini-Imagenet tasks; however, due to computing constraints we had to restrict ourselves. Using the "hardest" Omniglot 20-way 1-shot task as the ablation study's subject seemed like a sensible thing to do, since it was cheaper computationally, but "hard" enough for the results to generalize well to other tasks.\nIndeed, annealing various components is not as novel as some of the other proposals in the paper. However, since this paper was essentially an engineer's handbook on how to train MAML-like models, we felt that people should be aware of the effect those techniques have on the system's performance.\nIndeed, there is other literature on meta-learning learning rates. Our approach's novelty lies in learning "per-step" and "per-layer" learning rates. By being able to learn per-step learning rates, we allow the network to choose to decrease or increase its learning rates at each step, to minimize overfitting. Another interesting phenomenon, which we will address in a future blog post, is the fact that across all networks, we noticed that particular layers choose to "un-learn" (flipping the direction of the learning rate) at particular steps. We theorize that the network might be attempting to remove some existing knowledge to replace it with new knowledge, or using forgetting as a way to steer gradients for more efficient learning.\n\nRegarding the minor concerns, yes, we will fix the referencing inconsistencies and the batch size indexing problem.\n\nOnce again, I want to thank you for taking the time to review our work.\n\n1. Ravi, S. and Larochelle, H. (2016). Optimization as a model for few-shot learning.\n", "Thanks for taking the time to review our paper. Further thanks for your very detailed, useful and constructive comments. 
We will now address your concerns below in the same order they were made:\n\nWe claim that we reduce the hyperparameter choices needed because, once our methodologies are applied exactly as proposed, the resulting system will achieve very high generalization and fast convergence without any additional tuning. We have attempted to initialize the learning rates from a random uniform distribution (ranging from 0.1 to 0.01) in addition to initializing manually. Both methods, interestingly, converge to very similar learning rates. Thus, random initialization suffices for that aspect, which does reduce the need for explicitly choosing a learning rate.\nRegarding the gradient directions: the alpha also includes a sign. In other words, the alpha also learns the direction of the learning rate, hence our claim. In fact, an interesting finding is that, in specific steps and layers, the network chooses to "unlearn" or flip the sign of the learning rate. Further investigation is required to understand this behavior, but a current working hypothesis is that the network is trying to "forget" particular parts of its weights, which somehow produces more efficient learning in subsequent steps. We will further expand on this in a future blog post. \n\nAll of your suggestions and typo locations are spot-on and we will take care to address all of those in the final version of the paper. Again, we really thank you for providing such a detailed and constructive review.\n\n", "Thank you for your review. \n\nRegarding the conceptual and technical novelty concerns:\n\nTo clarify, our main contribution comes in the form of carrying out an investigation of how MAML can be stabilized and how the model can be modified such that it can consistently achieve faster convergence and strong generalization results without any hyperparameter tuning required. Then, once the investigation is completed and key problem areas isolated, we use our investigation insights to improve the system. In fact, the whole reason for doing this was because we attempted to build new research ideas on top of MAML, only to find out just how sensitive and unstable the system was. Therefore, we decided that finding the issues and fixing them would enable researchers working on gradient-based end-to-end meta-learning, such as MAML or Meta Learner LSTM [1], to concentrate on the new approach they want to build rather than trying to overcome instability issues of the base methodology. Furthermore, the industry would also benefit from this, as they would have an easier time training MAML-based models. \n\nMost of the proposed approaches are novel and non-obvious (i.e. LSLR, BNWB+BNRS, and multi-step loss optimization). Overcoming gradient degradation issues by utilizing multi-step target-loss optimization which is annealed over time is, to our knowledge, done for the first time in this work. Furthermore, we provide novel contributions in the form of learning things "step-by-step".\n\nFor example, we propose that learning per-layer, per-step learning rates would benefit the system, more so than just learning per-layer learning rates and sharing them. The reason is that the model would be free to choose to decrease its learning rate or otherwise change it from step to step to reduce overfitting. This technique is both novel and non-obvious. Furthermore, LSLR is not something that is possible in standard deep learning, as learning the learning rates would require an additional level of abstraction (thus entering the meta-learning arena). 
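To make the LSLR and multi-step loss discussion above concrete, here is a minimal sketch of the kind of inner loop being described. It is an illustrative toy example, not the code used in the paper: the two-layer model, the loss-weight annealing schedule and all tensor shapes are assumptions made purely for the sketch.

import torch
import torch.nn.functional as F

N_STEPS = 5

# Toy two-layer model; layer names index the learned per-layer, per-step alphas.
theta = {"w1": torch.randn(20, 8, requires_grad=True),
         "w2": torch.randn(8, 1, requires_grad=True)}

# LSLR: one learnable alpha (a learning rate, including its sign) per layer per inner step.
alphas = {name: torch.full((N_STEPS,), 0.01, requires_grad=True) for name in theta}

def per_step_loss_weights(epoch, anneal_epochs=100):
    # Start near-uniform and anneal towards putting all weight on the final step,
    # so the original MAML loss eventually dominates (schedule is an assumption).
    w = torch.full((N_STEPS,), 1.0 / N_STEPS)
    w = w * max(0.0, 1.0 - epoch / anneal_epochs)
    w[-1] = 1.0 - w[:-1].sum()
    return w

def forward(params, x):
    return torch.relu(x @ params["w1"]) @ params["w2"]

def inner_loop(x_support, y_support, x_target, y_target, epoch):
    params = dict(theta)
    weights = per_step_loss_weights(epoch)
    total_loss = 0.0
    for step in range(N_STEPS):
        support_loss = F.mse_loss(forward(params, x_support), y_support)
        grads = torch.autograd.grad(support_loss, list(params.values()), create_graph=True)
        # Each layer uses its own learned alpha for this particular step.
        params = {name: p - alphas[name][step] * g
                  for (name, p), g in zip(params.items(), grads)}
        # Multi-step loss: the target loss after every step contributes a direct,
        # explicit gradient, weighted by the annealed per-step weight.
        total_loss = total_loss + weights[step] * F.mse_loss(forward(params, x_target), y_target)
    return total_loss

# Calling total_loss.backward() and stepping an outer-loop optimizer over both
# theta and alphas would then meta-learn the initialization and the learning rates.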
\n\nAnother contribution with significant novelty comes in the form of proposing a step-by-step batch norm variant, designed for meta-learning systems that require inner loop optimization. Learning batch norm parameters for every step, as well as collecting per-step running statistics, speeds up the system and allows batch normalization to truly work in this setting, whereas the previously used variant of batch norm constrained things further, instead of achieving the improved convergence and generalization that batch norm can achieve in standard deep learning training setups. \n\nThe rest of the contributions, such as annealing the derivative order and using cosine scheduling for Adam, are less novel, but nonetheless important to investigate. We show from our experiments that those approaches can improve the system, something which was previously unconfirmed. \n\nThe comparative performance (between MAML and MAML++), both in convergence speed and final generalization, is significant and produces state-of-the-art results. Furthermore, that performance is achieved far more consistently and with more stability across architectures. We hold the belief that the community would really benefit from this work, which is why we submitted it.\n\n1. Ravi, S. and Larochelle, H. (2016). Optimization as a model for few-shot learning.\n", "[Summary]\nThis work presents several enhancements to the established Model-Agnostic Meta-Learning (MAML) framework. Specifically, the paper starts by analyzing the issues in the original implementations of MAML, including instability during training, costly second order derivatives evaluation, missing/shared batch normalization statistics accumulation/bias, and learning rate setting, which cause unstable or slow convergence and weak generalization. The paper then proposes solutions corresponding to each of these issues, and reports improved performance on benchmark datasets. \n\nPros\nGood technical enhancements that fix some issues of a popular meta-learning framework\nCons\nLittle conceptual and technical novelty \n\n[Originality]\nThe major problem I found in this work is the lack of conceptual and technical novelty. The paper basically picks up some issues of the well-established MAML framework, and applies some common practices or off-the-shelf technical treatments to fix these drawbacks and improve the training stability, convergence, or generalization, etc. E.g., it seems to me that the most effective enhancement comes from the adoption of the learning rate setting (LSLR), or the variant version of batch normalization (BNWB+BNRS) in Table 1, which have been standard tricks to improve performance in the deep learning literature. Overall, the conceptual originality is little. \n\n[Quality]\nThe paper does get most things well executed from the technical point of view. There do not seem to be any major errors to me. The results reported are also reasonable within the meta-learning context, despite the lack of originality. \n\n[Clarity]\nThe paper is generally well written and I did not have much difficulty following it. \n\n[Significance]\nThe significance of this work is marginal, given the lack of originality. The technical enhancements presented in the paper, however, may be of interest to people working in this area. \n", "In this work, the authors improve a simple yet effective meta-learning algorithm called Model Agnostic Meta-Learning (MAML) from various aspects, including training instability, batch normalization, etc. 
The authors first point out the issues in MAML training and tackle each of the issues with a practical alternative approach, respectively. The few-shot classification results show convincing evidence.\n\nSome major concerns:\n1. The paper is too specific about improving one algorithm; the scope of the research is quite narrow and I'm afraid that some of the observations and proposed solutions might not generalize to other algorithms;\n2. Section 4, \"Gradient Instability → Multi-Step Loss Optimization.\" I don't see clearly why the multi-step loss would lead to stable gradients. It causes many more gradient paths than the original version. I do see the point of weighting the losses from different steps;\n3. The authors should have conducted a careful ablation study of each of the issues and solutions. The six proposed improvements may make the performance boost hard to understand. It would help to see which of the proposed improvements contribute more than others;\n4. Many of the proposed improvements are essentially utilizing annealing mechanisms to stabilize the training, including 1) annealing the weighting of the losses from different steps; 2) annealing the second derivative to the first derivative;\n5. For the last two improvements about the learning rate, there are dozens of papers on meta-learning learning rates and the proposed approach does not seem to be novel; \n \nMinors\n1. The reference style is inconsistent across the paper; sometimes it feels quite messy. For example, \"Batch Stochastic Gradient Descent Krizhevsky et al. (2012)\" \"Another notable advancement was the gradient-conditional meta-learner LSTM Ravi & Larochelle (2016)\";\n2. In Equations (2) and (3), the index b should start from 1 and run to B;\n", "Paper summary - This paper provides a bag of sensible tricks for making MAML more stable, faster to learn, and better in final performance.\nQuality - The quality of the work is strong: the results demonstrate that tweaks to MAML produce significant improvements in performance. However, I have some concern that certain portions of the text overclaim (see concerns section below).\nClarity - The paper is reasonably clear, with some exceptions (see concerns section).\nOriginality - The techniques described in the paper range from only mildly novel (e.g. MSL, DA) to very obvious (e.g. CA). Additionally, the paper's contributions amount to tweaks to a previously existing algorithm. \nSignificance - The quality of the results makes this a significant contribution in my view.\nPros - Good results on a problem/algorithm of great current interest.\nCons - Only presents (in some cases obvious) tweaks to a previous algorithm; clarity and overclaiming issues in the writeup.\n\nConcerns (please address in author response)\n- The paper says \"we … propose multiple ways to automate most of the hyperparameter searching required\". I'm not sure that this is true. The only technique that arguably removes a hyperparameter is LSLR. Even in this case, you still have to initialize the inner loop learning rates, so I'm not convinced that even this reduces hyperparameters. Perhaps I've missed something; please clarify.\n- Section 4's paragraph on LSLR seems to say that you have a single alpha for each layer of the network. If this is right, then saying your method has a \"per layer gradient direction\" is very confusing. Each layer's alpha modulates the magnitude of that layer's update vector, but not its direction. 
The per-layer alphas together modify the direction of the global update vector. Perhaps I've misunderstood; equations describing exactly what LSLR does would be helpful. In any case, this should be clarified in the text.\n\nSuggestions (less essential than the concerns above)\n- The write-up is redundant and carries unnecessary content. The paper would be better shorter (8 pages is not a minimum :)\nSection 1 covers a lot of background on the basics of meta-learning that could be skipped. Other papers you cite (e.g. the MAML paper) cover this. \n - Section 2 goes into more detail about e.g. matching nets than is necessary. \n - Section 2 explains MAML, which is then covered in much more detail in Section 3; better to leave out the Section 2 MAML paragraph. \n - Sections 3 and 4 are very redundant. Combine them for a shorter (i.e., better!) paper.\n- The paper says, \"Furthermore, for each learning rate learned, there will be N instances of that learning rate, one for each step to be taken. By doing this, the parameters are free to learn to decrease the learning rates at each step which may help alleviate overfitting.\" Does this happen empirically? Space could be freed up (see above) to have a figure showing whether or not this happens.\n- The paper says, \"we propose MAML++, an improved meta-learning framework\" -- it's a little too far to call this a new framework. It's still MAML, with improvements.\n\nTypos\n- \"4) increase the system's computational overheads\" -> overhead\n- \"composed by\" -> composed of\n- \"Santurkar et al. (2018).\", \"Krizhevsky et al. (2012),\", \"Finn et al. (2017) \" -> misplaced citation parens\n- \"a method that reduce\" -> reduces\n- \"An evaluation ran consisted\" -> evaluation consisted\n- The Loshchilov and Hutter citation in the bibliography isn't right. It should be \"Sgdr: Stochastic gradient descent with restarts.\" (2016) instead of \"Fixing weight decay regularization in adam\" (2017).\n", "Meta-SGD learns alphas of dimensionality equal to the network parameters. Instead, with LSLR we propose learning one alpha for each layer of the network. A component qualifies as a layer if it has learnable weights or biases in it. In addition, instead of just learning a learning rate and direction (alpha) for each layer to be used across all inner loop steps, we propose to learn different alphas for each inner loop step. This allows the network to choose to decay its alphas or otherwise change them, to maximize generalization performance (in some cases we noticed the network choosing to unlearn for some inner loop steps by using a negative learning rate, and to learn in others). So, to summarise, we learn one learning rate and direction for each layer for any given inner loop step. The network we used had 4 CNN layers along with a final softmax. That's a total of 5 layers, but since we learn learning rates for weights and biases separately, this means that the model learns a total of 10 learning rates and directions for any given step. For example, in the case where the model takes 5 inner loop steps, we have a total of 5 x 10 = 50 learning rates and directions, which is represented by 50 learnable parameters in the system. ", "When describing LSLR you likened your method to Meta-SGD, but in Meta-SGD the gradient direction is represented by the optimizer parameters \alpha, which has the same dimensionality as the learner parameters \theta. 
In your method you claim that you reduce computational costs by learning \"per layer per step\" learning rates and directions. Can you please clarify how your directions are represented, if not with the same number of parameters as used in Meta-SGD?\n", "Thanks for your comment. Firstly, I'll reiterate that the main point of the paper is to improve MAML as a model itself. Furthermore, we did a very thorough literature review but missed out on the papers you have stated. The work in our paper had already taken full shape in May, meaning that works 1 and 3 (which came later) escaped our radar. The second paper you mentioned, \"Neural Attentive Meta Learner\", was not included in many of the latest few-shot learning papers that came out in June 2018, thus making it harder for us to be aware of it. We did try to cover everything in the literature prior to starting our work; however, as is often the case, one or two papers might escape one's review. Especially in this field, where papers keep coming out on a daily basis on arxiv. We shall add the approaches you mentioned to our result tables when editing is allowed again. Thank you for informing us of some literature we were previously unaware of.", "https://arxiv.org/pdf/1805.08311.pdf has better Omniglot 5-way results and better Mini-Imagenet 5-way results\nhttps://arxiv.org/pdf/1707.03141.pdf has better Mini-Imagenet 5-way results\nhttps://arxiv.org/pdf/1807.02872.pdf has better Mini-Imagenet 5-way 5-shot results\n" ]
[ -1, -1, -1, -1, -1, 5, 6, 7, -1, -1, -1, -1 ]
[ -1, -1, -1, -1, -1, 3, 5, 4, -1, -1, -1, -1 ]
[ "Hkg2spUSRQ", "H1ewhME6TQ", "rJg-0D-927", "rJxOlkMPh7", "Skg14bX92X", "iclr_2019_HJGven05Y7", "iclr_2019_HJGven05Y7", "iclr_2019_HJGven05Y7", "rJlF4Lf-97", "iclr_2019_HJGven05Y7", "rke46mAatQ", "iclr_2019_HJGven05Y7" ]
iclr_2019_HJMC_iA5tm
Learning a SAT Solver from Single-Bit Supervision
We present NeuroSAT, a message passing neural network that learns to solve SAT problems after only being trained as a classifier to predict satisfiability. Although it is not competitive with state-of-the-art SAT solvers, NeuroSAT can solve problems that are substantially larger and more difficult than it ever saw during training by simply running for more iterations. Moreover, NeuroSAT generalizes to novel distributions; after training only on random SAT problems, at test time it can solve SAT problems encoding graph coloring, clique detection, dominating set, and vertex cover problems, all on a range of distributions over small random graphs.
accepted-poster-papers
The submission proposes a machine learning approach to directly train a prediction system for whether a boolean sentence is satisfiable. The strengths of the paper seem to be largely in proposing an architecture for SAT problems and the analysis of the generalization performance of the resulting classifier on classes of problems not directly seen during training. Although the resulting system cannot be claimed to be a state of the art system, and it does not have a correctness guarantee like DPLL based approaches, the paper is a nice re-introduction of SAT in a machine learning context using deep networks. It may be nice to mention e.g. (W. Ruml. Adaptive Tree Search. PhD thesis, Harvard University, 2002) which applied reinforcement learning techniques to SAT problems. The empirical validation on variable sized problems, etc. is a nice contribution showing interesting generalization properties of the proposed approach. The reviewers were unanimous in their recommendation that the paper be accepted, and the review process attracted a number of additional comments showing the broader interest of the setting.
train
[ "r1lwc2f1C7", "SklcSBbD07", "rkecACeG07", "r1lvKBRCT7", "SJx8VBfCpm", "S1e74C9XTm", "ByeriC5Q6X", "Ske_LQKQTX", "rkeiH0Bmpm", "B1xxGmSmpm", "BJxsg-S767", "ByxJhsEmam", "B1x2lmDK3X", "r1eoySkthX", "H1x_vEau3m", "r1x80rJJTQ", "rJlAYEJyT7", "rylt7MfEhm" ]
[ "author", "author", "public", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "public", "public", "public" ]
[ "Thank you for the suggestion. I think you may have overlooked the crucial note at the end of S5: \"Note: for the entire rest of the paper, \\emph{NeuroSAT} refers to the specific trained model that has only been trained on $\\SR(\\U(10, 40))$\". We need to rely on a note like this because we use the phrase \"NeuroSAT\" in this way many times. We also include an explicit reminder of this note whenever we draw attention to the role of the training data, as in the beginning on S8: \"NeuroSAT (trained on $\\SR(\\U(10, 40))$) can find satisfying assignments but is not helpful in constructing proofs of unsatisfiability.\" We go on to say that \"We trained our architecture on [the SRC(40, u)] dataset, and we refer to the trained model as \\emph{NeuroUNSAT}.\" To address your concern, I have added another reminder at the beginning of S6, shortly before the sentence you quoted that you found confusing. Do you think it is sufficiently clear now? An alternative approach to preempting this confusion would be to make the dependence on the training distribution explicit in the notation, e.g. by referring to \\mathrm{NeuroSAT}_{\\SR(\\U(10, 40))} and \\mathrm{NeuroSAT}_{\\SRC(40, u)}. This may leave less room for confusion, but I fear it would be rather cumbersome, especially since we only consider two different training distributions in the entire paper. What do you think?", "> What’s the maximum number of classes on the data use for SR(40)?\n\nFor any n >= 2, the number of clauses m in a problem from SR(n) can be arbitrarily large. To see this, note that for any given $M$, there is some probability that we sample the clause (x_1 \\/ x_2) $M$ times in a row, in which case $m$ will be larger than $M$. The variance is not very large though.\n\n> How does the model work with formulas with less than $n$ variables or $m$ clauses?\n> How can you test on a formula with n > 40 or $m$ bigger than the training data of SR(40)?\n\nThe parameters of the model do not depend on n or m in any way. We use d to refer to the dimensionality of the literal and clause embeddings, which is a hyperparameter that does not depend on n or m. As we explain in S3, the parameters of the NeuroSAT architecture are only the following:\n\n1. L_init and C_init: vectors in R^d that for a given problem get duplicated 2n and m times respectively.\n2. L_msg and C_msg: MLPs that map R^d to R^d, and that get applied to the embeddings of each of the 2n literals and m clauses respectively.\n3. L_update and C_update: layer-norm LSTMs whose dimensions also only depend on $d$, that get applied independently for each of the 2n literals and m clauses respectively.\n4. L_vote: an MLP that maps R^d to R, that gets applied independently to the embeddings of each of the 2n literals.\n\nThe input to the model at train and test time is any bipartite adjacency matrix $M$ over any number of literals and clauses. 
\n\nI added a paragraph at the end of S3 to clarify this point.", "The model described in section 3 has parameters $n$ and $m$ for the number of variables and clauses.\n\n* What's the maximum number of clauses in the data used for SR(40)?\nThat would be $m$, right?\nThe paper says \"over 200 clauses on average\", but it doesn't say anything about the max.\n\n* How does the model work with formulas with fewer than $n$ variables or $m$ clauses?\nI understand the adjacency matrix $M$ would have some zero rows and columns.\nWhat's the impact of that during testing?\n\n* How can you test on a formula with n > 40 or $m$ bigger than the training data of SR(40)?\n", "OK, thanks! This is significantly clearer and I think we're converging. I would suggest describing this explicitly in the paper. One of the points which was misleading for me is the second paragraph of S6: \"NeuroSAT never becomes highly confident that a problem is unsat, and it almost never guesses sat on an unsat problem\". In fact, this only holds for the series of experiments on SR(20). \n", "I am afraid that I still do not understand what you are asking, but I will try to address what I think might be a source of confusion. After each round of message passing, each literal casts a _single_ scalar \"vote\". During training, the votes prior to round T are discarded, and then the round-T votes are averaged together and passed to the sigmoid function to estimate the probability that the problem is satisfiable. The network weights are optimized end-to-end to minimize the cross-entropy loss. When we train our architecture on SR(n), we observe empirically that these literal votes behave as we describe in S6, while when we train it on SRC(n, u), we observe empirically that the votes behave as we describe in S8. But in a given trained network, each literal still only casts a single vote at each time step.\n", "First, some context. NeuroSAT solves SAT problems; it doesn't just predict satisfiability. We only report its classification accuracy in S5 to facilitate understanding, and in the rest of the paper we focus on the percent of satisfiable problems for which we can decode a solution. Also, our main motivation has been scientific rather than to build a tool. We wanted to better understand the extent to which neural networks are capable of precise, logical reasoning. As we state in S10, our work has definitively established that neural networks can learn to perform discrete search on their own without the help of hard-coded search procedures, even after only end-to-end training with minimal supervision.\n\nThe DOS2008 paper is orthogonal to our work, but let's still consider it in detail. For a given set of SAT problems, it may be arbitrarily easy to classify satisfiability (e.g. if the _sat_ and _unsat_ problems come from different domains and have different statistical properties); however, high classification accuracy may not imply beating random on subproblems of the problems in the training set, let alone imply high accuracy on (sub)problems from other domains. 
Such degeneracy is an obvious concern for the \"Crafted\", \"Industrial\", and \"Random\" (not to be confused with \"Random 3-SAT\") categories in DOS2008, and the authors do not provide evidence that the classifiers trained on these categories are robust.\n\nThus, for the rest of this comment we consider only their results in the \"Random 3-SAT\" category, which, although we find the wording on the bottom of page 6 to be confusing, we believe consists only of uniform random 3-SAT instances at the phase transition region that were generated using an unforced filtered method. Even for this category, for which the authors could have easily given precise semantics, they do not mention the size of the problems they used. They say that all 4,772 problems in this category are from SATLib. As of this writing, SATLib has only 3,700 uniform Random-3SAT problems in total, ranging from 20 to 250 variables, so it is not possible for us to deduce how the authors assembled their 4,772 problems or what sizes they were.\n\nThere are three numbers (for each classifier) that DOS2008 provide that we will consider in more detail: the Random 3SAT \"base\", \"all\", and \"+t\" accuracies for the class ALL (meaning unsat and sat combined). For the \"all\" and \"+t\" categories, DOS2008 use extremely sophisticated feature extractors. One set of features requires running two existing stochastic local search algorithms, GSAT and SAPS, multiple times each on the SAT problem. Another set of features involves solving the LP relaxation of an IP representing the SAT problem. A third set of features involves running DPLL on the SAT problem with some budget. Their feature extraction process alone took about 2 seconds on average for each of the random 3-SAT problems (aside: the feature extraction process took over an hour for one of the industrial problems). Depending on the size of the random 3-SAT problems, the solvers they ran as part of this process could have easily solved the problems within the budget and encoded their conclusions in the features themselves. Thus we cannot consider the \"all\" and \"+t\" numbers informative without more information about the sizes of the problems and the budgets for each of the feature extractors.\n\nIt remains to consider the Random 3SAT \"base\" number, which is still extremely high for some of the classifiers (97.2% for decision trees). For \"base\", they use only features 1-33, which are all syntactic properties of the SAT problem. We tried to reproduce these numbers, using sklearn to train a decision-tree classifier (default settings) and an MLP (6 50-node layers and otherwise default settings) to classify satisfiability on two different random 3-SAT distributions using exclusively these 33 features. For the first distribution, we generated 20,000 problems with 20 variables at threshold (~4.62), and for the second, we generated 10,000 problems with 50 variables at the threshold (~4.36). In each case we split the data evenly into train and test, and used Minisat to determine if the problem was satisfiable. Note that the authors only trained on fewer than 5,000 problems in total, so we are making the conditions at least as favorable for the classifier. Under these conditions, we got the following accuracies:\n\nDT, n=20, train: 100%, test: 54%\nDT, n=50, train: 100%, test: 54%\nMLP, n=20, train: 50%, test: 50%\nMLP, n=50, train: 52%, test: 51%\n\nHyperparameter tuning might yield improvements, and of course, we could be making an error in this informal experiment. 
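To make the shape of this informal experiment concrete, here is a minimal sketch of the pipeline. The syntactic_features helper is a hypothetical stand-in for DOS2008's syntactic features 1-33 (here just a few cheap statistics), PySAT's Minisat binding replaces the Minisat binary, and only the decision-tree variant at n=20 is shown.

import random
import numpy as np
from pysat.solvers import Minisat22
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

def random_3sat(n, ratio=4.62):
    # Uniform random 3-SAT at the (empirical, small-n) threshold ratio.
    m = int(round(ratio * n))
    clauses = []
    for _ in range(m):
        vs = random.sample(range(1, n + 1), 3)
        clauses.append([v if random.random() < 0.5 else -v for v in vs])
    return clauses

def syntactic_features(clauses, n):
    # Hypothetical stand-in for the 33 syntactic features of DOS2008.
    m = len(clauses)
    pos = sum(lit > 0 for c in clauses for lit in c)
    return [n, m, m / n, pos / (3 * m)]

data, labels = [], []
for _ in range(20000):
    clauses = random_3sat(20)
    with Minisat22(bootstrap_with=clauses) as solver:
        labels.append(int(solver.solve()))  # ground-truth satisfiability
    data.append(syntactic_features(clauses, 20))

X_tr, X_te, y_tr, y_te = train_test_split(np.array(data), np.array(labels), test_size=0.5)
clf = DecisionTreeClassifier().fit(X_tr, y_tr)
print("train:", clf.score(X_tr, y_tr), "test:", clf.score(X_te, y_te))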
Nonetheless, especially given how remarkable the unqualified claim of 97% test-set accuracy on hard random SAT with only syntactic features is, and how much crucial information is missing from DOS2008, I think the burden is on the authors of DOS2008 to clarify the experimental details and to provide reproducible code.\n", "> 2) As mentioned in the paper, for some cases, it may be possible to decode the satisfying assignments. However, this may require that the graph neural network algorithm run for many iterations. I was wondering what the average required running time for decoding the satisfying assignments is (e.g., how many seconds, ...)? Because if it takes too long, then I would rather just use an existing off-the-shelf SAT solver.\n\nWe explain the entire architecture in detail in the paper, as well as how many iterations we ran it for the different experiments. The number of seconds depends heavily on the hardware, which for neural networks is changing drastically every year. It also depends on how or whether the cost is amortized. As we explain in a comment above, we can solve a very large number of SAT problems simultaneously as a single batch (e.g. on a GPU) without any problem-specific control flow.\n\nAs for the question of whether to use NeuroSAT or an off-the-shelf solver, we stress here as we do in the paper that the trained NeuroSAT discussed in the paper is not remotely competitive with off-the-shelf SAT solvers. We strongly recommend that you just use an existing off-the-shelf SAT solver, at least for the foreseeable future.\n", "> We graphically depict a single iteration in Figure 2, though it is very high-level. Do you have a particular middle-ground in mind, between the figure we have and the equations themselves?\n\nYes, Figure 2 is a bit too high-level and does not provide much information about the transitions. So a more detailed figure, showing how the equations are handled, would be really helpful in understanding the neural model.\n\n> Here is the description we give in the paper on the 2-clustering approach: \"2-cluster $L^{(T)}$ to get cluster centers $\\Delta_1$ and $\\Delta_2$, partition the variables according to the predicate \\( \\| x_i - \\Delta_1 \\|^2 + \\| \\flip{x_i} - \\Delta_2 \\|^2 < \\| x_i - \\Delta_2 \\|^2 + \\| \\flip{x_i} - \\Delta_1 \\|^2 \\), and then try both candidate assignments that result from mapping the partitions to truth values.\" If you clarify what you find confusing or missing from this explanation, we will try to improve the explanation in the paper.\n\nSome additional comments (maybe in the Appendix) about how this approach is implemented would be fine. Namely, how do you find the clusters? What is the algorithm here? And, finally, which assignment do you choose (we have two candidate assignments, so which is the best?)\n\n> It is easy to generate unlimited data from these distributions. We trained on millions of problems, and tested on hundreds of thousands of them.\n\nSure. But for the sake of clarity, it would be relevant to mention these orders of magnitude.\n\n>> Namely, it seems that in the last layer of each iteration, literals are voting for SAT (red colors) with some confidence (say $\delta$) and voting for UNSAT (blue colors) with some confidence (say $\delta’$). Are $\delta$ and $\delta’$ correlated in the neural architecture? And, how are confidences for UNSAT votes updated?\n> I am afraid I do not understand what you are asking. 
Can you please clarify your use of 'correlated' and 'updated' in the last two sentences?\n \nIn Sections 3-7, the learning model is focused on finding satisfying assignments. So, all literals are voting for SAT with some confidence which is susceptible to change over transitions, and finally, an instance is predicted as UNSAT if such confidences are too small (i.e. there is no phase transition). \n\nYet, according to Section 8, the framework can also be applied to core finding, which requires the literals to vote for UNSAT with high confidence (as illustrated in Figure 7). So, a natural question here is: do we have two kinds of votes (i.e. voting for SAT and trying to find a satisfying assignment, AND voting for UNSAT and trying to find a core)? If this is indeed the case, another question is to determine whether such votes are correlated: if one literal is voting for SAT with high confidence, it will likely vote for UNSAT with low confidence. \n ", "We agree with AR1 that the practical implications are quite hard to know. Following the sentences you quoted, we discuss encouraging signs, and close by saying \"We are cautiously optimistic that a descendent of NeuroSAT will one day lead to improvements to the state-of-the-art.\"", "Thank you for your comments and questions.\n\n> However, the theoretical analysis isn't very sufficient. For instance, why does the change of the dataset from the original SR(n) to SRC(n,u) lead to the change of the behavior of the network from searching for a satisfying assignment indefinitely to detecting the unsatisfiable cores?\n\nFor SRC(n, u), the objective function assigns much lower cost to the parameters that detect the presence of the planted unsat cores than to the parameters that search for satisfying assignments, because unlike the latter, the former allow perfect classification of the dataset in a fixed, small number of steps. Such a simple approach is not an option on SR(n), because the cores are bigger and more diverse.\n\n> For instance, in figure 3, I am not sure whether a darker value means a larger value or a smaller value, because the authors only mentioned that white represents zero, blue negative and red positive. Also, in figure 7, I am not sure whether those black grids represent higher positive values or lower negative values.\n\nWe also write in two places that \"for several iterations, almost every literal is voting \\emph{unsat} with low confidence (\\ie light blue)\". We updated the paper to include two more similar parenthetical notes, one for \"_sat_ with high confidence\" and \"dark red\", and one for \"_unsat_ with high confidence\" and \"dark blue\". What you saw as black is just dark blue.\n\n> What's the initialization of the two vectors the authors use for the tiling operation? Does the initialization differ for different types of SAT problems?\n\nIt is just the parameters L_init and C_init that are learned by gradient descent at the same time as the other parameters are learned. When a trained NeuroSAT is run on a SAT problem, no matter the size or origin, the same L_init and C_init are used.\n\n> How do the authors decide the number of iterations necessary for solving a particular SAT problem?\n\nSince the network usually converges once it finds a solution, one does not need to try to decode solutions after each round of message passing, and instead can run for a predetermined number of rounds and only check at the end. 
This is a desirable feature since it makes it easy to solve a very large number of SAT problems simultaneously as a single batch (e.g. on a GPU) without any problem-specific control flow. As for rules of thumb, Figure 5 provides data on how many iterations it took to solve what percentage of problems in SR(n) for a range of n. For the graph problems in S7.2, we simply ran NeuroSAT for (the somewhat arbitrary) 512 iterations on every problem.\n", "Thank you for your comments and questions.\n\n> a brief description of the initial matrices (which encode the literal and clause embeddings) would be nice.\n\nThe initial vectors L_init and C_init are simply parameters of the model that we learn simultaneously with the other parameters.\n\n> For the sake of clarity, I would suggest providing a figure depicting a transition (from iteration t-1 to iteration t) in the architecture.\n\nWe graphically depict a single iteration in Figure 2, though it is very high-level. Do you have a particular middle-ground in mind, between the figure we have and the equations themselves?\n\n> As a minor comment, it would be nice (in Section 2) to define the main parameters $n$, $m$, and $d$ used in the rest of the paper.\n\nWe updated S2 to introduce n and m. We cannot introduce d there since d only makes sense in the context of the model, which is not discussed until S3.\n\n> Concerning the experimental part of the paper, Sections 4 & 5 are well-explained but, in Section 6, the solution decoding method, inspired by PCA, is a bit confusing. Specifically, we don't know how a satisfying assignment is extracted from the last layer, and this should be explained in detail. According to Figure 4 and the comments above, it seems that a clustering method (with two centroids) is advocated, but this is not clear\n\nHere is the description we give in the paper on the 2-clustering approach: \"2-cluster $L^{(T)}$ to get cluster centers $\\Delta_1$ and $\\Delta_2$, partition the variables according to the predicate \\( \\| x_i - \\Delta_1 \\|^2 + \\| \\flip{x_i} - \\Delta_2 \\|^2 < \\| x_i - \\Delta_2 \\|^2 + \\| \\flip{x_i} - \\Delta_1 \\|^2 \\), and then try both candidate assignments that result from mapping the partitions to truth values.\" If you clarify what you find confusing or missing from this explanation, we will try to improve the explanation in the paper.\n\n> In Table 1, the correlation between the accuracy on SAT instances, and the percent of SAT instances solved is not clear. Is the ratio of 70% measured on the CNF instances which have been predicted to be satisfiable? Or, is this ratio measured on the whole set of test instances?\n\nAs the caption says, in that experiment we were able to decode a satisfying assignment for 70% of the satisfiable problems. The satisfiable problems include the subset of satisfiable problems for which the network incorrectly predicted _unsat_. To a first approximation, the 70% number we report means that we could decode solutions for approximately 96% of the problems correctly predicted to be _sat_; however, the 70% does include a few problems for which the network found a solution but nonetheless incorrectly guessed _unsat_. We expect this case to happen when the network finds the solution towards the very end of message passing, and does not have enough time to flip all the literal votes. 
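For concreteness, here is a minimal sketch of one way to implement the quoted 2-clustering decoding. It is an illustration, not necessarily the paper's exact implementation: scikit-learn's KMeans and the final check against the clauses are framing chosen for the sketch.

import numpy as np
from sklearn.cluster import KMeans

def decode_assignments(L_T):
    # L_T: (2n x d) numpy array of literal embeddings after T rounds,
    # with rows ordered [x_1..x_n, -x_1..-x_n].
    n = L_T.shape[0] // 2
    x, x_bar = L_T[:n], L_T[n:]
    # 2-cluster all 2n literal embeddings to get centers Delta_1, Delta_2.
    centers = KMeans(n_clusters=2, n_init=10).fit(L_T).cluster_centers_
    d1, d2 = centers[0], centers[1]
    def sq(a, b):
        return ((a - b) ** 2).sum(axis=1)
    # Predicate from the paper: variable i joins the first partition iff
    # ||x_i - D1||^2 + ||~x_i - D2||^2 < ||x_i - D2||^2 + ||~x_i - D1||^2.
    partition = sq(x, d1) + sq(x_bar, d2) < sq(x, d2) + sq(x_bar, d1)
    # Both candidate assignments obtained by mapping partitions to truth values.
    return [partition, ~partition]

# One would then test each of the two candidates against the clauses and keep
# any assignment that satisfies the formula.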
\n\nAlso note that the percentage of satisfiable problems solved is the metric we actually care about, whereas we only care about classification accuracy for instrumental reasons.\n\n> Finally, for the results established in Table 1, how many training instances and test instances have been used?\n\nIt is easy to generate unlimited data from these distributions. We trained on millions of problems, and tested on hundreds of thousands of them.\n\n> In Sec 7.1, for SR(200) tasks, was NeuroSAT tested under the same conditions as those for SR(40) tasks? Notably, what is the input dimension $d$ of the embedding space here? (I guess that $d = 128$ is too small for such large instances).\n\nOnce trained, NeuroSAT has learned parameters whose dimensions depend on the hyperparameter $d$. It is not possible to run NeuroSAT with a larger $d$ at test time. For SR(200), we use the exact same trained NeuroSAT model as in Table 1, which was trained only on SR(U(10, 40)) and has $d$ = 128.\n\n> For the 4,888 satisfiable instances generated in Sec. 7.2, which solver has been used to determine the satisfiability of those instances (I guess it is Minisat, but this should be mentioned somewhere).\n\nYes, we used Minisat. We updated the paper to mention this, and also updated S4 to make it clear we use Minisat to generate SR(n) as well.\n\n> But the notion of “confidence” should be explained in more detail in this section, and more generally, in the whole paper.\n\nWe only use the phrase \"confidence\" informally. The semantics of the literal votes is defined by the network architecture.\n\n> Namely, it seems that in the last layer of each iteration, literals are voting for SAT (red colors) with some confidence (say $\\delta$) and voting for UNSAT (blue colors) with some confidence (say $\\delta’$). Are $\\delta$ and $\\delta’$ correlated in the neural architecture? And how are confidences for UNSAT votes updated?\n\nI am afraid I do not understand what you are asking. Can you please clarify your use of 'correlated' and 'updated' in the last two sentences?\n", "Thank you for your comments.\n\n> One thing that was a little confusing is why all the literals should turn to SAT (turn red) to prove SAT (as shown in figure 3). Is it that the neural network does not realize that it has found a SAT solution with a smaller subset of SAT literals? In other words, is it not capable of taking advantage of the problem structure?\n\nRemember that the network is only trained to make the *mean* vote of the literals large on satisfiable problems and small (i.e. large and negative) on unsatisfiable problems. Thus on satisfiable problems it has a strong incentive to make all the literals vote _sat_ instead of only half of them.", "The paper describes a general neural network architecture for predicting satisfiability. Specifically, the contributions include an encoding for SAT problems, and predicting SAT using a message passing method, where the embeddings for literals and clauses are iteratively changed until convergence.\n\nThe paper seems significant considering that it brings together SAT solving and neural network architectures. The paper is very clearly written and quite precise about its contributions. The analysis, especially figures 3, 4, and 7, seems to give nice intuitive ideas as to what the neural network is trying to do. However, one weakness is that the problems are run on a specific type of SAT problem the authors have created. 
Of course, the authors make it clear that the objective is not really to create a state-of-the-art solver but rather to understand what a neural network trying to do SAT solving is capable of doing. On this front, I think the paper succeeds in doing this. One thing that was a little confusing is why all the literals should turn to SAT (turn red) to prove SAT (as shown in figure 3). Is it that the neural network does not realize that it has found a SAT solution with a smaller subset of SAT literals? In other words, is it not capable of taking advantage of the problem structure?\n\nIn general though, this seemed to be an interesting paper though its practical implications are quite hard to know either in the SAT community or in the neural network community.", "This paper presents the NeuroSAT architecture, which uses a deep, message passing neural net for predicting the satisfiability of CNF instances. The architecture is also able to predict a satisfying assignment in the SAT case, and the literals involved in some minimal conflicting set of clauses (i.e. core) in the UNSAT case. The NeuroSAT architecture is based on a vector space embedding of literals and clauses, which exploits (with message passing) some important symmetries of SAT instances (permutation invariance and negation invariance). This architecture is tested on various classes of random SAT instances, involving both unstructured (SR) problems, and structured ones (e.g. graph colorings, vertex covers, dominating sets, etc.).\n\nOverall the paper is well-motivated, and the experimental results are quite convincing. Arguably, the salient characteristic of NeuroSAT is to iteratively refine the confidence of literals voting for the SAT - or UNSAT - output, using a voting scheme on the last iteration of the literal matrix. This is very interesting, and NeuroSAT might be used to help existing solvers in choosing variable orderings for tackling hard instances, or hard queries (e.g. find a core).\n\nOn the other hand, the technical description of the architecture (sec. 3) is perhaps a little vague for having a clear intuition of how the classification task - for SAT instances - is handled in the NeuroSAT architecture. Namely, a brief description of the initial matrices (which encode the literal and clause embeddings) would be nice. Some comments on the role played by the multilayer perceptron units and the normalization units would also be welcome. The two update rules in Page 3 could be explained in more detail. For the sake of clarity, I would suggest providing a figure for depicting a transition (from iteration t-1 to iteration t) in the architecture. As a minor comment, it would be nice (in Section 2) to define the main parameters $n$, $m$, and $d$ used in the rest of the paper.\n\nConcerning the experimental part of the paper, Sections 4 & 5 are well-explained but, in Section 6, the solution decoding method, inspired by PCA, is a bit confusing. Specifically, we don’t know how a satisfying assignment is extracted from the last layer, and this should be explained in detail. According to Figure 4 and the comments above, it seems that a clustering method (with two centroids) is advocated, but this is not clear. In Table 1, the correlation between the accuracy on SAT instances and the percent of SAT instances solved is not clear. Is the ratio of 70% measured on the CNF instances which have been predicted to be satisfiable? Or, is this ratio measured on the whole set of test instances? 
Finally, for the results established in Table 1, how many training instances and test instances have been used?\n\nIn Section 7, some important aspects related to the experiments are missing. In Sec 7.1, for SR(200) tasks, was NeuroSAT tested under the same conditions as those for SR(40) tasks? Notably, what is the input dimension $d$ of the embedding space here? (I guess that $d = 128$ is too small for such large instances). Also, how many training and test instances have been used to plot the curves in Figure 5? For the 4,888 satisfiable instances generated in Sec. 7.2, which solver has been used to determine the satisfiability of those instances (I guess it is Minisat, but this should be mentioned somewhere). \n\nIn Section 8, I found the ability of NeuroSAT to predict the literals that participate in an UNSAT core interesting. Indeed the problem of finding an UNSAT core in CNF instances is computationally harder than determining the satisfiability of the instance. So, NeuroSAT might be used here to help a solver in finding a core. But the notion of “confidence” should be explained in more detail in this section, and more generally, in the whole paper. Namely, it seems that in the last layer of each iteration, literals are voting for SAT (red colors) with some confidence (say $\\delta$) and voting for UNSAT (blue colors) with some confidence (say $\\delta’$). Are $\\delta$ and $\\delta’$ correlated in the neural architecture? And how are confidences for UNSAT votes updated?\n\nFinally, I found that the different benchmarks were relevant, but I would also suggest (for future work, or in the appendix) additionally performing experiments on the well-known random 3-SAT instances ($k$ is fixed to 3). Here, it is well-known that a phase transition (on the instances, not the solver/learner) occurs at 4.26 for the clause/variable ratio. A plot displaying the performance of NeuroSAT (accuracy in predicting the label of the instance) versus the clause/variable ratio would be very helpful in assessing the robustness of NeuroSAT on the so-called “hard” instances (which are close to 4.26). By extension, there has been a lot of recent work in generating “pseudo-industrial” random SAT instances, which incorporate some structure (e.g. communities) in order to mimic real-world structured SAT instances. To this point, it would be interesting to analyze the performance of NeuroSAT on such pseudo-industrial instances.\n", "This paper trains a neural network to solve satisfiability problems. Based on message passing neural networks, it presents NeuroSAT and trains it as a classifier to predict satisfiability under a single bit of supervision. After training, NeuroSAT can solve problems that are larger and more difficult than it ever saw during training. Furthermore, the authors present a way to decode the solutions from the network's activations. In addition, for unsatisfiable problems, the paper also presents NeuroUNSAT, which learns to detect the contradictions in the form of UNSAT cores.\n\nRelevance: this paper is likely to be of interest to a large proportion of the community for several reasons. Firstly, satisfiability problems arise in a variety of domains. This paper starts with a new angle to solve the SAT problem. Secondly, it uses neural networks for the SAT problem and establishes that neural networks can learn to perform a discrete search. Thirdly, the system used in this paper may also help improve existing SAT solvers.\n\nSignificance: I think the results are significant. 
For the decoding satisfying assignments section, the two-dimensional PCA embeddings are very clear. And NeuroSAT's success rate on larger and different problems shows its generalization ability. Finally, the sequences of literal votes in NeuroUNSAT have proved its ability to detect UNSAT cores.\n\nNovelty: NeuroSAT’s approach is novel. Based on message passing neural networks, it trains a neural network to learn to solve the SAT problem. \n\nSoundness: This paper is technically sound. \n\nEvaluation: The experimental section is comprehensive. There are a variety of graphs showing the performance and ability of your architecture. However, the theoretical analysis isn't very sufficient. For instance, why does the change of the dataset from the original SR(n) to SRC(n,u) lead to the change of the behavior of the network from searching for a satisfying assignment indefinitely to detecting the unsatisfiable cores?\n\nClarity: As a whole, the paper is clear. The definition of the problem, the model structure, the data generation, the training procedure, and the evaluation are all well organized. However, there are still a few points requiring more explanation. For instance, in figure 3, I am not sure whether darker value means larger value or smaller value because the authors only mentioned that white represents zero, blue negative and red positive. Also, in figure 7, I am not sure whether those black grids represent higher positive values or lower negative values.\n\nA few questions:\n\nWhat's the initialization of the two vectors the authors use for tiling operation? Does the initialization differ for different types of SAT problems?\n\nHow do the authors decide the number of iterations necessary for solving a particular SAT problem?\n\n", "The paper (with admirable honesty) itself claims to have little to no impact on modern SAT solving. To quote, \"As we stressed early on, as an end-to-end SAT solver the trained NeuroSAT system discussed in this paper is still vastly less reliable than the state-of-the-art. We concede that we see no obvious path to beating existing SAT solvers. \"", "I am curious what the authors have to say regarding this comment. ", "I have two points:\n1) Devlin and O’Sullivan (2008) examined the performance of a host of simple ML techniques for classifying satisfiability. Experimental results showed that Random Forest achieved very good performance (90+% accuracy for difficult large industrial SAT instances as well as for random 3-SAT and random k-SAT instances sourced from Satlib). However, the proposed deep learning based method achieves only 85% accuracy on randomly generated instances. This makes me question the significance of this work. The authors say that the data generation heuristic mentioned in the paper is for helping the neural network generalize better. I would be more convinced if the authors demonstrated the generalizability by evaluating the performance of NeuroSAT on real industrial instances. In conclusion, my main point is that after reading the work of Devlin and O’Sullivan (2008), I don't feel this work is important or significant.\n\n2) As mentioned in the paper, for some cases, it may be possible to decode the satisfying assignments. However, this may require the graph neural network algorithm to run for many iterations. I was wondering what the average required running time is for decoding the satisfying assignments (e.g., how many seconds, ...)? 
Because if it takes too long, then I would rather just use an existing off-the-shelf SAT solver.\n\nReferences:\n[1] David Devlin and Barry O’Sullivan. Satisfiability as a classification problem. In Proc. of the 19th Irish Conf. on Artificial Intelligence and Cognitive Science, 2008." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 7, -1, -1, -1 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, -1, -1, -1 ]
[ "r1lvKBRCT7", "rkecACeG07", "iclr_2019_HJMC_iA5tm", "SJx8VBfCpm", "Ske_LQKQTX", "rylt7MfEhm", "rylt7MfEhm", "BJxsg-S767", "r1x80rJJTQ", "H1x_vEau3m", "r1eoySkthX", "B1x2lmDK3X", "iclr_2019_HJMC_iA5tm", "iclr_2019_HJMC_iA5tm", "iclr_2019_HJMC_iA5tm", "B1x2lmDK3X", "rylt7MfEhm", "iclr_2019_HJMC_iA5tm" ]
iclr_2019_HJMCcjAcYX
Learning Representations of Sets through Optimized Permutations
Representations of sets are challenging to learn because operations on sets should be permutation-invariant. To this end, we propose a Permutation-Optimisation module that learns how to permute a set end-to-end. The permuted set can be further processed to learn a permutation-invariant representation of that set, avoiding a bottleneck in traditional set models. We demonstrate our model's ability to learn permutations and set representations with either explicit or implicit supervision on four datasets, on which we achieve state-of-the-art results: number sorting, image mosaics, classification from image mosaics, and visual question answering.
accepted-poster-papers
The paper proposes an architecture to learn over sets, by introducing a way to have permutations differentiable end-to-end, hence learnable by gradient descent. Reviewers pointed out the computational limitation (quadratic in the size of the set just to consider pairwise interactions, and cubic overall). One reviewer (with low confidence) thought the approach was not novel but didn't appreciate the integration of learning-to-permute with a differentiable setting, so I decided to down-weight their score. Overall, I found the paper borderline but would propose to accept it if possible.
train
[ "SJeO5obc3X", "HyxNhFQNRQ", "BJlqr2mmR7", "B1xeI-0-R7", "r1ecut0WRX", "rJeFrla-AX", "rkxAsPCgAX", "HyeIZzxbAX", "B1x7WnW22Q", "B1xmilebCX", "rygmYbjlAQ", "Syl2O-tkR7", "SJlGnzY3am", "HJludzt2T7", "SygvBfKnpX", "H1eeEy2jaQ", "SkgX7AqYa7", "SJlJE61t6Q", "Skemjcuvam", "SylBN0Pv6m", "SJlzqsLLpQ", "HkePjnBXaQ", "SklXCYfXpm", "rkx_m_MQaQ", "r1eTkiAbT7", "HkxvCeAW6m", "r1xwjcdgp7", "H1e6T_sI3Q" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "official_reviewer" ]
[ "Update: From the perspective of a \"broader ML\" audience, I cannot recommend acceptance of this paper. The paper does not provide even a clear and concrete problem statement due to which it is difficult for me to appreciate the results. This is the only paper out of all ICLR2019 papers that I have reviewed / read which has such an issue. Of course for the conference, the area chair / program chairs can choose how to weigh the acceptance decisions between interest to the broader ML audience and the audience in the area of the paper. \n\n----------------------------------------------------------------------------------------------------------------------------------\n\n This paper addresses the problem that often features are obtained as a set, whereas certain orders of these features are known to allow for easier learning. With this motivation the goal of this paper is to learn a permutation of the features. This paper makes the following three main contributions:\n1. The idea of using pairwise comparison costs instead of position-based costs\n2. The methodological crux of how to go from the pairwise comparison costs to the permutation (that is, solving Eqn. (2) using Eqn. (1) )\n3. An empirical evaluation\n\nI like the idea and the empirical evaluations are promising. However, I have a major concern about the second contribution on the method. There is a massive amount of literature on this very problem and a number of algorithms are proposed in the literature. This literature takes various forms including rank aggregation and most popularly the (weighted) minimum feedback arc set problem. The submitted paper is oblivious to this enormous literature both in the related work section as well as the empirical evaluations. I have listed below a few papers pertaining to various versions of the problem (this list is by no means exhaustive). With this issue, I cannot give a positive evaluation of this submitted paper since it is not clear whether the paper is just re-solving a solved problem. That said, I am happy to reconsider if the related work and the empirical evaluations are augmented with comparisons to the past literature on the methodological crux of the submitted paper (e.g., why off-the-shelf use of previously proposed algorithms may or may not suffice here.)\n\n\nUnweighted feedback arc set:\n\nA fast and effective heuristic for the feedback arc set problem, Eades et al.\n\nEfficient Computation of Feedback Arc Set at Web-Scale, Simpson et al.\n\nHow to rank with few errors, Kenyon-Mathieu et al.\n\nAggregating Inconsistent Information: Ranking and Clustering, Ailon et al.\n\n\nHardness results:\n\nThe Minimum Feedback Arc Set Problem is NP-hard for Tournaments, Charbit et al.\n\n\nWeighted feedback arc set:\n\nA branch-and-bound algorithm to solve the linear ordering problem for weighted tournaments, Charon et al.\n\nExact and heuristic algorithms for the weighted feedback arc set problem: A special case of the skew‐symmetric quadratic assignment problem, Flood\n\nApproximating Minimum Feedback Sets and Multicuts in Directed Graphs, Even et al.\n\n\nRandom inputs:\n\nNoisy sorting without resampling, Braverman et al.\n\nStochastically transitive models for pairwise comparisons: Statistical and computational issues, Shah et al.\n\nOn estimation in tournaments and graphs under monotonicity constraints, Chatterjee et al. 
\n\n\nSurvey (slightly dated):\n\nAn updated survey on the linear ordering problem for weighted or unweighted tournaments, Charon et al.\n\n\nConvex relaxation of permutation matrices:\n\nOn convex relaxation of graph isomorphism, Aflalo et al.\n\nFacets of the linear ordering polytope, Grötschel\n\n", "- We have added a reference to the concurrent ICLR submission \"Janossy Pooling\" https://openreview.net/forum?id=BJluy2RcFm\nThis work can be considered complementary to ours: they focus on averaging higher-order interactions between set elements (which generalises Zaheer et al.'s and Santoro et al.'s models we already mentioned in our paper) and simply training with randomly sampled permutations, while our work focuses on how to flexibly learn a canonical ordering that the RNN processing the permuted set \"likes\", rather than giving it a random ordering each time.", "We recognise and thank you for the time that you have spent on this review. Looking back through the discussion, we believe we might now have a clearer idea where you are having difficulty with understanding our approach. Our formal problem statement above focuses on the specific contribution of the paper: proposing a new module that can be inserted within a deep network to explicitly allow for permutation invariance of the inputs whilst producing a single fixed output list. Such an approach is crucial for any learning task where we believe that set representations (or equivalently permutation invariance) is important (as demonstrated through the many experiments in the paper).\n\n\nWe will end with a revised formal problem statement that covers the entire problem context. The $X$ and $f$ used here are unrelated to the $f$ in the paper. $g$ matches the $g$ in our problem statement above.\n\nAssume that we have some learning task that involves learning a complex mapping between inputs X and outputs y. We choose to solve this problem with a deep network formulated as two parts: a deep feature extractor $f(X, \\theta_f)$ that produces a set of feature vectors and a classifier $h(f(X, \\theta_f), \\theta_h)$ that produces $\\hat{y}$, which are estimates of $y$. $\\theta_f$ and $\\theta_h$ are the parameters of the neural networks $f$ and $h$ respectively. We emphasise that the outputs of $f$ should be treated as a mathematical set, even though they would be encoded as a list of feature vectors.\n\n$h$ should properly treat its input as a set, but it is difficult to structure and learn the parameters of $h$ in such a way that the outputs of it will be the same for any permutation of the feature vector list produced by $f$. Our proposed change to the problem is to add a learnable module -- $g(\\cdot, \\theta_g)$ -- between $f$ and $h$ that transforms the list of feature vectors that $f$ produces into a canonical representation that is the same for any permutation of that list.\n\nOur complete learning problem can thus be expressed as one of finding parameters $\\theta_f, \\theta_g, \\theta_h$ such that the empirical risk with a given loss function between true $y$s and estimated $\\hat{y}$s, where $\\hat{y} = h(g(f(X, \\theta_f), \\theta_g), \\theta_h)$, is minimised.\n\nTo achieve this, we use gradient descent to train the parameters $\\theta_f, \\theta_g, \\theta_h$. This requires that $f, g, h$ are differentiable. 
To satisfy the requirement of $g$ producing a canonical representation regardless of the input ordering, $g(f(X, \\theta_f), \\theta_g)$ must equal $g(P f(X, \\theta_f), \\theta_g)$ where P is any permutation matrix and the list of feature vectors (which is to be treated as a set) that f produces are placed in the rows of a matrix.", "We believe that you are looking for the Empirical Risk Minimisation [0] or Structural Risk Minimisation framework that we mentioned in one of our earlier comments. Sections 1 to 4, 7, and 8 in that seminal paper should hopefully clear up the formal setting of ERM and SRM for you. This is the usual setting for supervised learning problems, which includes what we consider in our paper.\n\nThrough a priori knowledge of the problem (learning representations of sets), we add a structure with specific properties (differentiability, permutation invariance) into the neural network.\n\n\n[0]: V. Vapnik. Principles of Risk Minimization for Learning Theory. In NIPS, 1992. http://papers.nips.cc/paper/506-principles-of-risk-minimization-for-learning-theory.pdf\n\nEdit: Apologies if this comment perhaps seemed a bit condescending, but we do not know your background and without you being more precise about what your imagined problem statement contains, we do not know what to answer.", "I'm sorry that there seems to be a communication gap. I am well aware of ERM/SRM etc. However I don't see your problem description specifying what training data is available etc. for (supervised) learning. That is why I asked for a formal problem statement but the one provided seems to be incomplete. \n\nAnyways, this is my last comment since I have spent a highly disproportionate amount of time in trying to get a formal problem statement. I think this paper may be suitable for people who are in this specific ballpark of area of research and I will leave it up to the other reviewers, but for someone like me who is somewhat farther away, I feel it is unfortunate that the problem statement is also not specified clearly. If the results are indeed good (and I hope they are) then it would be very useful for the authors if the paper was made more accessible to a broader ML audience.\n\n", "I am looking for a proper problem statement which still has not been specified.\n\nThanks for clarifying the g vs h errors in your previous comment.\n\nIf the only requirements are \n\"1. invariant to permutation of rows of X, i.e. $g(X) = g(PX)$ for all permutation matrices $P$,\n2. differentiable almost everywhere.\"\nthen setting g as a constant function satisfies both constraints. So there is more to the problem than just these two requirements. I am looking for precisely what this problem entails.\n\n\n", "What exactly do you understand as a proper formal problem statement? What is insufficient about the problem statement that we gave?\n\nWe meant $g$ where we used $h$, it was a typo.\n\nDifferentiability almost everywhere and permutation-invariance are both formally well-defined properties of functions. We did not specify what \"permutes X\" means exactly to not tie it to our method of matrix-multiplying PX, where P is a doubly-stochastic matrix in our case. 
This was in response to your previous complaint of tying down the description with our specific method of solving it.", "- We realised that we forgot to mention that the 30% increase in computation time comes not from processing with just 1 PO-U module, but 8 PO-U modules since there are 8 attention glimpses in the BAN model and each is processed separately. The paper now includes this information. The individual modules are thus faster than we initially suggested.\n- Figures of network architectures for the 4 different tasks are now included (Figures 9, 10, 11). (Reviewer 2)\n- Small wording change in section 2 paragraph 1 to more explicitly state that differentiability is necessary. (Reviewer 3)\n- In the title, we changed the spelling from \"Optimised\" to \"Optimized\".\n\nWe have also updated https://github.com/iclr2019-anon123456/perm-optim with our code for the VQA experiment (all experiments should be reproducible now) and the code for the visualisations that we added in the last revision.", "The authors introduce a method to learn to permute sets end-to-end. They define the cost of a permutation as the sum of pairwise costs induced by the permutation, where the pairwise costs are learned. Permutations are made differentiable by relaxing them to doubly stochastic matrices which are approximated with the Sinkhorn operator. In the forward pass of the algorithm, a good permutation (i.e. one with low cost) is obtained with a few steps of gradient descent (the forward pass itself contains an optimization procedure). This permutation is then either used directly as the output of the algorithm or is used to permute the original inputs and feed the permuted sequence to another module (such as an RNN or a CNN). The method can easily be adapted to other structures such as lattices by considering row-wise and column-wise pairwise relations.\n\nThe proposed method is benchmarked on 4 tasks:\n1. Sorting numbers, where they obtain very strong generalization results.\n2. Re-assembling image mosaics, on which they obtain encouraging results.\n3. Image classification through image mosaics.\n4. Visual Question Answering, where the permuted inputs are fed to an LSTM whose final latent state is fed back into the baseline model (a bilinear attention network). Doing so improves over feeding the inputs to an LSTM without learning the order.\n\nThe method is most similar to Learning Latent Permutations with Gumbel-Sinkhorn Networks (Mena et al) but considers pairwise relations when producing the permutation. This can have important advantages (such as taking local relations into account, as shown by the strong sorting results) but also drawbacks (inability to differentiate inputs with similar content), but in any case this represents a good step towards exploring different cost functions.\n\nThe method can be quite impractical (cubic time complexity in set cardinality, optimization in forward pass, having to preprocess the set into a sequence for another module can be resource-expensive). \nExperimental results on toy tasks (tasks 1, 2 and 3) are encouraging. 
The approach improves over a relatively strong baseline (task 4) although it isn't clear that it would still hold true when controlling for number of parameters and compute.\n\nI have a few comments about the presentation (for which I would be willing to change my score to a 6):\n- When possible, please use the numbers reported by Mena et al and consider reporting error (instead of accuracy) as they do to ease comparison. The results that you report using their method are quite a bit worse than what they report, so I think it would be fair to include both your reimplementation and the initial results in the table.\n- It would be interesting to have some insights on what function f is learned (for the sorting task and re-assembling image mosaics for example).\n- Clarity would be improved with figures representing which neural networks are used at what part of the process.\n\n\n###########################################\nUpdated review:\n\nThe authors have greatly improved presentation and have addressed concerns about the increase in parameters and computation time. I have changed my score to a 6.
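As background for the summary above, the Sinkhorn relaxation it mentions can be sketched as follows (my own simplified reading, not the authors' implementation):

```python
import torch

def sinkhorn(log_alpha, n_iters=4):
    # Approximately project a square score matrix onto the set of doubly
    # stochastic matrices by alternating row/column normalisation,
    # done in log space for numerical stability.
    for _ in range(n_iters):
        log_alpha = log_alpha - torch.logsumexp(log_alpha, dim=1, keepdim=True)
        log_alpha = log_alpha - torch.logsumexp(log_alpha, dim=0, keepdim=True)
    return log_alpha.exp()
```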
Mena et al's simple linear model is O(N^2), so our O(N^3) that properly models pairwise comparisons is not too far off from this.\nTo put this into perspective, we can compare our O(N^3) to the processing of a length N sequence with an RNN that has a hidden state size of H. The N hidden state updates, which is a matrix-vector product each, has time complexity O(N * H^2). In comparison to our O(N^3), if H > N then our complexity is better, and N > 100 with H > 1000 is not uncommon in tasks such as language modeling.\n\n- We added Appendix C wherein we analyse the learned comparison functions qualitatively (Reviewer 2). This contains some figures that help understand what it has learned, which may be of interest to everyone.\n\n- We added Appendix D wherein we justify using the alternative gradient update in the internal optimisation. While this is not a proof of, for example, convergence speed, this should be enough to give a good idea of why it is useful in our case. (Reviewer 1)\n\n- For experiment 2, we swapped the table of accuracy (previously in main body) with the table of mean squared errors (previously in appendix) because the accuracy metric has some flaws that can lead to misunderstandings. (Reviewer 1)\n- We fixed the typo in Eqn (5) (Reviewer 1).\n- We fixed the inaccurate statement about computational complexity of the problem (Reviewer 1).\n- We now clarify that the benefit on VQA of including our model holds when controlling for computation time and model size in section 4.4. (Reviewer 2)\n- For experiment 2, we now include Mena et al's results directly in our tables for ease of comparison, with the caveat that we knowingly avoid an improvement to results that they get on MNIST through arbitrarily upscaling the images. (Reviewer 2)\n- We now clarify why existing literature on minimum feedback arc sets is not relevant in section 3. (Reviewer 3)\n\n\nWe are now working on the figures of the network architectures used for the different tasks and hope to update the paper with them very soon. (Reviewer 2)\n\nWe thank the reviewers again for the good comments and suggestions, which certainly helped us in improving the paper.", "1. You are of course correct and we fully concede that we were sloppy with making those claims. What we meant is that we only managed to find mappings of the problem to known NP-hard problems (which is clearly not a proof that it is NP-hard itself, but perhaps evidence that it is non-trivial). The computational complexity of the optimisation problem is somewhat tangential anyway, since the much more important aspect is the differentiability of the optimisation, which allows the model to be learned end-to-end. This is difficult without the relaxations on P since there would be no meaningful gradient for it. We will fix the one comment in the main text about the complexity of the problem, thank you for pointing this out.\n\n2. As we discuss in the paper, this is a problem for the other datasets too. If the number of tiles is the same as the number of pixels, it is obvious that this is an issue with any dataset that can have two of the same pixels anywhere in the image. 
As we point out in the text, we believe that this manifests itself through the difference of performance between PO-U and LinAssign between CIFAR10 and ImageNet: PO-U performs closer to LinAssign on CIFAR10 where this is a bigger issue (smaller tile sizes, more likely to have similar tiles) than on ImageNet (64x64 images, so bigger tile sizes and less likely to have similar tiles), where PO-U performance is much closer to PO-LA than LinAssign. With VQA this is less likely an issue: since each element of the set is a 2048d vector, unless two elements are describing the exact same object -- in which case this isn't a problem -- it is very unlikely for two elements to be very similar to each other.\n\n3. For sorting, accuracy monotonically gets worse than 100% starting from T=10, T=25 gets about 99%, T=50 gets about 0.96%, T=100 gets about 94.5% and beyond T=200 gets 94%. We do not observe any oscillations outside of measurement errors. Thus, we doubt that it could go up again with even larger T considering how smooth this change with increasing T is.\nFor the image mosaic experiments, it appears to be similarly stable. Again these results should get a fair bit better if the models are trained on higher T too, rather than being evaluated for much longer than it was trained at. We evaluate with T=100 instead of T=100k now since these datasets take longer to run than the simple number sorting. We checked that results don't change much when increasing T over 100 and did not observe significant oscillations.\n\nMean squared errors and accuracies, before (eval with T=4 with results from paper) -> after (eval with T=100)\n---------------------\nMNIST 2x2: MSE 0.00 -> 0.00, accuracy 100% -> 99.9%\nMNIST 3x3: MSE 0.02 -> 0.07, accuracy 65.9% -> 55.6%\nMNIST 4x4: MSE 0.46 -> 0.78, accuracy 0.2% -> 0%\nMNIST 5x5: MSE 0.45 -> 0.96, accuracy 0% -> 0%\nCIFAR10 2x2: MSE 0.11 -> 0.17, accuracy 87.3% -> 85.6%\nCIFAR10 3x3: MSE 0.44 -> 0.60, accuracy 34.4% -> 35.0%\nCIFAR10 4x4: MSE 1.23 -> 1.39, accuracy 0% -> 0%\nCIFAR10 5x5: MSE 1.26 -> 1.48, accuracy 0% -> 0%\n\nFor the classification setting from reconstructed mosaics, the change in classification accuracy is as follows:\n\nClassification accuracies, before (eval with T=4 with results from paper) -> after (eval with T=100)\n---------------------\nMNIST 2x2: 99.4% -> 98.9%\nMNIST 3x3: 98.7% -> 92.6%\nMNIST 4x4: 67.9% -> 22.9%\nMNIST 5x5: 69.2% -> 20.7%\nCIFAR10 2x2: 70.8% -> 67.7%\nCIFAR10 3x3: 41.6% -> 33.3%\nCIFAR10 4x4: 33.3% -> 27.9%\nCIFAR10 5x5: 32.3% -> 22.9%\n\nMNIST on 4x4 does terribly, presumably due to the magnified issues with resolving blank tiles. CIFAR10 does not degrade as much as MNIST with increasing number of tiles.", "Thanks for the response. Just some small questions:\n\n1. For your response to my first question (on proving the optimization problem is difficult), you can not say this problem is NP-hard by writing it in a Quadratic Assignment Problem (QAP) form and using the fact that QAP is NP-hard to show your problem is NP-hard. Instead, you need to show that, every instance of the QAP problem can be transformed into one instance of your optimization problem efficiently (i.e. the opposite way).\n\n2. As you mentioned in your response to my second question, if we split the images into many tiles, then many of the tiles will be very similar. Is this a problem for only MNIST or for all of the datasets?\n\n3. 
You mentioned in your response to my fourth question that, the experiment that sorts 10 numbers with T=100000 steps has the accuracy go down from 100% to ~94%. How does this accuracy change during this process? Does the accuracy go up and down during the 100000 steps or the accuracy just goes down once at some point? Will the accuracy go back to 100% if T is larger? Also, what about the other experiments? Will the performances be stable if T is larger than 4?", "Once we update the paper with Mena's results added into our tables, comparison of the results should be easy regardless of whether we report accuracy or (1-accuracy). Unless you feel strongly about this, we prefer to use accuracy, precisely due to this possible confusion between (1-accuracy) error and mean squared error.\nThe instance you mention where Mena et al > ours > our reimplementation is wrt MSE only on ImageNet 3x3 and wrt accuracy on ImageNet 3x3 and MNIST 4x4. This will become clearer with the update.\n\nA caveat about their results to be aware of, which we mention in Appendix E.2, is that they upscaled their MNIST images by a factor of 2. When we tried this, it lead to better results for all models (with ordering between models preserved) in our testing too. However, this seemed too arbitrary to us and we decided to not do this. We will mention this detail about their results in the main body because we include their results now.", "We are given an input that is a set of cardinality $N$ of feature vectors with dimensionality $M$, represented as a matrix $X \\in \\R^{N \\times M}$ with the feature vectors as rows in some arbitrary order. The problem is to find a function $g: R^{N \\times M} \\to R^{N \\times M}$ that permutes the rows of $X$ into a sequence of feature vectors $g(X) = Y$, with the constraints that $h$ must be:\n\n1. invariant to permutation of rows of X, i.e. $g(X) = g(PX)$ for all permutation matrices $P$,\n2. differentiable almost everywhere.\n\n\nThis is essentially the problem statement that we give in the first paragraph of section 2. Note that we are doing something much more general compared to the papers you cited in your review:\n1. there is no ground-truth of what the cost/weight between set elements is,\n2. there is no ground-truth notion of what a \"good\" output sequence $Y$ is.\n\nThese aspects can be learned in the usual Empirical Risk Minimisation setting. The differentiability constraint is necessary as we are interested in using deep neural networks trained with gradient descent to provide the input set $X$ and process $h(X)$ further.", "Thank you for your response. As I requested in my previous comment, \"could you please provide a complete and self-contained problem statement?\" Please provide a mathematically rigorous __problem__ statement, and not the proposed method of solving it. Thank you.", "Just to clarify about the error, I meant (1-accuracy), not MSE and I agree MSE could be harder to compare.\nYour reimplementation wasn't significantly worse but if I recall correctly there were a few instances for which the following was true: Mena et al > Yours > Your reimplementation of Mena.", "Thank you for the review.\n\nScalability to large sets is indeed an issue with our model. As we discuss in section 5, its intended use is more with small sets of complex elements (e.g. objects in images) rather than large sets of simple elements (e.g. point clouds), with our VQA experiment being an instance of the former.\nFor large sets there are some optimisations that can be made. 
For example, if there are many pairs of elements where we can guess that they will not affect each other's local ordering much (e.g. points in point clouds that are far away from each other), we can make the comparisons sparse by only comparing points that are reasonably close to each other, which just involves a pre-processing step. We also mention in the Discussion section that a divide-and-conquer strategy (perhaps merge-sort-like) could work, reducing the comparisons down to O(n log n), though it might assume transitivity of the cost function.\n\n1. Eqn (2), without any relaxations, is a standard Quadratic Assignment Problem (see Appendix C to turn (2) into a more standard formulation). These problems are known to be NP-hard, and even epsilon-approximability is NP-hard [ https://dl.acm.org/citation.cfm?id=321975 ]. For the quadratic programming formulation (when constraints on P are relaxed), we already cite (Pardalos & Vavasis, 1991) in our paper for NP-hardness.\n\n2. There is a bit of a misunderstanding in interpreting the 4x4 and 5x5 results because we did not explain the metric sufficiently. Especially for MNIST, many of the tiles end up completely blank or very similar when the image is split into many small tiles. This means that there are many equally-valid solutions where blank tiles are assigned to different positions. However, the accuracy metric does not account for these multiple possible solutions properly (only one of these is considered to be the ground truth), so all the blank tiles must happen to be assigned to the blank spots in the ground-truth order for the accuracy to be 1.\n\nWe will improve this by swapping accuracy (Table 1) with mean squared error (Appendix E Table 4) for this experiment. Until we have updated the paper, you can see in Table 4 that while there is still a worsening of error as we increase the number of tiles, the change is less abrupt and the models do decently on MNIST for higher number of tiles.\n\nAs to ideas for improving results in the setting of many tiles for image mosaics: in the grid case, it is possible to not only have row-wise and column-wise permutations, but also over diagonals, which should help in constraining what permutations are considered good by the model. It is also possible to modify the row-wise comparison to not act on each row individually, but consider assignments to other rows simultaneously (vice versa for columns).\nIn some initial work we had some success on image mosaics with an alternative cost function for (1) that only considers correct ordering between direct neighbours, but it struggled even with small instances of the sorting task so we did not pursue this further. It may be possible to fix this issue, perhaps by doing a convex combination of the cost function in the paper and this modified cost function, which should give better results.\n\n3. As theoretical analysis of these complex learned systems is rather difficult (it is a bit like asking for theoretical analysis of when, for example in the context of sequence modeling, CNNs are better than RNNs), we instead point to the results in section 4: PO-LA is suitable for when LinAssign is decent, which is the case when absolute positioning of set elements is useful (since LinAssign only directly models absolute positioning). The PO part is then used to refine this. PO-U is suitable for when relative position between set elements becomes more important, resulting in LinAssign only learning things that are not useful and thus being a detractor. 
If the input set can have variable size (e.g. in VQA), it is not possible to use PO-LA, only PO-U.\n\n4. We took a net that was trained to sort 10 numbers with T=4 and evaluated it with T=100 000. The accuracy degraded from 100% accuracy to ~94% accuracy. We also took a net trained with T=20 and evaluated it with T=100 000, which resulted in 100% accuracy this time. It appears that if the permutations seen in training are sufficiently converged already -- which is the case for T=20 -- it converges stably at evaluation time when run for signficantly longer too.\n\n5. We will add this as an appendix to the paper. By looking at the gradients before and after the Sinkhorn, it becomes clear that gradients vanish when trying to differentiate through the Sinkhorn the further away from the uniform initialisation one gets. That means that it is difficult to learn Ps that are close to proper permutation matrices (all 0 and 1 entries) if we do not use the alternative gradient.\n\nTypo in equation (5): Good catch! We will fix this in the revision of our paper.", "Thank you for the review.\n\nIn general, we intend our model to be useful for relatively small sets of complex objects like in VQA, rather than large sets of simple objects where the cubic time complexity indeed becomes a big problem.\nAbout your concern whether the results hold up when controlling for parameters and computation: the time increase by using our model compared to the baseline is about 30% (4400 seconds per epoch instead of 3400 seconds). This is a similar increase in computation time as the change from BAN-8 to BAN-12 (increasing number of attention glimpses to 12) in their paper (Kim et al., 2018), Table 1. Their difference is within one standard deviation (0.04% increase, stdev of 0.11%), so simply increasing the BAN model size alone is clearly running into diminishing returns already. Our model does not add a significant number of parameters compared to changing BAN-8 to BAN-12. Our model also results in qualitatively different improvements: general improvements in VQA typically result in a roughly even improvements in all the categories, whereas our model improves on number questions significantly more than other categories. We will make this clearer in the revision.\n\n- We will swap the table of errors (Appendix E Table 4) with the table of accuracies (Table 1) in the main body. We initially decided on reporting accuracy in the main body because MSE may not be directly comparable: we do not know whether their pixel values were scaled to have unit variance (our choice) or to be between 0--1, the latter of which would make their errors seem lower than ours for the same reconstruction (in personal communication, they said that they did not normalise the data, but our results suggest that they did, since our reproduced MSEs on MNIST closely match theirs). As Reviewer 1 seemed to have a slight misunderstanding with accuracy too, we agree that comparing MSE in the main body is likely clearer.\n\nContrary to what you say, our reproductions on MNIST are fairly close to what they report (our reproduced MSEs are roughly even, accuracies are slightly worse) and only on ImageNet are our reproduced results of their model worse. To be clear, the relevant row in their results to compare our accuracy to is \"Prop. any wrong\" (reconstruction is correct only if all tiles are correct, this is what we use), not \"Prop. wrong\" (reconstructions of the tiles being individually correct). 
As per your suggestion, we will include their results in our tables to make the appropriate comparison easier.\n\n- We will perform some analysis of the learned f and include it as an appendix. For number sorting, it is enough for it to learn f(x_i, x_j) = x_i, so F(x_i, x_j) = x_i - x_j, which is a sensible comparison function. In initial analysis it appears to learn a scaled and shifted version of that. We are currently looking into what it learns for image mosaics.\n\n- We will add some figures to make the network architectures for the different tasks clearer.\n", "It is indeed computed separately in the forward pass of the model. But during training, backpropagation adjusts the weights that the cost function depends on, based on the gradients that are backpropagated through the algorithm.\n\nWe have a model that is given some input data, where each individual data sample is a set, and some target output. During the forward pass (first half of one training step, or one full inference step), a neural net turns the set into the matrix of pairwise costs and our permutation algorithm is run on it to produce a permutation matrix. The permutation matrix is applied to the set to produce a sequence, which can now be processed further by another neural net to produce the predicted output. During the second half of this training step, gradients of a loss function are backpropagated through the network to minimise how different the predicted and target outputs are. To backpropagate through to the weights that determine what the cost function is, we have to backprop through the algorithm first. At the start of training a network, all weights are random so the permutation produced is nonsense, since the neural net that determines the cost function has not learned what permutations are good for the particular task is yet. Through training, the neural net that computes the cost function ($f$ in the paper) receives gradients (by backpropagating the gradients of the loss function through the permutation algorithm) to learn how to assign costs to inputs appropriately.", "Okay so then it is not clear to me what the paper is doing. My interpretation of the writing is that the cost function is computed __separately__ from optimising the permutation. For example the beginning of Section 2.2 says \"Now that we can compute the total cost of a permutation, we want to optimise this cost with respect to a permutation.\" Is that not true? If so, then could you please provide a complete and self-contained problem statement? Thank you.", "Thank you for the review. The key difference you missed that separates our work from the papers you cite is that our method is differentiable. In our problem set-up, we are not given the pairwise costs; they have to be learned. In order for these costs to be learnable with gradient descent, we have to be able to differentiate through the algorithm. This is possible with our method, but not possible with traditional literature on feedback arc sets. Experimental comparisons to the papers you list are thus not meaningful, since the costs that these algorithm operate on have to be learned first. 
Does this sufficiently clarify for you why our methodology is not reinventing the wheel?\n\nWe already cite the particular convex relaxation of permutations that we use (Fogel et al., 2013; Adams & Zemel, 2011) and the NP-hardness of the problem (Pardalos & Vavasis, 1991).\n\nThough we mention this matter of differentiability several times throughout the paper, we will add a sentence in the Related Works section to make this distinction with the work on feedback arc sets even clearer.", "This paper proposed an interesting idea of learning representations of sets by permutation optimizations. Through learning a permutation of the elements of a set, the proposed algorithm can learn a permutation-invariant representation of that set. To deal with the underlying difficult combinatorial optimization problem, the authors proposed to relax the optimization constraints and instead optimize over the set of doubly-stochastic matrices with reparameterization using the Sinkhorn operator. The cost function of this optimization is related to a pairwise ordering cost, which compares the order for each pair of the elements.\n\nThe idea of using pairwise comparison information to learn permutations is interesting. The total cost function utilizes the comparison information and optimization over this cost function can lead to a permutation-invariant representation of the set. The idea of using the Sinkhorn operator to reparameterize the doubly-stochastic matrices makes the optimization objective differentiable. Also, the experiment results compared with some baseline algorithms showed the success of the proposed methods in many different tasks.\n\nMy major concern of the proposed method is on whether this method can be applied to large sets. Since the algorithm compares all pairs of elements in the set, we need O(N^2) comparisons for a set of size N and hence the proposed method might be slow if N is large. Is it possible to improve the efficiency for large sets?\n\nQuestions and Suggestions:\n\n1. Since the authors wants to approximately solve the objective function in Equation (2), it is better if we can see a proof showing why this optimization problem is difficult.\n\n2. For the experiment in Section 4.2, it seems that all methods (including the proposed methods and the baseline methods) are not performing well if the images are split to at least 4 * 4 equal-size tiles. I understand that currently the authors applied their method to the case of grid permutation by simply adding all cost functions of all rows and columns. Is it possible to extend the proposed method to the grid case in another way so that the results under this setting is better? \n\n3. It will be better if the authors can propose some more insights (probably with some theoretical analysis) when can the PO-U method performs better and when can the PO-LA method performs better.\n\n4. The authors mentioned that, the proposed method can get good permutations even for only T=4 steps. What if we continue running the algorithm? Will the permutation converges stably?\n\n5. The authors proposed to update the permutation matrix parameters in an alternative way (Equation (7)) and mentioned that this update works significantly better in the experiments. It will be great if the authors can have a theoretical analysis on why this is true since P and \\tilde P can be quite different from each other for an arbitrary \\tilde P matrix.\n\n\nMinor comment:\n\nI think there is a typo in Equation (5). 
The entry \tilde P_{pq} is related not only to the entry P_{pq}, but also to the other entries of the matrix P. Hence, I think Equation (5) should be rewritten as a matrix multiplication." ]
[ 3, -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 2, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2019_HJMCcjAcYX", "iclr_2019_HJMCcjAcYX", "r1ecut0WRX", "rJeFrla-AX", "B1xeI-0-R7", "rkxAsPCgAX", "rygmYbjlAQ", "iclr_2019_HJMCcjAcYX", "iclr_2019_HJMCcjAcYX", "Syl2O-tkR7", "SylBN0Pv6m", "H1eeEy2jaQ", "H1e6T_sI3Q", "SJeO5obc3X", "B1x7WnW22Q", "iclr_2019_HJMCcjAcYX", "SJlJE61t6Q", "SklXCYfXpm", "HkePjnBXaQ", "SJlzqsLLpQ", "r1eTkiAbT7", "rkx_m_MQaQ", "H1e6T_sI3Q", "B1x7WnW22Q", "HkxvCeAW6m", "r1xwjcdgp7", "SJeO5obc3X", "iclr_2019_HJMCcjAcYX" ]
iclr_2019_HJMHpjC9Ym
Big-Little Net: An Efficient Multi-Scale Feature Representation for Visual and Speech Recognition
In this paper, we propose a novel Convolutional Neural Network (CNN) architecture for learning multi-scale feature representations with good tradeoffs between speed and accuracy. This is achieved by using a multi-branch network, which has different computational complexity at different branches with different resolutions. Through frequent merging of features from branches at distinct scales, our model obtains multi-scale features while using less computation. The proposed approach demonstrates improvement of model efficiency and performance on both object recognition and speech recognition tasks, using popular architectures including ResNet, ResNeXt and SEResNeXt. For object recognition, our approach reduces computation by 1/3 while improving accuracy significantly, by over 1 percentage point over the baselines, and the computational savings can be as high as 1/2 without compromising the accuracy. Our model also surpasses state-of-the-art CNN acceleration approaches by a large margin in terms of accuracy and FLOPs. On the task of speech recognition, our proposed multi-scale CNNs save 30% FLOPs with slightly better word error rates, showing good generalization across domains.
accepted-poster-papers
This paper proposes a novel CNN architecture for learning multi-scale feature representations with good tradeoffs between speed and accuracy. The reviewers generally arrived at a consensus to accept.
val
[ "Syll-AfcTQ", "BJxO_af9pQ", "Hye0Uaf96X", "B1xmXpMqpm", "BygYyRtc3Q", "BJx2-Tcv3m", "B1egeFYOom" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We updated the pdf to address the comments from the reviewers. (the revised parts are highlighted in blue.)", "We thank the reviewer for the positive comments on our approach. We have included in Table 11 (Page 18) the results of bL-ResNet-50 and bL-ResNet-101 with alpha and beta both set to be 1. Not surprisingly, both models achieve the best accuracy, but they also become most costly in computation and are parameter heavy.\n", "We thank the reviewer for the constructive comments. \n\n- Transfer capability of bLNet:\nWe used bLNet as a backbone network for feature extraction in the Faster RCNN + FPN detector.\nThe detection results on PASCAL VOC and COCO datasets are included in Table 10 in Appendix A6.\nOur bLNet achieves comparable or better accuracy than the baseline detectors while reducing FLOPs by about 1.5 times.\nPlease refer to Table 10 in Appendix A6 for more detail.\n\n- Memory requirements of bLNet:\nWe benchmarked the GPU memory consumption in runtime at both the training and test phases for all the models evaluated in Fig. 3.\nThe results are shown in Fig. 5 in Appendix A7. The batch size was set to 8, which is the largest number allowed for NASNet on a P100 GPU card (16 GiB memory). The image size for any model in this benchmark experiment is the same as that used in the experiment reported in Fig. 3. For bLNet, the input image size is 224x224 in training and 256x256 in test.\n\nFrom Fig. 5, we can see that bLNet is the most memory-efficient for training among all the approaches. \nIn test, bL-ResNeXt consumes more memory than inception-resnet-v2 and inception-v4 at the same accuracy, \nbut bL-SEResNeXt outperforms all the approaches. Note that NASNet and PNASNet are not memory friendly.\nThis is largely because they are trained on a larger image size (331x331) and these models are composed of many layers.\n", "We thank the reviewer for the positive comments on our approach. We have revised the manuscript to clarify our contributions in the introduction. For the parameters alpha and beta in bLNet, although they could be tuned for each layer, we fixed them (alpha=2 and beta=4) in all our experiments except in the ablation study. We found that this universal setting in general leads to good tradeoffs between accuracy and computation cost among all the models consistently. In the future, we are interested in exploring reinforcement learning to search for optimal alpha and beta to achieve a better tradeoff.\n", "This paper presents a novel multi-scale architecture that achieves a better trade-off speed/accuracy than most of the previous models. The main idea is to decompose a convolution block into multiple resolutions and trade computation for resolution, i.e. low computation for high resolution representations and higher computation for low resolution representations. In this way the low resolution can focus on having more layers and channels, but coarsely, while the high resolution can keep all the image details, but with a smaller representation. The branches (normally two) are merged at the end of each block with linear combination at high resolution. 
Results for image classification on ImageNet with different network architectures and for speech recognition on Switchboard show the accuracy and speed of the proposed model.\n\nPros:\n- The idea makes sense and it seems GPU friendly in the sense that the FLOPs reduction can be easily converted into a real speed-up\n- Results show that the joint use of two resolutions can provide better accuracy and lower computational cost, which is normally quite difficult to obtain\n- The paper is well written and experiments are well presented.\n- The appendix shows many interesting additional experiments\n\nCons:\n- The improvement in performance and speed is not exceptional, but steady on all models.\n- Alpha and beta seem to be two hyper-parameters that need to be tuned for each layer.\n\nOverall evaluation:\nGlobally the paper seems well presented, with an interesting idea and many thorough experiments that show the validity of the approach. In my opinion this paper deserves to be published.\n\n\nAdditional comments:\n- In the introduction (top of page 2) and in the contributions, the advantages of this approach are explained in a different manner that can be confusing. More precisely, in the introduction the authors say that bL-Net yields 2x computational savings with better accuracy. In the contributions they say that the savings in computation can be up to 1/2 with no loss in accuracy. \n", "The authors propose a new CNN architecture and show results on object and speech recognition. In particular, they propose a multi-scale CNN module that processes feature maps at various scales. They show compelling results on IN and a reduction of compute complexity.\n\nPros:\n(+) The paper is well written\n(+) The method is elegant and reproducible\n(+) Results are compelling and experimentation is thorough\nCons:\n(-) Transfer to other visual tasks, beyond IN, is missing\n(-) Memory requirements are not mentioned, besides FLOPs, speed and parameters\n\nOverall, the proposed approach is elegant and clear. The impact of the multi-scale module is evident, in terms of FLOPs and performance. While their approach performs a little worse than NASNet, both in terms of FLOP efficiency and top1-error, it is simpler and easier to train. I'd like for the authors to also discuss memory requirements for training and testing the network. \n\nFinally, various papers have appeared over the recent years showing improvements over baselines on ImageNet. However, most of these papers are not impactful, because they do not show any impact on other visual tasks, such as detection. On the contrary, methods that do transfer get adopted very fast. I would be much more convinced of this approach if the authors showed similar performance gains (both in terms of complexity and metrics) for COCO detection. \n", "The big-little module is an extension of the multi-scale module. Different scales take different complexities: higher complexity for low scale, and lower complexity for high scale. Two schemes of merging two branches are also discussed, and the linear combination is empirically better. \n\nAs expected, the results are better than ResNets, ResNeXts, SEResNeXts. I do not have comments except that an ablation study is needed to show the results for more choices of alpha, beta, e.g., alpha = 1, beta = 1." ]
[ -1, -1, -1, -1, 7, 6, 7 ]
[ -1, -1, -1, -1, 4, 5, 4 ]
[ "iclr_2019_HJMHpjC9Ym", "B1egeFYOom", "BJx2-Tcv3m", "BygYyRtc3Q", "iclr_2019_HJMHpjC9Ym", "iclr_2019_HJMHpjC9Ym", "iclr_2019_HJMHpjC9Ym" ]
iclr_2019_HJe62s09tX
Unsupervised Hyper-alignment for Multilingual Word Embeddings
We consider the problem of aligning continuous word representations, learned in multiple languages, to a common space. It was recently shown that, in the case of two languages, it is possible to learn such a mapping without supervision. This paper extends this line of work to the problem of aligning multiple languages to a common space. A solution is to independently map all languages to a pivot language. Unfortunately, this degrades the quality of indirect word translation. We thus propose a novel formulation that ensures composable mappings, leading to better alignments. We evaluate our method by jointly aligning word vectors in eleven languages, showing consistent improvement with indirect mappings while maintaining competitive performance on direct word translation.
accepted-poster-papers
This paper provides a simple and intuitive method for learning multilingual word embeddings that makes it possible to softly encourage the model to align the spaces of non-English language pairs. The results are better than learning just pairwise embeddings with English. The main remaining concern (in my mind) after the author response is that the method is less accurate empirically than Chen and Cardie (2018). I think however that given that these two works are largely contemporaneous, the methods are appreciably different, and the proposed method also has advantages with respect to speed, the paper here is still a reasonable candidate for acceptance at ICLR. However, I would like to request that in the final version the authors feature Chen and Cardie (2018) more prominently in the introduction and discuss the theoretical and empirical differences between the two methods. This will make sure that readers get the full picture of the two works and understand their relative differences and advantages/disadvantages.
train
[ "ryllvPPg0X", "HketImInnX", "Bke2ofhw37", "SyesiPQrhQ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Dear reviewers: could you please take a look at the author response? I think it is comprehensive, and could very well address some of the concerns expressed in the original reviews. I'd appreciate any additional feedback or discussion, which would help write the final review of the paper.", "The authors present a method for unsupervised alignment of word across multiple languages. In particular, they extend an existing unsupervised bilingual alignment to the case of multiple languages by adding constraints to the optimization problem. The main aim is to ensure that the embeddings can now be composed and the performance (alignment quality) does not degrade across multiple compositions.\n\nStrengths\n- Very clearly written\n- A nice overview of existing methods and correct positioning of the author's contributions in the context of these works\n- A good experimental setup involving multiple languages\n\nWeaknesses\n- I am not sure how to interpret the results in Table 2 and Table 3 (see questions below).\n\nQuestions\n- On page 7 you have mentioned that \"this setting is unfair to the MST baseline, since ....\" Can you please elaborate on this? I am not sure I understand this correctly.\n\n- Regarding results in Table 2 and 3: It seems that there is a trade-off while adding constraints which results in poor bilingual translation quality. I am not sure is this is acceptable. I understand that your goal is to do indirect translation but does that mean we should ignore direct translation ?\n\n- In Table 3 can you report both W-Proc and W-Proc* results ? Is it possible that the GW-initialization helps bilingual translation as the performance of W-Proc* is clearly better than W-Proc in Table 2. However, could it be the case that this somehow affects the performance in the indirect translation case? IMO, this is worth confirming.\n\n- In Table 3, you are reporting average accuracies across and within families. I would like to see the numbers for all language pairs independently. This is important because when you consider the average it is quite likely that for some language pair the numbers were much higher which tilts the average in favor of some approach. Also looking at the individual numbers will help us get some insights into the behavior across language pairs.\n\n- In the motivation (Figure 1) it was mentioned that compositions can be done (and are often desirable) along longer paths (En-Fr-Ru-It). However, in the final experiments the composition is only along a triplet (X-En-Y). Is that correct or did I misinterpret the results? If so, can you report the results when the number of compositions increases?\n\n\n\n ", "This paper is concerned with the idea of inducing multilingual word embeddings (i.e., word vector spaces where words from more than two languages are represented) in an unsupervised way using a mapping-based approach. The main novelty of the work is a method, inspired by recent work of Nakashole and Flauger, and building on the unsupervised bilingual framework of Grave et al., which aims at bypassing the straightforward idea of independently mapping N-1 vector spaces to the N-th pivot space by adding constraints to ensure that the learned mappings can be composed (btw., it is not clear from the abstract what this means exactly).\n\nIn summary, this is an interesting paper, but my impression is that it needs more work to distinguish itself from prior work and stress the contribution more clearly. 
\n \nAlthough 11 languages are used in evaluation, the authors still limit the evaluation only to (arguably) very similar languages (all languages are Indo-European and there are no outliers, distant languages or languages from other families at all, not even the usual suspects like Finnish and Hungarian). Given the observed instability of GAN-based unsupervised bilingual embedding learning, dissected in Sogaard et al.'s paper (ACL 2018) and also touched upon in the work of Artetxe et al. (ACL 2018), one of the critical questions for this work should also be: is the proposed method stable? What are the (in)stability criteria? When does the method fail and can it lead to sub-optimal solutions? What is the decrease in performance when moving to a more distant language like Finnish, Hungarian, or Turkish? Is the method more robust than GAN-based models? All this has to be at least discussed in the paper. \n\nAnother question is: do we really want to go 'fully unsupervised' given that even a light and cheap source of supervision (e.g., shared numerals, cognates) can already result in more robust solutions? See the work of Artetxe et al. (ACL 2017, ACL 2018), Vulic and Korhonen (ACL 2016) or Sogaard et al. (ACL 2018) for some analyses on how the amount of bilingual supervision can yield more (or less) robust models? Is the proposed framework also applicable in weakly-supervised settings? Can such settings with weak supervision guarantee increased robustness (and maybe even better performance)? I have to be convinced more strongly: why do we need fully unsupervised multilingual models, especially when evaluation is conducted only with resource-rich languages?\n\nAnother straightforward question is: can the proposed framework handle cases where there exists supervision for some language pairs while other pairs lack supervision? How would the proposed framework adapt to such scenarios? This might be an interesting point to discuss further in Section 5.\n\nStyle and terminology: it is not immediately clear what is meant by (triplet) constraints (which is one of the central terms in the whole work). It is also not immediately clear what is meant by composed mappings, hyper-alignment (before Section 4), etc. There is also some confusion regarding the term alignment as it can define mappings between monolingual word embedding spaces as well as word-level links/alignments. Perhaps, using mapping instead of alignment might make the description more clear. In either case, I suggest to clearly define the key concepts for the paper. Also, the paper would contribute immensely from some running examples illustrating the main ideas (and maybe an illustrative figure similar to the ones presented in, e.g., Conneau et al.'s work or Lample et al.'s work). The paper concerns word translation and cross-lingual word embeddings, and there isn't a single example that serves to clarify the main intuition and lead the reader through the paper. The paper is perhaps too much focused on the technical execution of the idea to my own liking, forgetting to motivate the bigger picture.\n\nOther: the part on \"Language tree\" prior to \"Conclusion\" is not useful at all and does not contribute to the overall discussion. 
This could be safely removed and the space in the paper should be used for additional comparisons with more baselines (see above for some baselines).\n\nThe authors mention that their approach is \"relatively hard to scale\" only in their conclusion, while algorithmic complexity remains one of the key questions related to this work. I would like to see some quantitative (time) measurements related to the scaling problem, and a more thorough explanation of why the method is hard to scale. The complexity and non-scalability of the method was one of my main concerns while reading the paper and I am puzzled to see some remarks on this aspect only at the very end of the paper. Going back to algorithmic complexity, I think that this is a very important aspect of the method to discuss explicitly. The authors should provide, e.g., O-notation complexity for the three variant models from Figure 2 and help the reader understand pros and cons of each design also when it comes to their design complexity. Is the only reason to move from the star model to the HUG model computational complexity? This argument has to be stressed more strongly in the paper.\n\nTwo very relevant papers have not been cited nor compared against. The work of Artetxe et al. (ACL 2018) is an unsupervised bilingual word embedding model similar to the MUSE model of Conneau et al. (ICLR 2018) which seems more robust when applied on distant languages. Again, going back to my previous comment, I would like to see how well HUG fares in such more challenging settings. Further, a recent work of Chen and Cardie (EMNLP 2018) is a multilingual extension of the bilingual GAN-based model of Conneau et al. Given that the main goal of this work and Chen and Cardie's work is the same: obtaining multilingual word embeddings, I wonder how the two approaches compare to each other. Another, more general comment concerns the actual evaluation task: as in prior work, it seems that the authors optimise and evaluate their embeddings solely on the (intrinsic) word translation task, but if the main goal of this research is to boost downstream tasks in low-resource languages, I would expect additional evaluation tasks beyond word translation to make the paper more complete and convincing.\n\nThe method relies on a wide spectrum of hyper-parameters. How are these hyper-parameters set? How sensitive is the method to different hparams configurations? For instance, why is the Gromov-Wasserstein approach applied only to the first 2k vectors? How are the learning rate and the batch size determined?\n\nMinor:\nWhat is W in line 5 of Algorithm 1?\nGiven the large number of symbols used in the paper, maybe a table of symbols put somewhere at the beginning of the paper would make the paper easier and more pleasant to read.\nI would also compare the work to another relevant supervised baseline: the work from Smith et al. (ICLR 2017). This comparison might further strengthen the main claim of the paper that indirect translations can also be found without degrading performance in multilingual embedding spaces.", "This is a work regarding the alignment of word embeddings for multiple languages. Though there are existing works similar to this one, most of them consider only a pair of languages, resulting in the composition issue mentioned in this work. The authors proposed a way of using a regularization term to reduce such degraded accuracy and demonstrate the validity of the proposed algorithm via experiments. I find the work to be interesting and well written. 
Several points that I want to bring up:\n\n1. The language tree at the end of section 5 is very interesting. Does it change if the initialization/parameters are different?\n\n2. The matrix P in (1) is simply a standard permutation matrix. I think the definitions are redundant.\n\n3. The experiment results are expected since the algorithms are designed for better composition quality. An additional experiment, e.g. classification of instances in multiple languages, could further help demonstrate the strength of the proposed technique.\n\n4. How to choose the regularization parameter \mu and what's the effect of \mu?\n\n5. Some writing issues, like the notation for the orthogonal matrix set: both \mathcal{O} and \mathbb{O} are used." ]
[ -1, 5, 6, 7 ]
[ -1, 3, 4, 3 ]
[ "iclr_2019_HJe62s09tX", "iclr_2019_HJe62s09tX", "iclr_2019_HJe62s09tX", "iclr_2019_HJe62s09tX" ]
iclr_2019_HJeRkh05Km
Visual Semantic Navigation using Scene Priors
How do humans navigate to target objects in novel scenes? Do we use the semantic/functional priors we have built over years to efficiently search and navigate? For example, to search for mugs, we search cabinets near the coffee machine and for fruits we try the fridge. In this work, we focus on incorporating semantic priors in the task of semantic navigation. We propose to use Graph Convolutional Networks for incorporating the prior knowledge into a deep reinforcement learning framework. The agent uses the features from the knowledge graph to predict the actions. For evaluation, we use the AI2-THOR framework. Our experiments show how semantic knowledge improves the performance significantly. More importantly, we show improvement in generalization to unseen scenes and/or objects.
accepted-poster-papers
The authors propose an approach for visual navigation that leverages a semantic knowledge graph to ground and inform the policy of an RL agent. The agent uses a graphnet to learn relationships and support the navigation. The empirical protocol is sound and uses best practices, and the authors have added additional experiments during the revision period, in response to the reviewers' requests. However, there were some significant problems with the submission: there were no comparisons to other semantic navigation methods, the approach is somewhat convoluted and will not survive the test of time, and the authors did not conclusively show the value of their approach. The reviewers uniformly support the publication of this paper, but with low confidence.
train
[ "BkgGikf51N", "B1g19HJmhX", "S1glgBmmJN", "Hyxib3AJRm", "Skldw5RkAX", "Syxtu20kC7", "Sklpi3C7h7", "BkgRhp8Dj7", "r1liIPRCjQ", "Hkx_7tZ0jX" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "public" ]
[ "I upgraded my score from 6 to 7.\n\nThe authors have responded satisfactorily to my questions. I still find the method a bit convoluted and do not think that it will stand the test of time. However, the paper is competently done and is a fine addition to the literature. I support acceptance.", "This paper explores the use of semantic priors for semantic navigation. The semantic priors are derived from language datasets (in the form of word embeddings, which assign similar feature vectors to related words) and from visual datasets (Visual Genome, which represents relationships between objects that co-occur in scenes).\n\nThe general ideas are reasonable. The experimental protocol is sound and uses recent best practices. The results are fine.\n\nI'm a bit puzzled by the way the GCN is used. Figure 2 implies that the GCN doesn't actually use information from the current image. I.e., the GCN input doesn't change as the agent navigates the scene. (In Figure 2, the GCN path appears similar to the Word Embedding path. The Word Embedding path doesn't update when the agent moves, so the reader can infer that the GCN path doesn't update either.) But then I don't quite understand how the GCN incorporates information from the current scene.\n\nFigure 4 implies that the GCN is re-evaluated when the agent moves and the input image changes. But how is information from the image fed into the GCN? The text implies that an ImageNet classification model is run on the image. But why image classification and not object detection? It seems that what one would really want is to understand what objects are in the scene. And how is output of the image classification network supplied to the GCN? Is the target object type used as well? Overall it's not clear to me exactly how information from the current image is supplied to the GCN, why this mechanism is right, and what the GCN is expected to do. (I do understand how GCNs work, just not how exactly they are used here and why this precise usage is right for this application.) I hope the authors can clarify in the response.\n", "Thanks for the clarification regarding potential baselines, and for the additional results. It looks like in most cases the method's performance degrades gracefully as you test on different scene categories than what was used for training. I guess I was really wondering how the method would fare on this task *compared to baselines* but I didn't explicitly state this request so that's ok. I retain my positive rating.", "We appreciate the insightful feedback. We have addressed the questions and comments below. \n\n- Figure 2 implies that the GCN doesn't actually use information from the current image.\n\nAnswer: We have modified Figure 2 in the revision for clarification. The GCN actually takes the information from the current image by computing the 1000-class ImageNet classification score on it. This 1000-d score is forwarded to an FC layer which outputs a 512-d image embedding. For each node, this 512-d image embedding is concatenated with a word embedding (512-d), which creates a 1024-d feature for input. Note that the word embeddings are different corresponding to the semantic class for each node. \n\n- Why image classification and not object detection? \n\nAnswer: We agree that a detector might be better. That said, we did try to apply Faster R-CNN trained with COCO. However, the Faster RCNN detector generated a lot of false detections and localizations. 
The benefit of using the ImageNet classifier is that it gives a relatively more robust estimation, as it does not need to handle localization. Moreover, prediction of more diverse classes allows more complex relationship reasoning. \n\n- What is the GCN expected to do?\n\nAnswer: The input for each node in the GCN changes after every action based on the new observed image. The input for each node is the joint embedding of the current observation (image embedding) and the semantic class of the node (word embedding). By propagating the information through the edges of the knowledge graph, the information for each node is updated by its related nodes. Intuitively, in this way, the information of the existence of a “coffee machine” can be propagated to highlight the potential existence of a “mug”. We concatenate the response of all nodes to form a feature for policy estimation. This helps us generalize to navigation to “mug”, although we do not optimize directly for it during training.\n", "Thank you for the valuable comments and clarifying questions. Please find the responses to your questions and comments below. \n\n- Have you considered adding explicit relations between entities? Will it increase the navigation performance? If not, why?\n\nAnswer: Yes, we used the explicit relations, but they did not improve the results (we briefly mention that towards the end of Section 5.2). That is probably due to overfitting, since we have few examples for each type of relation. \n\n- It is unclear how many objects are used to construct a KG from an image. For example, are top-k objects identified by ResNet used to construct a KG?\n\nAnswer: The number of nodes is fixed and it is not image dependent. We are considering 53 objects of THOR, so our graph has 53 nodes (Section 5.1). \n\n- The agent receives a positive reward when it is close to the target (within a certain number of steps). Does this mean that the agent gets a positive reward on every step near the target while it's not in the final state? \n\nAnswer: In the scenario that we use the termination action, the agent should say “stop” when it observes the target object to get the reward. Otherwise, it will not receive the reward. In the scenario that we do not have the termination action, the agent might receive the positive reward at multiple points, since we reward the agent if the target is within the cone of visibility and within 1 meter from the agent. Once it receives the reward, the episode is finished.\n\n- Can you find all objects from AI2-THOR in the categories of ImageNet and of Visual Genome? Is there any information loss while constructing a KG from the classification result? \n\nAnswer: About half of the object categories are not in ImageNet. However, all of them appear in Visual Genome. \n\n- What is the average number of nodes of a KG? And is there any correlation between the size of the KG and the result?\n\nAnswer: We use 53 nodes. In Table 3 of the original submission (Table 4 of the revised version), we show how the performance degrades as we remove nodes and relations from the graph. \n\n- Why are the performances of the models unstable with the Bedroom dataset (in terms of variance)? \n\nAnswer: One run got stuck in a bad local minimum and that caused a large variance. We have multiple runs with different random initializations. \n\n- It is unclear what the corresponding image feature is. 
If two objects are identified in the same frame, do input features of these two objects share the same image features from ResNet?\n\nAnswer: Yes, that is right. We use image-level features (as opposed to object-level features). Some of our object categories are novel and unseen, so we cannot train supervised detectors for them. \n", "Thank you for the insightful feedback. Please find the answers to the questions and comments below. \n\n- Comparison with Anderson 2018, Zhu 2017 and Gupta 2017\n\nAnswer: Zhu et al., 2017 use the picture of the target as input, while we consider scenarios with unseen objects. Also, they train and test in the same scenes, and we use unseen scenes and targets for evaluation. Gupta et al., 2017 train their model using imitation learning (DAGGER), which means they have the optimal action. We use only the scalar reward for completing the task. Regarding Anderson et al., 2018, the agent receives natural language instructions, while we use a single word specifying the target category. Due to these discrepancies, we cannot make an apples-to-apples comparison. \n\n- I am also curious whether the proposed work generalizes across scene type categories.\n\nAnswer: Thanks for suggesting this experiment. We have added a new paragraph in the result section to describe this experiment. We also added a new table to the revised version (Table 3), where we show the results for training on one category and testing on another category. \n", "This paper tackles the problem of navigating scenes to find objects which are potentially not included in the training phase. To find an unseen object from a scene, the proposed model incorporates an external knowledge graph as an augmented input to the actor-critic model. To construct a knowledge graph, entities in a scene are identified by ResNet and then the link structure between entities is extracted from the Visual Genome dataset. Through the ablation study, it is shown that using the knowledge graph helps to track and identify unseen objects during training.\n\n- The original knowledge graph (KG) has relation labels (such as next to, on in figure 3) between different objects; however, the GCN does not take into account the relations between objects. Only co-occurrence patterns will be encoded into the KG constructed from an image. There are more complex graph convolutional models modelling relations between nodes, such as [1]. Have you considered adding explicit relations between entities? Will it increase the navigation performance? If not, why?\n- It is unclear how many objects are used to construct a KG from an image. For example, are top-k objects identified by ResNet used to construct a KG?\n- The description of the reward is a bit unclear as well, especially when the model is trained without the stop action. From the text, the agent receives a positive reward when it is close to the target (within a certain number of steps). Does this mean that the agent gets a positive reward on every step near the target while it's not in the final state?\n- This might be a trivial question, but I couldn't find it in the text. Can you find all objects from AI2-THOR in the categories of ImageNet and of Visual Genome? Is there any information loss while constructing a KG from the classification result? What is the average number of nodes of a KG? And is there any correlation between the size of the KG and the result?\n- Why are the performances of the models unstable with the Bedroom dataset (in terms of variance)? 
\n- The input feature of the GCN is a combination of a word feature and an image feature. It is clear that there is a corresponding word embedding for each of the identified objects, but it is unclear what the corresponding image feature is. If two objects are identified in the same frame, do input features of these two objects share the same image features from ResNet?\n\n[1] Schlichtkrull, Michael, et al. \"Modeling relational data with graph convolutional networks.\" European Semantic Web Conference. Springer, Cham, 2018. \n", "This work proposes to use semantic knowledge about the relationships and functionality of different objects to help in navigation tasks, in both familiar and unfamiliar situations. The paper is very well written and it is clear what the authors did. The approach seems sound, and while it combines two existing approaches (actor-critic reinforcement learning for navigation, and belief propagation using graph convolution networks), it is sufficiently novel to be of interest to at least some members of the community. The experimental evaluation is good, and the proposed method outperforms Mnih 2016 by a significant margin, especially in the more interesting settings. A good ablation study is provided. \n\nMy main concern is that there seems to be a larger pool of work in semantic navigation than what the evaluation includes. Anderson 2018, Zhu 2017 and Gupta 2017 seem relevant. While none of these use knowledge graphs, some of these show they outperform Mnih 2016 so would be stronger baselines. \n\nI am also curious whether the proposed work generalizes across scene type categories (e.g. if it learns on kitchens but is tested on living rooms). This would be an experiment in the spirit of unknown object/scene but even more challenging. 
[ -1, 7, -1, -1, -1, -1, 7, 7, -1, -1 ]
[ -1, 3, -1, -1, -1, -1, 1, 4, -1, -1 ]
[ "B1g19HJmhX", "iclr_2019_HJeRkh05Km", "Syxtu20kC7", "B1g19HJmhX", "Sklpi3C7h7", "BkgRhp8Dj7", "iclr_2019_HJeRkh05Km", "iclr_2019_HJeRkh05Km", "Hkx_7tZ0jX", "iclr_2019_HJeRkh05Km" ]
iclr_2019_HJeu43ActQ
NOODL: Provable Online Dictionary Learning and Sparse Coding
We consider the dictionary learning problem, where the aim is to model the given data as a linear combination of a few columns of a matrix known as a dictionary, where the sparse weights forming the linear combination are known as coefficients. Since the dictionary and coefficients parameterizing the linear model are unknown, the corresponding optimization is inherently non-convex. This was a major challenge until recently, when provable algorithms for dictionary learning were proposed. Yet, these provide guarantees only on the recovery of the dictionary, without explicit recovery guarantees on the coefficients. Moreover, any estimation error in the dictionary adversely impacts the ability to successfully localize and estimate the coefficients. This potentially limits the utility of existing provable dictionary learning methods in applications where coefficient recovery is of interest. To this end, we develop NOODL: a simple Neurally plausible alternating Optimization-based Online Dictionary Learning algorithm, which recovers both the dictionary and coefficients exactly at a geometric rate, when initialized appropriately. Our algorithm, NOODL, is also scalable and amenable to large-scale distributed implementations in neural architectures, by which we mean that it only involves simple linear and non-linear operations. Finally, we corroborate these theoretical results via experimental evaluation of the proposed algorithm with the current state-of-the-art techniques.
accepted-poster-papers
Alternating minimization is surprisingly effective for low-rank matrix factorization and dictionary learning problems. Better theoretical characterization of these methods is well motivated. This paper fills a gap by providing simultaneous guarantees for support recovery as well as coefficient estimates, with linear convergence to the true factors, in the online learning setting. The reviewers are largely in agreement that the paper is well written and makes a valuable contribution. The authors are advised to address some of the review comments around the relationship to prior work, highlighting novelties.
train
[ "ByeGb44y0X", "rJlnpeEk0m", "SJxXh0Q1A7", "SJgzc8XTnX", "B1xtSy6t2Q", "B1x81eSE2m" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We are grateful to the reviewer for the comments. In this revision, we have corrected the minor typos, added additional comparisons, and added a proof map for easier navigation of the results. Specific comments are addressed below. \n\n1. Regarding exact recovery guarantees — NOODL converges geometrically to the true factors. Therefore, the error drops exponentially with iterations t. In other words, as t —> infinity A_i —> A^*_i for i in [1,m] and x_j —> x^*_j for j in [1,m], where x_j is in R^m. We have added this clarification in Section 1.1.\n\n2. On tuning parameters — There are primarily three tuning parameters, namely eta_x (step-size for the IHT step), tau (threshold for the IHT step), and eta_A (step-size for the dictionary update step.) Our main result prescribes the theoretical values of these as shown in assumptions A.5 and A.6. Here, eta_x = Omega_tilde(k/sqrt(n)), tau = Omega_tilde(kˆ2/n), and eta_A = Theta(m/k). We have updated A.6. to include the order of these parameters.\n\nThe specific choices of these parameters, like other similar problems, depend on some a priori unknown parameters (e.g. the sparsity k, and the incoherence mu) which makes some level of tuning unavoidable. This is true for Arora '15 and Mairal '09, as well, where tuning is required for the choice of step-size for dictionary update, and for choice of regularization parameter and the step-size for coefficient estimation via FISTA. Note that, in our experiments we fix the step-size for FISTA as 1/L, where L is the estimate of the Lipschitz constant (since A is not known exactly).\n\nAlternately, since NOODL involves gradient-based updates for the coefficients and the dictionary, tuning (the step-sizes and the threshold) is relatively straightforward in practice, since it is based on a gradient descent strategy. In fact, to compile the experiments presented in this paper, we fixed step-size, eta_x, and threshold, tau, and tuned the step-size parameter eta_A only (Theta(m/k)). The choices of eta_A are 30 for k = 10,20 and eta_A = 15 for k =50,100, as shown in Fig.2., eta_A mostly effects the convergence rate as long as it is chosen in Theta(m/k). \n\nAlso, as shown in Table 4 (Appendix E), the tuning process for l1-based algorithms (i.e. FISTA) takes more time, since one needs to scan over the range of the regularization parameter to find one that works. This (a) adds to the computational time, and (b) since the dictionary is not known exactly, may guarantee recovery of coefficients only in terms of closeness in l2-norm sense, due to the error-in-variables (EIV) model for the dictionary. In this sense, NOODL is (a) simple to tune, (b) assures guaranteed recovery of both factors, and (c) is fast due to its geometric convergence properties. These factors highlight its applicability in practical DL problems. \n\n3. Definition of Hard Thresholding (HT) — As per the recommendation of the reviewer, we have repeated the definition of hard-thresholding (HT) initially presented in the \"Notation\" sub-section, in Section 2 for clarity.\n\n4. Comparison to other Online DL algorithms — As correctly observed by the reviewer, the overall structure of NOODL is similar to successful online DL algorithms. These successful algorithms (such as Mairal '09) leverage the progress made on both factors for convergence, however, do not guarantee recovery of the factors. 
On the other hand, the state-of-the-art provable DL algorithms focus on the progress made on only one of the factors (the dictionary), and do not have good performance in practice, since they incur a non-negligible bias; see Section 5 and Appendix E. NOODL bridges the gap between these two. In addition to our main theoretical result, which establishes conditions for exact recovery of both factors at a geometric rate, NOODL also has superior empirical performance, leading to a neurally-plausible practical online DL algorithm with strong guarantees; see Sections 3 and 4. Our work also paves the way for the development and analysis of related alternating optimization-based techniques.\n\nOn the reviewer's recommendation, we compare the performance of NOODL with one of the most popular alternating minimization-based online DL algorithms used in practice -- Mairal '09 -- in Fig. 2 and Table 4 (Appendix E). In this work, the authors show that alternating between an l1-based sparse approximation step and a dictionary update based on block co-ordinate descent converges to a stationary point. The other comparable techniques shown in Table 1 are not \"online\" and/or require stringent initializations, in terms of closeness to the true dictionary, as compared to NOODL. \n\nOur experiments show that, due to the geometric convergence to the true factors, NOODL outperforms competing state-of-the-art provable online DL techniques both in terms of overall computational time and convergence performance. These additional expositions further showcase the contributions of our work on both the theoretical and practical online DL fronts. ", "We thank the reviewer for the comments. As correctly observed by the reviewer, Arora et al. 2015 suffers from a bias in estimation both in the analysis and in the empirical evaluations. The source of this bias term is an irreducible error in the coefficient estimate (formed using the hard-thresholding step). NOODL overcomes this issue by introducing an iterative hard-thresholding (IHT)-based coefficient update step, which removes the dependence of the error in the estimated coefficients on this irreducible error, and ultimately the dictionary estimate. \n\nIntuitively, this approach highlights the symbiotic relationship between the two unknown factors — the dictionary and the coefficients. In other words, to make progress on one, it is imperative to make progress on the other. To this end, in Theorem 1 we first show that the coefficient error only depends on the dictionary error (given an appropriate number of IHT iterations R), i.e. we remove the dependence on x_0, which is the source of bias in Arora et al. 2015. We have added the intuition corresponding to this in the revised paper after the statement of Theorem 1 in Section 3. \n\nAnalysis of Computational Time — We have added the average per iteration time taken by the various algorithms considered in our analysis in Table 4 and Appendix E. 
\nThe primary takeaway is that although NOODL takes marginally more time per iteration as compared to other methods when accounting for just one (Lasso-based) sparse recovery for the coefficient update, it (a) is in fact faster per iteration, since it does not involve any computationally expensive tuning procedure to scan across regularization parameters; (b) owing to its geometric convergence property, achieves orders of magnitude superior error at convergence; and, as a result, (c) overall takes significantly less time to reach such a solution; see Appendix E for details.\n\nWe would like to add that since NOODL involves simple separable update steps, this computation time can be further lowered by distributing the processing of individual samples across cores of a GPU (e.g. via TensorFlow) by utilizing the architecture shown in Fig. 1. We plan to release all the relevant code as a package in the future.\n\nIn this revision, we have added a comparison to Mairal '09, a popular online DL algorithm. Further, we have also added a proof map, in addition to Table 3, for easier navigation of the results.", "We would like to thank the reviewer for the comments and for raising some subtle yet important questions. We address and clarify specific comments below. We have also made corresponding changes in the revised paper, and have added a proof map, in addition to Table 3, for easier navigation of the results. We have also added comparisons with Mairal '09, and an experimental evaluation of computational time.\n\n1. Noise Tolerance — NOODL also has a similar tolerance to noise as Arora et al. 2015 and can be used in noisy settings as well. We focus on the noiseless case here to convey the main idea, since the analysis is already very involved. Nevertheless, the proposed algorithm can tolerate i.i.d. sub-Gaussian noise, including Gaussian noise and bounded noise, as long as the \"noise\" is dominated by the \"signal\". Under the noisy case, the recovered dictionary and coefficients will converge to a neighborhood of the true factors, where the neighborhood is defined by the properties of the additive noise. \n\nIn other words, the noise terms will lead to additional terms which will need to be controlled for the convergence analysis. Specifically, the noise will add a term to the coefficient update in Lemma 2, and will affect the threshold, tau. For the dictionary, the noise will result in additional terms in Lemma 9 (which ensures that the updated dictionary maintains the closeness property). A precise characterization of the relationship between the level of noise and the size of the convergence neighborhood requires careful analysis, which we defer to a future effort.\n\n2. On eps_t and A.4. — Indeed, we don’t need to assume that eps_t is bounded. Specifically, using the result of Lemma 7, we have that eps_0 undergoes a contraction at every step; therefore, eps_t <= eps_0. For our analysis we fix eps_t = O^*(1/log(n)), which follows from the assumption on eps_0 = O^*(1/log(n)) and Lemma 7. Following the reviewer’s comments, we have updated A.4., and moved the note about eps_t = O^*(1/log(n)) to Appendix A.\n\n3. Exact recovery of factors — Also, we would like to point out that NOODL recovers both the dictionary and coefficients exactly at a geometric rate. This means that as t —> infinity both the dictionary and coefficient estimates converge to the true factors without incurring any bias. 
We have added a clarification corresponding to this in the revised paper in Section 1.1 and after the statement of Theorem 1 in Section 3.", "The paper considers the problem of dictionary learning. Here the model is that we are given samples y, where we know that y = Ax, where A is a dictionary matrix, and x is a random sparse vector. The goal is typically to recover the dictionary A, from which one can also recover the x under suitable conditions on A. The paper shows that there is an alternating optimization-based algorithm for this problem that under standard assumptions provably converges exactly to the true dictionary and the true coefficients x (up to some negligible bias).\n\nThe main comparison with prior work is with [1]. Both give algorithms of this type for the same problem, with similar assumptions (although there is some difference; see below). In [1], the authors give two algorithms: one with a better sample complexity than the algorithm presented here, but which has some systematic, somewhat large, error floor below which it cannot go, and another which can obtain similar rates of convergence to the exact solution, but which requires polynomial sample complexity (the explicit bound is not stated in the paper). The algorithm here seems to build off of the former algorithm; essentially replacing a single hard thresholding step with an IHT-like step. This update rule is able to remove the error floor and achieve exact recovery. However, this makes the analysis substantially more difficult. \n\nI am not an expert in this area, but this seems like a nice and non-trivial result. The proofs are quite dense and I was unable to verify them carefully.\n\nComments:\n\n- The analysis in [1] handles the case of noisy updates, whereas the analysis given here only works for exact updates. The authors claim that some amount of noise can be tolerated, but do not quantify how much.\n\n- A.4 makes it sound like eps_t needs to be assumed to be bounded, when all that is required is the bound on eps_0.\n\n[1] Arora, S., Ge, R., Ma, T., and Moitra, A. Simple, Efficient, and Neural Algorithms for Sparse Coding. COLT 2015.", "The paper deals with the problem of recovering an exact solution for both the dictionary and the activation coefficients. As in other works, the solution is based on a proper initialization of the dictionary. The authors suggest using Arora 2015 as a possible initialization. The contribution improves Arora 2015 in that it converges linearly and recovers both the dictionary and the coefficients with no bias.\n\nThe main contribution is the use of an IHT-based strategy to update the coefficients, with a gradient-based update for the dictionary (NOODL algorithm). The authors show that, combined with a proper initialization, this has exact recovery guarantees. Interestingly, their experiments show that NOODL converges linearly in the number of iterations, while Arora gets stuck after some iterations.\n\nI think the paper is relevant and proposes an interesting contribution. The paper is well written and the key elements are in the body. However, there is a lot of important material in the Appendix, which I think may be relevant to the readers. It would be nice to have some more intuitive explanations, at least of Theorem 1. Also, the experiments clearly show its superiority with respect to Arora in terms of iterations (and error), but what about computational time?", "The main contributions of this work are essentially on the theoretical aspects. 
It seems that the proposed algorithm is not very original, because its two parts, namely prediction (coefficient estimation) and learning (dictionary update), have been widely used in the literature, using IHT and gradient descent, respectively. The authors need to describe in detail the algorithmic novelty of their work.\n\nThe definition of “recovering true factor exactly” needs to be given. The proposed algorithm involves several tuning parameters when alternating between two updating rules, an IHT-based update for the coefficients and a gradient descent-based update for the dictionary. Therefore, an appropriate choice of their values needs to be given.\n\nIn the algorithm, the authors need to define the HT function in (3) and (4).\n\nIn the experiments, the authors compare the proposed method only to the one proposed by Arora et al. 2015. We think that this is not enough, and more extensive experimental results would make for a better paper. \n\nThere are some typos that can be easily found, such as “of the out algorithm”." ]
[ -1, -1, -1, 7, 6, 7 ]
[ -1, -1, -1, 2, 2, 2 ]
[ "B1x81eSE2m", "B1xtSy6t2Q", "SJgzc8XTnX", "iclr_2019_HJeu43ActQ", "iclr_2019_HJeu43ActQ", "iclr_2019_HJeu43ActQ" ]
iclr_2019_HJf9ZhC9FX
Stochastic Gradient/Mirror Descent: Minimax Optimality and Implicit Regularization
Stochastic descent methods (of the gradient and mirror varieties) have become increasingly popular in optimization. In fact, it is now widely recognized that the success of deep learning is not only due to the special deep architecture of the models, but also due to the behavior of the stochastic descent methods used, which play a key role in reaching "good" solutions that generalize well to unseen data. In an attempt to shed some light on why this is the case, we revisit some minimax properties of stochastic gradient descent (SGD) for the square loss of linear models---originally developed in the 1990's---and extend them to \emph{general} stochastic mirror descent (SMD) algorithms for \emph{general} loss functions and \emph{nonlinear} models. In particular, we show that there is a fundamental identity which holds for SMD (and SGD) under very general conditions, and which implies the minimax optimality of SMD (and SGD) for sufficiently small step size, and for a general class of loss functions and general nonlinear models. We further show that this identity can be used to naturally establish other properties of SMD (and SGD), namely convergence and \emph{implicit regularization} for over-parameterized linear models (in what is now being called the "interpolating regime"), some of which have been shown in certain cases in prior literature. We also argue how this identity can be used in the so-called "highly over-parameterized" nonlinear setting (where the number of parameters far exceeds the number of data points) to provide insights into why SMD (and SGD) may have similar convergence and implicit regularization properties for deep learning.
accepted-poster-papers
The authors give a characterization of stochastic mirror descent (SMD) as a conservation law (17) in terms of the Bregman divergence of the loss. The identity allows the authors to show that SMD converges to the optimal solution of a particular minimax filtering problem. In the special overparametrized linear case, when SMD is simply SGD, the result recovers a recent theorem due to Gunasekar et al. (2018). The consequences for the overparametrized nonlinear case are more speculative. The main criticisms are around impact; however, I'm inclined to think that any new insight on this problem, especially one that imports results from other areas like control, is useful to incorporate into the literature. I will comment that the discussion of previous work is wholly inadequate. The authors essentially do not engage with previous work, and mostly make throwaway citations. This is a real pity. It would be nice to see better scholarship.
val
[ "HJgikArTR7", "HyxHcqKsCX", "Skg_Iqwq37", "HJxIZOj5RX", "ryxsZIo507", "HJg0RVi5CQ", "ryxWiLXCh7", "S1gpoI__hQ" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for their additional comments and have noted that they increased their score. We are not in disagreement with regards to SMD, or the reviewer's clarifying remarks about it. Furthermore, as also mentioned to Reviewer 3, we cannot comment on whether the implicit regularization of SMD is \"surprising\" or not.\n\nHowever, we do regretfully disagree with the reviewer that the paper's contributions are incremental.The reviewer bases this contention on their assertions that the implicit regularization of SMD is not surprising and that, as a machine learning researcher, they cannot appreciate the fundamental identity we show for SMD. We are not sure what to make of this last statement. The fundamental identity we show for SMD---both the local version in Lemma 4 and the global version in Lemma 5---can be regarded as a \"defining property\" of SMD, in the same way that our Eq (13), or Eqs (3.9) and (3.11) in the reference the reviewer cites, are defining properties of SMD. In other words, the SMD updates can be obtained from the identities in Lemmas 4 and 5, and therefore \"define\" the SMD updates. The advantage of these lemmas, especially Lemma 5, is that it gives a \"global\" interpretation of what SMD does, something that is not apparent---at all---from the defining local optimization of Eq (13) or the explicit update (15). It says something about what SMD is doing, and what quantities it is preserving, something which is not directly apparent from (13) or (15). It shows that the sum of D_{Li}(w,w_{i-1}), a certain measure of how well we are predicting the true parameter vector w, is bounded above by the sum of l(v_i), the loss of the noise. For quadratic loss, it upper bounds the energy of the prediction error by the energy of the noise.\n\nIn addition to yielding a novel interpretation for SMD, we show the utility of this fundamental identity, both to derive novel results, as well as to obtain more direct proofs of existing ones. We establish the minimax optimality of SMD, which generalizes the H-infinity optimality of SGD for linear models and quadratic loss. (Perhaps this is what the reviewer contends only the robust control community would appreciate. But, even if that were so---like it or not---this is a property of SMD that no other algorithms possess. It also can be interpreted in terms of the robustness of the algorithm in a manner we describe in the paper.) We further use the fundamental identity to give a deterministic proof of convergence for fixed step-size SMD in the over-parametrized case---something that had not been done before---and re-obtain implicit regularization in a very transparent way. The identity also allows us to say quite a bit in the over-parametrized nonlinear case (as happens in deep learning), and we outline this in Section 5.2. (The nonlinear case is currently under further investigation.) We have also used the fundamental identity to give a very direct proof of the stochastic convergence of SMD when the step size is vanishing and satisfies the Robbins-Monro conditions (this has been submitted to another venue).\n\nFurther, as mentioned by Reviewer 1, our fundamental identity raises the question of whether such identities can be found for other, perhaps more complicated, algorithms.\n\nAll this appears novel to us and we do not know what any of it has to do with being a machine learning researcher. 
SMD is used in machine learning and, in our view, new facts about it are expected to be of interest to machine learning researchers and practitioners.\n\nWe sincerely appreciate the reviewer's time and efforts in reading and evaluating our paper and value their comments. However, we had hoped the reviewer's recommendation would be based on objective facts, rather than subjective \"surprise\" and \"appreciation\".", "I thank the authors for having responded to my questions.\nThe description of mirror descent provided by the authors is correct. An alternative description of mirror descent is that it is similar to gradient descent with the Bregman divergence induced by the strongly convex potential function. Look at Proposition 3.2 in the following paper, which establishes this equivalence:\nhttps://web.iem.technion.ac.il/images/user-files/becka/papers/3.pdf\nWith this viewpoint, I remarked (in my official review) that the implicit regularization property of the SMD algorithm is not surprising.\n\nWhile the fundamental identity the authors prove is interesting for the robust control community, I, as a machine learning researcher, find it hard to appreciate this result. Modulo these, the contributions are very incremental. For this reason, I cannot recommend a strong acceptance.\n", "The authors look at SGD and SMD updates applied to various models and loss functions. They derive a fundamental identity (Lemma 2) for the case of a linear model and squared loss + SGD, and in general for non-linear models + SMD + non-squared loss functions. The main results shown are\n1. SGD is optimal in a certain sense for squared loss and a linear model.\n2. SGD always converges to a solution closest to the starting point.\n3. SMD, when it converges, converges to a point closest to the starting point in the Bregman divergence. The convergence of SMD iterates is shown for certain learning scenarios.\n\nPros: Shows implicit regularization properties for models beyond the linear case.\nCons: 1. The notion of optimality is w.r.t. a metric that is pretty non-standard and it was not clear to me as to why the metric is important to study (the ratio metric in eq 9).\n2. The result is not very surprising since SMD is pretty much a gradient descent w.r.t. a different distance metric. ", "We thank the reviewer for their feedback and for acknowledging the positive aspects of our work. Our responses to the reviewer’s comments follow.\n \n>> (1) Several results are extended from existing literature. For example, Lemma 1 and Theorem 3 have analogues in (Hassibi et al., 1996). Proposition 8 is recently derived in (Gunasekar et al., 2018). Therefore, it seems that this paper has some incremental nature. I am not sure whether the contribution is sufficient enough.<< \n\nAs mentioned in the paper, our results differ from these results in several aspects.\nThe results on the fundamental identity and minimax optimality, e.g. (Hassibi et al., 1996; Kivinen et al., 2006), had never been shown in this generality, i.e., for general potential functions, general loss functions, and general models. In fact, it was not clear how to extend the results. The key insight here is that one needs to consider the Bregman divergence of the loss function. \n\nWhile an equivalent form of Proposition 8 has been shown in (Gunasekar et al., 2018), we would like to point out that (1) this result naturally follows from our fundamental identity, and (2) our approach readily proves (deterministic) convergence, too. 
(Gunasekar et al., 2018) just focus on the KKT conditions after convergence has happened. Their approach does not allow the study of convergence.\n \n>> (2) The authors say that they show the convergence of SMD in Proposition 9, while (Gunasekar et al., 2018) does not. It seems that the convergence may not be surprising since the interpolating case is considered there.<< \n\nWe cannot comment on whether convergence in the interpolating case is “surprising” or not. What we can comment on is that proving the convergence of SGD with fixed step size, even in the interpolating case, is not trivial. In fact, (Gunasekar et al., 2018) considered the linear interpolating case too; but to the best of our knowledge, there is no result about convergence in their paper. We further give conditions on the loss function (such as convexity, and even quasi-convexity) for SMD to converge in the linear interpolating case.\n \n>> (3) Implicit regularization is only studied in the over-parameterized case. Is it possible to say something in the general setting with noises?<< \n\nThe setting where the model is not over-parametrized and there is noise is not as simple. As we mention in the paper, when the model is not over-parameterized, SGD (or SMD) with fixed step size cannot converge. Therefore one cannot speak of implicit regularization when convergence does not happen. \n\nOf course, one can get convergence if the step size is allowed to vanish to zero. In this case, convergence is not surprising, since with a vanishing step size one essentially stops updating the solution after a while. What is more interesting is what one converges to. In work that has been submitted to another venue, we have used the same fundamental identity to show that for iid noise, SGD and SMD converge to the “true” parameter vector, provided the vanishing step size satisfies the so-called Robbins-Monro conditions. Our proof is very simple and direct and avoids ergodic averaging or appealing to stochastic differential equations, which is how the customary proofs go.\n \n>> (4) The discussion on the implicit regularization for the over-parameterized case is a bit intuitive and based on strong assumptions, e.g., the first iterate is close to the solution set. It would be more interesting to present a more rigorous analysis with relaxed assumptions.<< \n\nWhile it would be nice---per the reviewer’s suggestion---to be able to prove convergence without the strong assumption that w_0 be close to the solution set, this may be a bit too ambitious and we are not sure how it can be done---or whether the statement is even true. We should reiterate our belief that this assumption is perhaps not too unrealistic in the highly over-parametrized case, because when the parameters are initialized at random around zero, w.h.p., the initial point will be close to the solution set (which is a very high-dimensional manifold). We have significantly expanded our discussions of the highly over-parametrized nonlinear case in Sec 5.2, with the hope of making the arguments clearer, all while acknowledging the fact that they are somewhat heuristic in nature. \n", "We thank the reviewer for their constructive feedback and for acknowledging the pros of the work. With respect to the two cons mentioned by the reviewer, we would like to make the following points.\n \n>> 1. The notion of optimality is w.r.t. 
a metric that is pretty non-standard and it was not clear to me as to why the metric is important to study (the ratio metric in eq 9).<< \n\nWhile the metric in (9) may be unfamiliar to the learning community, it is known in the estimation theory and control literature, and is in fact the H^{\\infty} norm (maximum energy gain) of the transfer operator that maps the unknown disturbances to the prediction errors. H^{\\infty} theory was developed to allow the design of estimators and controllers that were robust to model and disturbance uncertainty. There are connections to online learning (that have not yet been fully explored) and we remark on this in the footnote of Section 3.2. Furthermore, extending the minimax optimality results of (Hassibi et al., 1996) and (Kivinen et al., 2006) to general loss functions and nonlinear models had remained open, and our paper shows that the correct way to formulate the minimax problem is through the Bregman divergence of the loss. Finally, the minimax optimality results of SGD and SMD can be regarded as the global defining properties of these algorithms. They are usually defined through some local optimization and/or update and it is not clear what they are doing globally---whether they are optimizing anything globally. Our results show what it is that they globally optimize.\n \n>> 2. The result is not very surprising since SMD is pretty much a gradient descent w.r.t. a different distance metric.<< \n\nStochastic mirror descent (SMD) is a popular family of algorithms, which includes stochastic gradient descent (SGD) as a special case (when the potential function is the squared l2 norm), and has been studied in many papers, e.g. (Nemirovskii et al., 1983; Beck & Teboulle, 2003; Cesa-Bianchi et al., 2012; Zhou et al., 2017; Zhang and He, 2018; etc.). While each step of SMD can be viewed as transforming the variable $w$ with a mirror map, to $\\nabla\\psi(w)$, and adding the instantaneous gradient update to that variable, the updates are NOT the gradient with respect to that new variable, and therefore, it is not “gradient descent w.r.t. a different metric.” In fact, when the step size is very small, one can show that SMD updates the $w$ vector, not by the instantaneous gradient, but rather by the product of the inverse Hessian of the potential and the instantaneous gradient. \n\nFinally, for clarity, we would like to summarize the contributions of this work:\n\n1. We show that there exists a “fundamental identity” (i.e., a conservation law) which holds for SMD (and SGD) under very general conditions.\n\n2. Using this identity, we show that, for general nonlinear models and general loss functions, when the step size is sufficiently small, SMD (and SGD) are the optimal solution of a certain minimax filtering problem. This generalizes several results from the robust control theory literature, e.g., (Hassibi et al., 1994; Kivinen et al., 2006).\n\n3. We show that many properties recently proven in the literature, such as the “implicit regularization” of SMD (and SGD) in the over-parameterized linear case---when convergence happens---(Gunasekar et al., 2018), naturally follow from this theory. The theory also allows us to establish new results, such as the convergence (in a deterministic sense) of SMD (and SGD) in the over-parameterized linear case.\n\n4. 
We finally also use the theory developed in this paper to provide some speculative arguments as to why SMD (and SGD) may have similar convergence and implicit regularization properties in the so-called ``highly over-parameterized'' nonlinear setting common to deep learning.\n", "We thank the reviewer for their supportive feedback and comments. We agree that it would be nice to see whether invariant relationships of the type we have found for SMD would hold for more complicated iterative algorithms---we are currently investigating this. To the best of our abilities, we have made every attempt to clarify the paper and add more explanations and detailed discussions (as permitted by the page limitation). We hope this removes the barriers to a higher score. Now to the specific comments:\n \n>> 1. Can the authors explain how the minimax optimality result of Theorem 6 (and Corollary 7) is related to the main result of the paper, which is probably Proposition 8 and 9? Is that minimax optimality a different insight separate from the main line of the arguments (which I believe is Proposition 8 and 9)?<< \n\nYes, we consider the minimax optimality (Theorem 6) as a separate insight. It gives a new interpretation to SMD and shows the manner in which it is robust to uncertainty about the true parameter vector and the model of the noise sequence. It derives from the same identity (18), and extends known results in the estimation theory literature (e.g., Hassibi et al., 1996; Kivinen et al., 2006) to general SMD algorithms with general potential and general loss.\n \n>> 2. Is the gain in Proposition 9 over Proposition 8 all about using loss convexity to ensure that the SMD converges and w_\\infty exists?<< \n\nYes, that is correct.\n \n>> 3. The paper has highly insufficient comparisons to many other recent papers on the idea of \"implicit bias\", like https://arxiv.org/abs/1802.08246, https://arxiv.org/abs/1806.00468 and https://arxiv.org/abs/1710.10345. It seems pretty necessary that there be a section making a detailed comparison with these recent papers on similar themes.<<\n\nThank you for pointing out the above references---we have added them all. We also provide a brief comparison to our results (see Sec 1.1 “Our Contributions”, as well as the discussion below Proposition 8). The main difference is that our techniques allow a (deterministic) proof of convergence of SMD for the regression problem, which was not given in prior papers (implicit regularization was shown if convergence happens).\n", "This is a very interesting paper and it suggests a novel way to think of \"implicit regularization\". The power of this paper lies in its simplicity, and it's inspiring that such almost-easy arguments could be made to get so much insight. It suggests that minimizers of the Bregman divergence are an alternative characterization of the asymptotic end-points of \"Stochastic Mirror Descent\" (SMD) when it converges. So the choice of the strongly convex potential function in SMD is itself a regularizer! \n\nIt's a very timely paper given the increasing consensus that \"implicit regularization\" is what drives a lot of deep-learning heuristics. This paper at its technical core suggests a modified notion of Bregman-like divergence (equation 15) which on its own does not need a strongly convex potential. 
Then the paper goes on to show that there is an invariant along the iterations of SMD which involves a certain relationship (equation 18) between the usual Bregman divergence and their modified divergence. I am eager to see if such relationships can be shown to hold for more complicated iterative algorithms! \n\nBut there are a few points in the paper which are not clear and probably need more explanation, so let me list them here (and these are the issues that prevent me from giving this paper a very high rating despite my initial enthusiasm).\n\n1. \nCan the authors explain how the minimax optimality result of Theorem 6 (and Corollary 7) is related to the main result of the paper, which is probably Proposition 8 and 9? Is that minimax optimality a different insight separate from the main line of the arguments (which I believe is Proposition 8 and 9)? \n\n2.\nIs the gain in Proposition 9 over Proposition 8 all about using loss convexity to ensure that the SMD converges and w_\\infty exists? \n\n3. \nThe paper has highly insufficient comparisons to many other recent papers on the idea of \"implicit bias\", like https://arxiv.org/abs/1802.08246, https://arxiv.org/abs/1806.00468 and https://arxiv.org/abs/1710.10345. It seems pretty necessary that there be a section making a detailed comparison with these recent papers on similar themes. ", "Optimization algorithms such as stochastic gradient descent (SGD) and stochastic mirror descent (SMD) have found wide applications in training deep neural networks. In this paper the authors provide some theoretical studies to understand why SGD/SMD can produce a solution with good generalization performance when applied to highly parameterized models. The authors developed a fundamental identity for SGD with the least squares loss function, based on which the minimax optimality of SGD is established, meaning that SGD chooses the best estimator that safeguards against the worst-case disturbance. Implicit regularization of SGD is also established in the interpolating case, meaning that SGD iterates converge to the solution with minimal distance to the starting point in the set of models with no errors. Results are then extended to SMD with general loss functions.\n\nComments:\n\n(1) Several results are extended from existing literature. For example, Lemma 1 and Theorem 3 have analogues in (Hassibi et al., 1996). Proposition 8 is recently derived in (Gunasekar et al., 2018). Therefore, it seems that this paper has some incremental nature. I am not sure whether the contribution is sufficient enough.\n\n(2) The authors say that they show the convergence of SMD in Proposition 9, while (Gunasekar et al., 2018) does not. It seems that the convergence may not be surprising since the interpolating case is considered there.\n\n(3) Implicit regularization is only studied in the over-parameterized case. Is it possible to say something in the general setting with noises?\n\n(4) The discussion on the implicit regularization for the over-parameterized case is a bit intuitive and based on strong assumptions, e.g., the first iterate is close to the solution set. It would be more interesting to present a more rigorous analysis with relaxed assumptions." ]
[ -1, -1, 5, -1, -1, -1, 7, 5 ]
[ -1, -1, 3, -1, -1, -1, 4, 3 ]
[ "HyxHcqKsCX", "ryxsZIo507", "iclr_2019_HJf9ZhC9FX", "S1gpoI__hQ", "Skg_Iqwq37", "ryxWiLXCh7", "iclr_2019_HJf9ZhC9FX", "iclr_2019_HJf9ZhC9FX" ]
iclr_2019_HJfSEnRqKQ
Active Learning with Partial Feedback
While many active learning papers assume that the learner can simply ask for a label and receive it, real annotation often presents a mismatch between the form of a label (say, one among many classes), and the form of an annotation (typically yes/no binary feedback). To annotate example corpora for multiclass classification, we might need to ask multiple yes/no questions, exploiting a label hierarchy if one is available. To address this more realistic setting, we propose active learning with partial feedback (ALPF), where the learner must actively choose both which example to label and which binary question to ask. At each step, the learner selects an example, asking if it belongs to a chosen (possibly composite) class. Each answer eliminates some classes, leaving the learner with a partial label. The learner may then either ask more questions about the same example (until an exact label is uncovered) or move on immediately, leaving the first example partially labeled. Active learning with partial labels requires (i) a sampling strategy to choose (example, class) pairs, and (ii) learning from partial labels between rounds. Experiments on Tiny ImageNet demonstrate that our most effective method improves 26% (relative) in top-1 classification accuracy compared to i.i.d. baselines and standard active learners given 30% of the annotation budget that would be required (naively) to annotate the dataset. Moreover, ALPF-learners fully annotate TinyImageNet at 42% lower cost. Surprisingly, we observe that accounting for per-example annotation costs can alter the conventional wisdom that active learners should solicit labels for hard examples.
accepted-poster-papers
This paper is on active deep learning in the setting where a label hierarchy is available for multiclass classification problems: a fairly natural and pervasive scenario. The extension where the learner can ask for example labels as well as a series of questions to adequately descend the label hierarchy is an interesting twist on active learning. The paper is well written and develops several natural formulations which are then benchmarked on CIFAR10, CIFAR100, and Tiny ImageNet using a ResNet-18 architecture. The empirical results are carefully analyzed and appear to set interesting new baselines for active learning.
train
[ "rJg1hu9JCX", "rJl_6OcJCm", "H1eMytqJ0Q", "SkxYxO9kRX", "B1xh9IemTX", "BklDVW9F2X", "SkgbODq_3X" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for their thoughtful feedback and clear recommendation to accept. We were glad to see that you found the paper to be well-articulated and easy to read. \n\nPer your feedback, we will bring up the related work (currently in section 4) and cite it throughout as each prior technical idea is introduced. Regarding the related work on partial labels are you referring to the three papers we cite later on (Grandvalet & Bengio, 2004; Nguyen & Caruana, 2008; Cour et al., 2011) or others that we missed? Please let us know if you know of other related references and we’ll be happy to add any missing citations.\n\nWe agree that the choice of approaches in this paper is straightforward and meant to emphasize the importance of a novel problem setting as well as compelling experimental results. We also agree that a great next step for this work would be to establish theoretical guarantees for active learning with partial labels. ", "We thank the reviewer for their thoughtful feedback and were glad to see that you found our proposed setting to be both interesting and important. We would like to respond to your concerns briefly:\n\nFirst, concerning your questions:\n***Re the failure of vanilla active learning***\nSince theoretical analysis guaranteeing the performance of active + deep learning has yet to be established, it’s hard to say *why* vanilla uncertainty-sampling-based active learning doesn’t work so well when applied on image classification datasets with convolutional neural networks. However, we are not the first to find this. Take for example the results Active Learning for Convolutional Neural Networks: A Core Set Approach (https://arxiv.org/pdf/1708.00489.pdf), which was published at ICLR 2018, where uncertainty sampling and even the more recent deep Bayesian active learning by disagreement perform no better than random on CIFAR 10 and only marginally better for CIFAR 100. In contrast, vanilla AL strategies have demonstrated promise on a number of NLP tasks (e.g. https://arxiv.org/pdf/1808.05697.pdf).\n\n***Re the taxonomy of labels***\nWhile tree-structured taxonomies are especially convenient, our methods do not in principle depend specifically on tree structure, requiring only a list of composite labels. One can draw a parallel to general formulations of the game twenty questions where the available set of questions needn’t form tree. We thank the reviewer for the suggestion for future work and plan to evaluate our methods on with label ontologies like the MeSH labels (medical subject headings) used to annotate biomedical articles that do not form a strict tree hierarchy (some nodes have multiple parents). \n\nRegarding theoretical guarantees, we agree with the reviewer that establishing theoretical guarantees for active learning with partial labels is an especially exciting direction and plan to pursue future work in this direction. We note that generally there is a considerable gap between the theory of active learning and the practical methods established to cope with high dimensional data and modern classifiers and hope to close this gap in the future with rigorous analysis. \n", "Thanks for your feedback. We are glad that you appreciated the usefulness of the setup, the soundness of the experiments, and the insights of the results. We are also grateful for your thoughtful questions and recommendations.\n\n1) Yes, the t+1 is a mistake. Thanks for the catch! We will fix this in the camera ready version of the paper. 
\n\n2) A standard multi-class classifier cannot make use of the partially labeled data. The very purpose of these initial experiments was to establish as a sanity check that our setup for learning from partial labels with neural networks works in the first place (before adding the complexity of active learning). The point is to show that the model gains additional predictive performance compared to relying only on the subset of data that had been fully annotated. \n\n3) One key feature of ALPF is that a better algorithm identifies the correct label with a smaller number of (binary) questions. To compute the number of questions asked, we *record* the number of queries required to conclusively identify the label of every example. Note that this requires at least 1 question for each example, but may be much faster than the naive approach of drilling through the whole label hierarchy fresh for each example.\n\nOur experiments compare all three acquisition strategies with AQ (EIG, ERC, and EDC). The difference between AQ and ALPF is that AQ selects examples i.i.d., and chooses only which (possibly composite) label to query. By contrast, ALPF at each round selects both the example and the label, possibly moving on to a new example and leaving the previous example with a partial label.\n\nTwo observations are relevant to the reviewer’s question. On Tiny ImageNet, ERC ends up spending the first 60K (the first two batches after warm-up) questions on just 32K distinct examples while EDC ends up querying 51K distinct examples. As we can see in Figure 2 (and not surprisingly), ERC obtains more exactly labeled examples early on, while EDC has fewer remaining classes overall. The fact that EDC consistently outperforms ERC early on suggests that, given a very limited budget, it might be better to coarsely but strategically annotate a larger dataset than to focus on obtaining more granular labels. How precisely this translates into improved classification performance is an interesting question and warrants deeper theoretical inquiry.\n", "We would like to thank all three reviewers for their thoughtful and detailed reviews. Overall, we were glad to see a consensus to accept the paper, with the reviews emphasizing the importance and novelty of our proposed problem setting, and the strength of our experimental work. As we continue to improve the draft, we will incorporate the constructive feedback from each reviewer. Please find replies to each review below in the respective threads.", "The authors introduce a new Active Learning setting where instead of querying for a label for a particular example, the oracle offers a partial or weak label. This leads to a simpler and more natural way of retrieving this information that can be of use in many applications such as image classification. \n\nThe paper is well-written and very easy to follow. The authors first present an overview of the learning scenario and then suggest three sampling strategies based on existing AL insights (expected information gain, expected remaining classes, expected decrease in classes). \n\nAs the labels that the algorithm then has to use are partial, they make use of a standard algorithm to learn from partial labels -- namely, minimizing a partial log loss. It would be nice to properly reference related methods in the literature in Sec. 2.1.\n\nThe way of solving both the learning from partial labels and the sampling strategies is not particularly insightful. 
Also, there is a lack of theoretical guarantees to show the value of a partial label compared to the true label. However, as these are not the main points of the paper (introduction of a novel learning setting), I see these as minor concerns.\n\n", "This paper proposes active learning with partial feedback, which means that at each step the learner actively chooses both which example to label and which binary question to ask, and then learns the multi-class classifier with these partial labels. Three different sampling strategies are used during active learning. Experimental results demonstrate that the proposed ALPF strategy outperforms existing baselines in predictive accuracy under a limited budget.\n\nThis paper is well-written. The main ideas and claims are clearly expressed. ALPF combines active learning with learning from partial labels. This setting is interesting and important, especially when the number of categories is large and the categories share some hierarchical structure. The experimental results are promising. My main concern about this work is the lack of theoretical guarantees, which is usually important for an active learning paper. It would be better to provide some analysis of the efficiency of ALPF to further improve the quality of the paper.\nI have the following questions for the authors:\n+Why does the vanilla active learning strategy not work well? Which uncertainty measurement do you use here?\n+The performance of this work relies heavily on the taxonomy of labels, while in some cases the taxonomy of labels is not a tree but a graph, i.e., a label may belong to multiple hyper-labels. Can ALPF still work in these cases?\n", "The paper considers a multiclass classification problem in which labels are grouped in a given number M of subsets c_j, which contain all individual labels as singletons. Training takes place through an active learning setting in which all training examples x_i are initially provided without their ground truth labels y_i. The learner issues queries of the form (x_i,c_j) where c_j is one of the given subsets of labels. The annotator only replies yes/no according to whether the true label y_i of x_i belongs to c_j or not. Hence, for each training example the learner maintains a \"version space\" containing all labels that are consistent with the answers received so far for that example. The active learning process consists of the following steps: (1) use the current learning model to score queries (x_i,c_j); (2) query the best (x_i,c_j); (3) update the model.\nIn their experiments, the authors use a mini-batched version, where queries are issued and re-ranked several times before updating the model. Assuming the learner generates predictive models which map examples to probability distributions over the class labels, several uncertainty measures can be used to score queries: expected info gain, expected remaining classes, expected decrease in remaining classes. Experiments are run using the ResNet-18 neural network architecture over CIFAR10, CIFAR100, and Tiny ImageNet, with training sets of 50k, 50k, and 100k examples. The subsets c_j are computed using the WordNet hierarchy on the label names, resulting in 27, 261, and 304 subsets for the three datasets. The experiments show the advantage of performing adaptive queries as opposed to several baselines: random example selection with binary search over labels, active learning over the examples with binary search over the labels, and others. 
\n\nThis paper develops a natural learning strategy combining two known approaches: active learning and learning with partial labels. The main idea is to exploit adaptation in both choosing examples and queries. The experimental approach is sound and the results are informative. In general, a good experimental paper with a somewhat incremental conceptual contribution.\n\nIn (2) there is t+1 on the left-hand side and t on the right-hand side, as if it were an update. Is it a typo?\n\nIn 3.1, how is the standard multiclass classifier making use of the partially labeled examples during training?\n\nHow is the number of questions required to exactly label all training examples computed? Why does this number vary across the different methods?\n\nWhat specific partial feedback strategies are used by AQ for labeling examples?\n\nEDC seems to consistently outperform ERC for small annotation budgets. Any intuition why this happens?" ]
[ -1, -1, -1, -1, 7, 6, 7 ]
[ -1, -1, -1, -1, 4, 3, 4 ]
[ "B1xh9IemTX", "BklDVW9F2X", "SkgbODq_3X", "iclr_2019_HJfSEnRqKQ", "iclr_2019_HJfSEnRqKQ", "iclr_2019_HJfSEnRqKQ", "iclr_2019_HJfSEnRqKQ" ]