Dataset columns:

    paper_id            string     (lengths 19–21)
    paper_title         string     (lengths 8–170)
    paper_abstract      string     (lengths 8–5.01k)
    paper_acceptance    string     (18 classes)
    meta_review         string     (lengths 29–10k)
    label               string     (3 classes)
    review_ids          sequence
    review_writers      sequence
    review_contents     sequence
    review_ratings      sequence
    review_confidences  sequence
    review_reply_tos    sequence
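The schema above describes one record per paper, with the six review_* columns holding parallel sequences (one entry per review, author response, or public comment). A minimal sketch of loading and iterating records with this layout, assuming the Hugging Face `datasets` library; the dataset identifier used below is a placeholder, not the real hub path.

```python
from datasets import load_dataset

# "your-org/iclr-peer-reviews" is a placeholder identifier, not the real hub path;
# the split name "train" is assumed from the `label` values seen in the records.
ds = load_dataset("your-org/iclr-peer-reviews", split="train")

for record in ds:
    print(record["paper_id"], record["paper_acceptance"])
    # The review_* columns are parallel sequences: entry i of each column
    # describes the same review or comment.
    for writer, rating, reply_to in zip(
        record["review_writers"],
        record["review_ratings"],
        record["review_reply_tos"],
    ):
        # A rating of -1 marks author responses and public comments, which carry no score.
        print(f"  {writer}: rating={rating}, replies to {reply_to}")
```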
iclr_2018_ryQu7f-RZ
On the Convergence of Adam and Beyond
Several recently proposed stochastic optimization methods that have been successfully used in training deep networks such as RMSProp, Adam, Adadelta, Nadam are based on using gradient updates scaled by square roots of exponential moving averages of squared past gradients. In many applications, e.g. learning with large output spaces, it has been empirically observed that these algorithms fail to converge to an optimal solution (or a critical point in nonconvex settings). We show that one cause for such failures is the exponential moving average used in the algorithms. We provide an explicit example of a simple convex optimization setting where Adam does not converge to the optimal solution, and describe the precise problems with the previous analysis of Adam algorithm. Our analysis suggests that the convergence issues can be fixed by endowing such algorithms with ``long-term memory'' of past gradients, and propose new variants of the Adam algorithm which not only fix the convergence issues but often also lead to improved empirical performance.
accepted-oral-papers
This paper analyzes a problem with the convergence of Adam, and presents a solution. It identifies an error in the convergence proof of Adam (which also applies to related methods such as RMSProp) and gives a simple example where it fails to converge. The paper then repairs the algorithm in a way that guarantees convergence without introducing much computational or memory overhead. There ought to be a lot of interest in this paper: Adam is a widely used algorithm, but sometimes underperforms SGD on certain problems, and this could be part of the explanation. The fix is both principled and practical. Overall, this is a strong paper, and I recommend acceptance.
test
[ "HkhdRaVlG", "H15qgiFgf", "Hyl2iJgGG", "BJQcTsbzf", "HJXG6sWzG", "H16UnjZMM", "ryA-no-zz", "HJTujoWGG", "ByhZijZfG", "SkjC2Ni-z", "SJXpTMFbf", "rkBQ_QuWf", "Sy5rDQu-z", "SJRh-9lef", "Bye7sLhkM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "public", "public", "public", "public", "author", "public" ]
[ "The paper presents three contributions: 1) it shows that the proof of convergence Adam is wrong; 2) it presents adversarial and stochastic examples on which Adam converges to the worst possible solution (i.e. there is no hope to just fix Adam's proof); 3) it proposes a variant of Adam called AMSGrad that fixes the problems in the original proof and seems to have good empirical properties.\n\nThe contribution of this paper is very relevant to ICLR and, as far as I know, novel.\nThe result is clearly very important for the deep learning community.\nI also checked most of the proofs and they look correct to me: The arguments are quite standard, even if the proofs are very long.\n\nOne note on the generality of the results: the papers states that some of the results could apply to RMSProp too. However, it has been proved that RMSProp with a certain settings of its parameters is nothing else than AdaGrad (see Section 4 in Mukkamala and Hein, ICML'17). Hence, at least for a certain setting of its parameters, RMSProp will converge. Of course, the proof in the ICML paper could be wrong, I did not check that...\n\nA general note on the learning rate: The fact that most of these algorithms are used with a fixed learning rate while the analysis assume a decaying learning rate should hint to the fact that we are not using the right analysis. Indeed, all these variants of AdaGrad did not really improve the AdaGrad's regret bound. In this view, none of these algorithms contributed in any meaningful way to our understanding of the optimization of deep networks *nor* they advanced in any way the state-of-the-art for optimizing convex Lipschitz functions.\nOn the other hand, analysis of SGD-like algorithms with constant step sizes are known. See, for example, Zhang, ICML'04 where linear convergence is proved in a neighbourhood of the optimal solution for strongly convex problems.\nSo, even if I understand this is not the main objective of this paper, it would be nice to see a discussion on this point and the limitations of regret analysis to analyse SGD algorithms.\n\nOverall, I strongly suggest to accept this paper.\n\n\nSuggestions/minor things:\n- To facilitate the reader, I would state from the beginning what are the common settings of beta_1 and beta_2 in Adam. This makes easier to see that, for example, the condition of Theorem 2 is verified.\n- \\hat{v}_{0} is undefined in Algorithm 2.\n- The graphs in figure 2 would gain in readability if the setting of each one of them would be added as their titles.\n- McMahan and Streeter (2010) is missing the title. (Also, kudos for citing both the independent works on AdaGrad)\n- page 11, last equation, 2C-4=2C-4. Same on page 13.\n- Lemma 4 contains x_1,x_2,z_1, and z_2: are x_1 and z_1 the same? also x_2 and z_2?", "This work identifies a mistake in the existing proof of convergence of\nAdam, which is among the most popular optimization methods in deep\nlearning. Moreover, it gives a simple 1-dimensional counterexample with\nlinear losses on which Adam does not converge. The same issue also\naffects RMSprop, which may be viewed as a special case of Adam without\nmomentum. The problem with Adam is that the \"learning rate\" matrices\nV_t^{1/2}/alpha_t are not monotonically decreasing. A new method, called\nAMSGrad is therefore proposed, which modifies Adam by forcing these\nmatrices to be decreasing. It is then shown that AMSGrad does satisfy\nessentially the same convergence bound as the one previously claimed for\nAdam. 
Experiments and simulations are provided that support the\ntheoretical analysis.\n\nApart from some issues with the technical presentation (see below), the\npaper is well-written.\n\nGiven the popularity of Adam, I consider this paper to make a very\ninteresting observation. I further believe all issues with the technical\npresentation can be readily addressed.\n\n\n\nIssues with Technical Presentation:\n\n- All theorems should explicitly state the conditions they require\n instead of referring to \"all the conditions in (Kingma & Ba, 2015)\".\n- Theorem 2 is a repetition of Theorem 1 (except for additional\n conditions).\n- The proof of Theorem 3 assumes there are no projections, so this\n should be stated as part of its conditions. (The claim in footnote 2\n that they can be handled seems highly plausible, but you should be up\n front about the limitations of your results.)\n- The regret bound Theorem 4 establishes convergence of the optimization\n method, so it plays the role of a sanity check. However, it is\n strictly worse than the regret bound O(sqrt{T}) for online gradient\n descent [Zinkevich,2003], so it cannot explain why the proposed\n AMSgrad method might be adaptive. (The method may indeed be adaptive\n in some sense; I am just saying the *bound* does not express that.\n This is also not a criticism of the current paper; the same remark\n also applies to the previously claimed regret bound for Adam.)\n- The discussion following Corollary 1 suggests that sum_i\n hat{v}_{T,i}^{1/2} might be much smaller than d G_infty. This is true,\n but we should always expect it to be at least a constant, because\n hat{v}_{t,i} is monotonically increasing by definition of the\n algorithm, so the bound does not get better than O(sqrt(T)).\n It is also suggested that sum_i ||g_{1:T,i}|| = sqrt{sum_{t=1}^T\n g_{t,i}^2} might be much smaller than dG_infty, but this is very\n unlikely, because this term will typically grow like O(sqrt{T}),\n unless the data are extremely sparse, so we should at least expect\n some dependence on T.\n- In the proof of Theorem 1, the initial point is taken to be x_1 = 1,\n which is perfectly fine, but it is not \"without loss of generality\",\n as claimed. This should be stated in the statement of the Theorem.\n- The proof of Theorem 6 in appendix B only covers epsilon=1. If it is\n \"easy to show\" that the same construction also works for other\n epsilon, as claimed, then please provide the proof for general\n epsilon.\n\n\nOther remarks:\n\n- Theoretically, nonconvergence of Adam seems a severe problem. Can you\n speculate on why this issue has not prevented its widespread adoption?\n Which factors might mitigate the issue in practice?\n- Please define g_t \\circ g_t and g_{1:T,i}\n- I would recommend sticking with standard linear algebra notation for\n the sqrt and the inverse of a matrix and simply using A^{-1} and\n A^{1/2} instead of 1/A and sqrt{A}.\n- In theorems 1,2,3, I would recommend stating the dimension (d=1) of\n your counterexamples, which makes them very nice!\n\nMinor issues:\n\n- Check accent on Nicol\\`o Cesa-Bianchi in bibliography.\n- Near the end of the proof of Theorem 6: I believe you mean Adam\n suffers a \"regret\" instead of a \"loss\" of at least 2C-4.\n Also 2C-4=2C-4 is trivial in the second but last display.\n", "This paper examines the very popular and useful ADAM optimization algorithm, and locates a mistake in its proof of convergence (for convex problems). 
Not only that, the authors also show a specific toy convex problem on which ADAM fails to converge. Once the problem was identified to be the decrease in v_t (and increase in learning rate), they modified the algorithm to solve that problem. They then show the modified algorithm does indeed converge and show some experimental results comparing it to ADAM.\n\nThe paper is well written, interesting and very important given the popularity of ADAM. \n\nRemarks:\n- The fact that your algorithm cannot increase the learning rate seems like a possible problem in practice. A large gradient at the first steps due to bad initialization can slow the rest of training. The experimental part is limited, as you state \"preliminary\", which is a unfortunate for a work with possibly an important practical implication. Considering how easy it is to run experiments with standard networks using open-source software, this can easily improve the paper. That being said, I understand that the focus of this work is theoretical and well deserves to be accepted based on the theoretical work.\n\n- On page 14 the fourth inequality not is clear to me.\n\n- On page 6 you talk about an alternative algorithm using smoothed gradients which you do not mention anywhere else and this isn't that clear (more then one way to smooth). A simple pseudo-code in the appendix would be welcome.\n\nMinor remarks:\n- After the proof of theorem 1 you jump to the proof of theorem 6 (which isn't in the paper) and then continue with theorem 2. It is a bit confusing.\n- Page 16 at the bottom v_t= ... sum beta^{t-1-i}g_i should be g_i^2\n- Page 19 second line, you switch between j&t and it is confusing. Better notation would help.\n- The cifarnet uses LRN layer that isn't used anymore.", "We thank the reviewer for very helpful and constructive feedback. \n\nAbout Mukkamala and Hein 2017 [MH17]: Thanks for pointing this paper. As the anonymous reviewer rightly points out, the [MH17] does not look at the standard version of RMSProp but rather a modification and thus, there is no contradiction with our paper. We will make this point clear in the final version of the paper.\n\nRegarding note about learning rate: While it is true that none of these new rates improve upon Adagrad rates, in fact, in the worst case one cannot improve the regret of standard online gradient descent in general convex setting. Adagrad improves this in the special case of sparse gradients (see for instance, Section 1.3 of Duchi et al. 2011). However, these algorithms, which are designed for specific convex settings, appear to perform reasonably well in the nonconvex settings too (especially in deep networks). Exponential moving average (EMA) variants seem to further improve the performance in the (dense) nonconvex setting. Understanding the cause for good performance in nonconvex settings is an interesting open problem. Our aim was to take an initial step to develop more principled EMA approaches. We will add a description in the final version of the paper.\n\nLemma 4: Thanks for pointing it out and sorry for the confusion. Indeed, x1 = z1 and x2 = z2. We have corrected this typo.\n\nWe have also revised the paper to address the minor typos mentioned in the review.", "We deeply appreciate the reviewer for a thorough and constructive feedback. 
\n\n- Theorem 2 & 3 are much more involved and hence the aim of Theorem 1 was to provide a simplified counter-example for a restrictive setting, thereby providing the key ideas of the paper.\n- We will emphasize your point about projections in the final version of the paper.\n- We agree that the role of Theorem 4 right now is to provide a sanity check. Indeed, it is not possible to improve upon the of online gradient descent in the worst case convex settings. Algorithms such as Adagrad exploit structure in the problem such as sparsity to provide improved regret bounds. Theorem 4 provides some adaptivity to sparsity of gradients (but note that these are upper bounds and it is not clear if they are tight). Adaptive methods seem to perform well in few non-sparse and nonconvex settings too. It remains open to understand it in the nonconvex settings of our interest. \n- Indeed, there is a typo; we expect ||g{1:T,i}|| to grow like sqrt(T). The main benefit in adaptive methods comes in terms of sparsity (and dimension dependence). For example see Section 1.3 in Duchi et al. 2011). We have revised the paper to incorporate these changes.\n- We can indeed assume that x_1 = 1 (without loss of generality) because for any choice of initial point, we can always translate the function so that x_1 = 1 is the initial point in the new coordinate system. We will add a discussion about this in the final version of the paper.\n- The last part of Theorem 6 explains the reduction with respect to general epsilon. We will further highlight this in the final version of the paper.\n\nOther remarks:\n\nRegarding widespread adoption of Adam: It is possible that in certain applications the issues we raised in this work are not that severe (although they can still lead to degradation in generalization performance). On the contrary, there exist a large number of real-world applications, for instance training models with large output spaces, which suffer from the issues we have highlighted and non-convergence has been observed to occur more frequently. Often, this non-convergence is attributed to nonconvexity but our paper shows one of the causes that applies even to convex settings. \nAs stated in the paper, using a problem specific large beta2 seems to help in some applications. Researchers have developed many tricks (such as gradient clipping) which might also play a role in mitigating these issues. We propose two different approaches to fix this issue and it will be interesting to investigate these approaches in various applications.\n\nWe have addressed all other minor concerns directly in the revision of the paper.\n", "Thanks David, for your interest in this paper and helpful comments (and pointers). We have addressed your concerns regarding typos in the latest revision of the paper.\n", "Thanks for your interest in our paper and for your feedback. We believe that beta1 is not an issue for convergence of Adam (although our theoretical analysis assumes a decreasing beta1). For example, in stochastic convex optimization, momentum based methods have been shown to converge even for constant beta1. That said, it is indeed interesting to develop better understanding of the effect of momentum in convergence of these algorithms (especially in the nonconvex setting).\n\nAs the paper shows, for any constant beta2, there exists a counter-example for non-convergence of Adam (both in online as well as stochastic setting, Theorem 2 & Theorem 3). 
Using a large beta2 can partially mitigate this issue in practice but it is not clear how high beta2 should be and it is indeed an interesting research question. Our paper proposes a couple of approaches (AMSGrad & AdamNC) for addressing these issues. AMSGrad allows us to use a fixed beta2 by changing the structure of the algorithm (and also allows us to use a much slow decaying learning rate than Adagrad). AdamNC looks at an approach where beta2 changes with t, ultimately converging to 1, hopefully allowing us to retain the benefits of Adam but at the same time circumventing its non-convergence.\n\nThe aim of synthetic experiments was to demonstrate the effect of non-convergence. We can modify it to demonstrate similar problem for any constant beta2.\n", "(1) Thanks for the interest in our paper and looking into the analysis carefully. We believe there is a misunderstanding regarding the proof. The third inequality follows from the lower bound v_{t+i-1} \\ge (1-\\beta)\\beta^{i-1}_2 C^2. The fourth inequality actually follows from the upper bound on v_{t+i-1} (which implicitly uses \\beta^{i'-1}_2 C^2 \\le 1). We revised the paper to provide the detailed derivation, including specifying precise constants that were previously omitted.\n\n(2) Actually, an easy observation from our analysis is that we can bound the regret of AMSGrad by O(G_infty sqrt(T)) as well. This can be easily seen from the proof of Lemma 2 where in the analysis the term \\sum_{t=1}^T |g_{t,i}/\\sqrt{t} can be bounded by O(G_infty sqrt(T)) instead of O(\\sqrt{\\log(T) ||g_{1:T}||_2). Thus, the regret of AMSGrad is upper bounded by minimum of O(G_infty sqrt(T)) and the bound presented in Theorem 4, and thus the worst case dependence on T is \\sqrt{T} rather than \\sqrt{T \\log(T)}. We will make this point in the final version of the paper.", "We thank the reviewer for the helpful and supportive feedback. The focus of the paper is to provide a principled understanding for the exponential moving average (EMA) adaptive optimization methods, which are now used as building blocks of many modern deep learning applications. The counter-example for non-convergence we show is very natural and is observed to arise in extremely sparse real-world problems (e.g., pertaining to problems with large output spaces). We provided two general directions to address the convergence issues in these algorithms (by either changing the structure of the algorithm or by gradually increasing beta2 as algorithm proceeds). We have provided preliminary experiments on a few commonly used networks & datasets but we do agree that a thorough empirical study will be very useful and is part of our future plan. \n\n- Fourth inequality on Page 14: We revised the paper to explain it further.\n- We will be happy to elaborate our comment about smoothed gradients in the final version of the paper.\n- We also addressed other minor suggestions.\n", "Dear authors,\n\nIt's a very good paper, but I have some questions as follows:\n\n(1) In the last paragraph on Page 14, it says the fourth inequality is from $\\beta^{i'-1}_2 C^2 \\le 1$, but I couldn't go through from the third inequality to the fourth inequality on Page 14. It seems that you applied the lower bound of $v_{t+i-1}$ (i.e. $v_{t+i-1} \\ge (1-\\beta)\\beta^{i-1}_2 C^2$ which is not desired) instead of its upper bound (which is truly required)? 
\n\n(2) In Corollary 1, from my understanding, the L2 norm of $g_{1:T,1}$ should be upper bounded by $\\sqrt(T)G_{\\inf}$, so the regret be $O(\\sqrt(T \\logT))$ instead of $O(\\sqrt(T))$ as stated in the remark of Corollary 1. \n\nCorrect me if I'm wrong. Thanks!", "Thanks for the inspiring paper. The observations are interesting and important!\n\nIt is easy to capture that exponential moving average might not able to capture the long-term memory of the gradients. \n\nThe paper is mainly focused on the beta2 that involving the averaging of second moment. It makes me wonder whether the beta1 on the averaging of the first moment gradient also suffer the similar problem. \n\nIt seems a direct solution would be using a large beta1 and beta2. (Always keep the maximum of the entire history seems is not the best solution and an average over a recent history might be a better alternative.) \n\nI did not carefully check the detail of the paper. But generally, one would have a similar concern I think. Could you explain the benefits of the proposed algorithm? \n\nThe synthetic experiments seem to use a relatively insufficient large beta2 regarding the large gradient gap, which makes it not able to capture the necessary long-term dependency. ", "The RMSProp used in Section 4 in Mukkamala and Hein, ICML'17 is not the standard RMSProp but a modification in which the parameter used for computing the geometrical averages of the gradient entries squared changes with time. So there is no contradiction with this paper, that shows counterexamples for the standard algorithm in which that parameter is constant. \n", "Congratulations for this paper, I really enjoyed it. It is a well written paper that contains an exhaustive set of counterexamples. I had also noticed that the proof of Adam was wrong and included it in my Master Thesis (https://damaru2.github.io/convergence_analysis_hypergradient_descent/dissertation_hypergradients.pdf Section 2.4) and I enjoyed reading through the paper and finding that indeed it was not just that the proof was wrong but that the method does not converge in general, not even in the stochastic case.\n\nI noticed some typos / minor things that seem that need to be fixed:\n\n+ In the penultimate line of page 16 there is this equality v_{t-1} = .... g_i. This g_i should be squared.\n\n+ In the following line, there is another square missing in a C, it should be (1-\\beta_{t-1}_2)(C^2 p + (1-p)) and there is a pair of parenthesis missing in the next term, it should be (1-\\beta_2^{t-1})((1+\\delta)C-\\delta)\n\n+ The fact that in Theorems 2 and 3 \\beta_2 is allowed to be 1 is confusing, since the method is not well defined if \\beta_2 is 1 (and you don't use an \\epsilon in the denominator. If you use an \\epsilon then with \\beta_1 = 0 the method is equivalent to SGD so it converges for a choice of alpha). In particular, in the proof of theorem 3 \\sqrt{1-\\beta_2} appears in some denominators and so does \\sqrt{\\beta_2} but there is no comment about what happens when this quantities are 0. There should be a quick comment on this or the \\beta_2 \\leq 1 should be removed from the theorems.\n\nBest wishes\n", "We thank you for your interest in our paper and for pointing out this missing detail. We use a decrease step size of alpha/sqrt(t) (as suggested by our theoretical analysis) for the stochastic optimization experiment. The use of decreasing step size leads to a more stable convergence to the optimal solution (especially in scenarios where the variance is reasonably high). 
We did not use epsilon in this particular experiment since the gradients are reasonably large (in other words, using a small epsilon like 1e-8 should produce more or less identical results). We will add these details in the next revision of our paper.", "Hello,\n\nI tried implementing AMSGrad (here: https://colab.research.google.com/notebook#fileId=1xXFAuHM2Ae-OmF5M8Cn9ypGCa_HHBgfG) for the experiment on the stochastic optimization setting and obtain that x_t approaches -1 faster that on the paper but convergence seems less stable, so I was wondering about the specific values for other hyperparameters like the learning rate and epsilon which weren't mentioned, in my case I chose a learning of 1e-3 and an epsilon of 1e-8 which seems to be the standard value on most frameworks." ]
[ 9, 8, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 5, 4, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_ryQu7f-RZ", "iclr_2018_ryQu7f-RZ", "iclr_2018_ryQu7f-RZ", "HkhdRaVlG", "H15qgiFgf", "Sy5rDQu-z", "SJXpTMFbf", "SkjC2Ni-z", "Hyl2iJgGG", "iclr_2018_ryQu7f-RZ", "iclr_2018_ryQu7f-RZ", "HkhdRaVlG", "iclr_2018_ryQu7f-RZ", "Bye7sLhkM", "iclr_2018_ryQu7f-RZ" ]
iclr_2018_BJ8vJebC-
Synthetic and Natural Noise Both Break Neural Machine Translation
Character-based neural machine translation (NMT) models alleviate out-of-vocabulary issues, learn morphology, and move us closer to completely end-to-end translation systems. Unfortunately, they are also very brittle and easily falter when presented with noisy data. In this paper, we confront NMT models with synthetic and natural sources of noise. We find that state-of-the-art models fail to translate even moderately noisy texts that humans have no trouble comprehending. We explore two approaches to increase model robustness: structure-invariant word representations and robust training on noisy texts. We find that a model based on a character convolutional neural network is able to simultaneously learn representations robust to multiple kinds of noise.
accepted-oral-papers
The pros and cons of this paper cited by the reviewers can be summarized below: Pros: * The paper is a first attempt to investigate an under-studied area in neural MT (and potentially other applications of sequence-to-sequence models as well) * This area might have a large impact; existing models such as Google Translate fail badly on the inputs described here * Experiments are very carefully designed and thorough * Experiments on not only synthetic but also natural noise add significant reliability to the results * Paper is well-written and easy to follow Cons: * There may be better architectures for this problem than the ones proposed here * Even the natural noise is not entirely natural, e.g. artificially constrained to exist within words * Paper is not a perfect fit to ICLR (although ICLR is attempting to cast a wide net, so this alone is not a critical criticism of the paper) This paper had uniformly positive reviews and has potential for large real-world impact.
train
[ "SJoXiUUNM", "SkABkz5gM", "BkQzs54VG", "BkVD7bqlf", "SkeZfu2xG", "SyTfeD5bz", "B1dT1vqWf", "HJ1vJDcZz", "HyRwAIqWf", "rJIbAd7-z" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public" ]
[ "Thanks for your thoughtful response to my review.", "This paper investigates the impact of character-level noise on various flavours of neural machine translation. It tests 4 different NMT systems with varying degrees and types of character awareness, including a novel meanChar system that uses averaged unigram character embeddings as word representations on the source side. The authors test these systems under a variety of noise conditions, including synthetic scrambling and keyboard replacements, as well as natural (human-made) errors found in other corpora and transplanted to the training and/or testing bitext via replacement tables. They show that all NMT systems, whether BPE or character-based, degrade drastically in quality in the presence of both synthetic and natural noise, and that it is possible to train a system to be resistant to these types of noise by including them in the training data. Unfortunately, they are not able to show any types of synthetic noise helping address natural noise. However, they are able to show that a system trained on a mixture of error types is able to perform adequately on all types of noise.\n\nThis is a thorough exploration of a mostly under-studied problem. The paper is well-written and easy to follow. The authors do a good job of positioning their study with respect to related work on black-box adversarial techniques, but overall, by working on the topic of noisy input data at all, they are guaranteed novelty. The inclusion of so many character-based systems is very nice, but it is the inclusion of natural sources of noise that really makes the paper work. Their transplanting of errors from other corpora is a good solution to the problem, and one likely to be built upon by others. In terms of negatives, it feels like this work is just starting to scratch the surface of noise in NMT. The proposed meanChar architecture doesn’t look like a particularly good approach to producing noise-resistant translation systems, and the alternative solution of training on data where noise has been introduced through replacement tables isn’t extremely satisfying. Furthermore, the use of these replacement tables means that even when the noise is natural, it’s still kind of artificial. Finally, this paper doesn’t seem to be a perfect fit for ICLR, as it is mostly experimental with few technical contributions that are likely to be impactful; it feels like it might be more at home and have greater impact in a *ACL conference.\n\nRegarding the artificialness of their natural noise - obviously the only solution here is to find genuinely noisy parallel data, but even granting that such a resource does not yet exist, what is described here feels unnaturally artificial. First of all, errors learned from the noisy data sources are constrained to exist within a word. This tilts the comparison in favour of architectures that retain word boundaries (such as the charCNN system here), while those systems may struggle with other sources of errors such as missing spaces between words. Second, if I understand correctly, once an error is learned from the noisy data, it is applied uniformly and consistently throughout the training and/or test data. This seems worse than estimating the frequency of the error and applying them stochastically (or trying to learn when an error is likely to occur). 
I feel like these issues should at least be mentioned in the paper, so it is clear to the reader that there is work left to be done in evaluating the system on truly natural noise.\n\nAlso, it is somewhat jarring that only the charCNN approach is included in the experiments with noisy training data (Table 6). I realize that this is likely due to computational or time constraints, but it is worth providing some explanation in the text for why the experiments were conducted in this manner. On a related note, the line in the abstract stating that “... a character convolutional neural network is able to simultaneously learn representations robust to multiple kinds of noise” implies that the other (non-charCNN) architectures could not learn these representations, when in reality, they simply weren’t given the chance.\n\nSection 7.2 on the richness of natural noise is extremely interesting, but maybe less so to an ICLR audience. From my perspective, it would be interesting to see that section expanded, or used as the basis for future work on improve architectures or training strategies.\n\nI have only one small, specific suggestion: at the end of Section 3, consider deleting the last paragraph break, so there is one paragraph for each system (charCNN currently has two paragraphs).\n\n[edited for typos]", "The CFP clearly states that \"applications in vision, audio, speech, natural language processing, robotics, neuroscience, or any other field\" are relevant.", "This paper empirically investigates the performance of character-level NMT systems in the face of character-level noise, both synthesized and natural. The results are not surprising:\n\n* NMT is terrible with noise.\n\n* But it improves on each noise type when it is trained on that noise type.\n\nWhat I like about this paper is that:\n\n1) The experiments are very carefully designed and thorough.\n\n2) This problem might actually matter. Out of curiosity, I ran the example (Table 4) through Google Translate, and the result was gibberish. But as the paper shows, it’s easy to make NMT robust to this kind of noise, and Google (and other NMT providers) could do this tomorrow. So this paper could have real-world impact.\n\n3) Most importantly, it shows that NMT’s handling of natural noise does *not* improve when trained with synthetic noise; that is, the character of natural noise is very different. So solving the problem of natural noise is not so simple… it’s a *real* problem. Speculating, again: commercial MT providers have access to exactly the kind of natural spelling correction data that the researchers use in this paper, but at much larger scale. So these methods could be applied in the real world. (It would be excellent if an outcome of this paper was that commercial MT providers answered it’s call to provide more realistic noise by actually providing examples.)\n\nThere are no fancy new methods or state-of-the-art numbers in this paper. But it’s careful, curiosity-driven empirical research of the type that matters, and it should be in ICLR.", "This paper investigates the impact of noisy input on Machine Translation, and tests simple ways to make NMT models more robust.\n\nOverall the paper is a clearly written, well described report of several experiments. It shows convincingly that standard NMT models completely break down on both natural \"noise\" and various types of input perturbations. It then tests how the addition of noise in the input helps robustify the charCNN model somewhat. 
The extent of the experiments is quite impressive: three different NMT models are tried, and one is used in extensive experiments with various noise combinations.\n\nThis study clearly addresses an important issue in NMT and will be of interest to many in the NLP community. The outcome is not entirely surprising (noise hurts and training and the right kind of noise helps) but the impact may be. I wonder if you could put this in the context of \"training with input noise\", which has been studied in Neural Network for a while (at least since the 1990s). I.e. it could be that each type of noise has a different regularizing effect, and clarifying what these regularizers are may help understand the impact of the various types of noise. Also, the bit of analysis in Sections 6.1 and 7.1 is promising, if maybe not so conclusive yet.\n\nA few constructive criticisms:\n\nThe way noise is included in training (sec. 6.2) could be clarified (unless I missed it) e.g. are you generating a fixed \"noisy\" training set and adding that to clean data? Or introducing noise \"on-line\" as part of the training? If fixed, what sizes were tried? More information on the experimental design would help.\n\nTable 6 is highly suspect: Some numbers seem to have been copy-pasted in the wrong cells, eg. the \"Rand\" line for German, or the Swap/Mid/Rand lines for Czech. It's highly unlikely that training on noisy Swap data would yield a boost of +18 BLEU points on Czech -- or you have clearly found a magical way to improve performance.\n\nAlthough the amount of experiment is already important, it may be interesting to check whether all se2seq models react similarly to training with noise: it could be that some architecture are easier/harder to robustify in this basic way.\n\n[Response read -- thanks]\nI agree with authors that this paper is suitable for ICLR, although it will clearly be of interest to ACL/MT-minded folks.", "1. We believe that the topic on noise in NMT is of interest to the ICLR audience. Please see our response to reviewer 1 for a detailed explanation. \n\n2. We find that both solutions we offered are effective to a reasonable extent. meanChar works fairly well on scrambling types of noise, but fails on other noise, as expected. Adversarial training with noise works well as long as train/test noise types are matched, so it’s a useful practical technique that can be applied in NMT systems, as pointed out by reviewer 1. \n", "Thank you for the useful feedback. \n\n1. We agree that the topic has real-world impact for MT providers and will emphasize this in the conclusions. \n\n2. We would love to see MT providers use noisy data and we agree that the community would benefit from access to more noisy examples. \n", "Thank you for the useful feedback. We agree that noisy input in neural machine translation is an under-studied problem. \n\nResponses to specific comments:\n1. We agree that our work only starts to scratch the surface of noise in NMT and believe there’s much more to be done in this area. 
We do believe that it’s important to initiate a discussion of this issue in the ICLR community, for several reasons: (a) we study word and character representations for NMT, which is in line with the ICLR representation learning theme; (b) ICLR audience is very interested in neural machine translation and seminal work on NMT has been published in ICLR (e.g., Bahdanau et al.’s 2015 paper on attention in NMT); (c) ICLR audience is very interested in noise and adversarial examples, as evidenced by the plethora of recent papers on the topic. As reviewer 1 says, even though there are no fancy new methods in the paper, we believe that this kind of research belongs in ICLR.\n\n2. We agree that meanChar may not be the ideal architecture for capturing noise, but it’s a simple, structure-invariant representation that works reasonably well. We have tried several other architectures, including a self-attention mechanism, but haven’t been able to improve beyond it. We welcome more suggestions and can include those negative results in new drafts of the paper.\n\n3. Training with noise has its limitations, but it’s an effective method that can be employed by NMT providers and researchers easily and impactfully, as pointed out by reviewer 1. \n\n4. In this work, we focus on word-level noise. Certainly, sentence-level noise is also important to learn, and we’d like to see more work on this. We’ll add this as another direction for future work. Note that while charCNN may have some advantage in dealing with word-level noise, it too suffers from increasing amounts of noise, similar to the other models we studied.\n\n5. Applying noise stochastically based on frequency in available corpora is an interesting suggestion, that can be done for the natural noise, but not so clear how to apply for synthetic noise. We did experiment with increasing amounts of noise (Figure 1), but we agree there’s more to be done. We’ll add this as another future work. \n\n6. (To both reviewer 2 and 3) Regarding training other seq2seq models with noise: Our original intent was to test the robustness of pre-trained state-of-the-art models, but we also considered retraining them in this noisy paradigm. There are a number of design decisions that are involved here (e.g. should the BPE dictionary be built on the noisy texts and how should thresholds be varied?). That being said, we can investigate training using published parameter values, but worry these may be wholly inappropriate settings for the new noisy data.\n\n7. We’ll modify the abstract to not give the wrong impression regarding what other architectures can learn. \n\n8. We included section 7.2 to demonstrate why synthetic noise is not very helpful in dealing with natural noise, as well as to motivate the development of better architectures. \n\n9. We’ll correct the other small issues pointed to. \n", "Thank you for the constructive feedback. \n1. Noise setup: when training with noise, we replace the original training set with a new, noisy training set. The noisy training set has exactly the same number of sentences and words as the training set, but noise is introduced according to the description in Section 4. Therefore, we have one fixed noisy training set per each noise type. We’ll clarify the experimental design in the paper. \n\n2. We had not thought to explore the relationship between the noise we are introducing as a corruption of the input and the training under noise paradigm you referenced. We might be mistaken, but normally, the corruption (e.g. 
Bishop 95) is in the form of small additive gaussian noise. It isn’t immediately clear to us whether discrete perturbation of the input like we have here is equivalent, but would love suggestions on analyses we might do to investigate this insight further.\n\n3. Some cells in the mentioned rows in Table 6 were indeed copied from the French rows by error. We corrected the numbers and they are in line with the overall trends. Thank you for pointing this out. The corrected Czech numbers are in the 20s and the best performing system is the Rand+Key+Real setting.\n\n4. (To both reviewer 2 and 3) Regarding training other seq2seq models with noise: Our original intent was to test the robustness of pre-trained state-of-the-art models, but we also considered retraining them in this noisy paradigm. There are a number of design decisions that are involved here (e.g. should the BPE dictionary be built on the noisy texts and how should thresholds be varied?). That being said, we can investigate training using published parameter values, but worry these may be wholly inappropriate settings for the new noisy data.", "The paper points out the lack of robustness of character based models and explores a few, very basic solutions, none of which are effective. While starting a discussion around this problem is valuable, the paper provides no actually working solutions, and the solutions explored are very basic from a machine learning point of view. This publication is better suited to a traditional NLP venue such as ACL/EMNLP." ]
[ -1, 7, -1, 7, 8, -1, -1, -1, -1, -1 ]
[ -1, 4, -1, 4, 4, -1, -1, -1, -1, -1 ]
[ "HJ1vJDcZz", "iclr_2018_BJ8vJebC-", "rJIbAd7-z", "iclr_2018_BJ8vJebC-", "iclr_2018_BJ8vJebC-", "rJIbAd7-z", "BkVD7bqlf", "SkABkz5gM", "SkeZfu2xG", "iclr_2018_BJ8vJebC-" ]
iclr_2018_Hk2aImxAb
Multi-Scale Dense Networks for Resource Efficient Image Classification
In this paper we investigate image classification with computational resource limits at test time. Two such settings are: 1. anytime classification, where the network’s prediction for a test example is progressively updated, facilitating the output of a prediction at any time; and 2. budgeted batch classification, where a fixed amount of computation is available to classify a set of examples that can be spent unevenly across “easier” and “harder” inputs. In contrast to most prior work, such as the popular Viola and Jones algorithm, our approach is based on convolutional neural networks. We train multiple classifiers with varying resource demands, which we adaptively apply during test time. To maximally re-use computation between the classifiers, we incorporate them as early-exits into a single deep convolutional neural network and inter-connect them with dense connectivity. To facilitate high quality classification early on, we use a two-dimensional multi-scale network architecture that maintains coarse and fine level features all-throughout the network. Experiments on three image-classification tasks demonstrate that our framework substantially improves the existing state-of-the-art in both settings.
accepted-oral-papers
As stated by reviewer 3 "This paper introduces a new model to perform image classification with limited computational resources at test time. The model is based on a multi-scale convolutional neural network similar to the neural fabric (Saxena and Verbeek 2016), but with dense connections (Huang et al., 2017) and with a classifier at each layer." As stated by reviewer 2 "My only major concern is the degree of technical novelty with respect to the original DenseNet paper of Huang et al. (2017)." The authors assert novelty in the sense that they provide a solution to improve computational efficiency and focus on this aspect of the problem. Overall, the technical innovation is not huge, but I think this could be a very useful idea in practice.
train
[ "rJSuJm4lG", "SJ7lAAYgG", "rk6gRwcxz", "Hy_75oomz", "HkJRFjomf", "HJiXYjjQz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "This work proposes a variation of the DenseNet architecture that can cope with computational resource limits at test time. The paper is very well written, experiments are clearly presented and convincing and, most importantly, the research question is exciting (and often overlooked). \n\nMy only major concern is the degree of technical novelty with respect to the original DenseNet paper of Huang et al. (2017). The authors add a hierarchical, multi-scale structure and show that DenseNet can better cope with it than ResNet (e.g., Fig. 3). They investigate pros and cons in detail adding more valuable analysis in the appendix. However, this work is basically an extension of the DenseNet approach with a new problem statement and additional, in-depth analysis. \n\nSome more minor comments: \n\n-\tPlease enlarge Fig. 4. \n-\tI did not fully grasp the details in the first \"Solution\" paragraph on P5. Please extend and describe in more detail. \n\nIn conclusion, this is a very well written paper that designs the network architecture (of DenseNet) such that it is optimized to include CPU budgets at test time. I recommend acceptance to ICLR18.\n \n\n\n", "This paper presents a method for image classification given test-time computational budgeting constraints. Two problems are considered: \"any-time\" classification, in which there is a time constraint to evaluate a single example, and batched budgets, in which there is a fixed budget available to classify a large batch of images. A convolutional neural network structure with a diagonal propagation layout over depth and scale is used, so that each activation map is constructed using dense connections from both same and finer scale features. In this way, coarse-scale maps are constructed quickly, then continuously updated with feed-forward propagation from lower layers and finer scales, so they can be used for image classification at any intermediate stage. Evaluations are performed on ImageNet and CIFAR-100.\n\nI would have liked to see the MC baselines also evaluated on ImageNet --- I'm not sure why they aren't there as well? Also on p.6 I'm not entirely clear on how the \"network reduction\" is performed --- it looks like finer scales are progressively dropped in successive blocks, but I don't think they exactly correspond to those that would be needed to evaluate the full model (this is \"lazy evaluation\"). A picture would help here, showing where the depth-layers are divided between blocks.\n\nI was also initially a bit unclear on how the procedure described for batched budgeted evaluation achieves the desired result: It seems this relies on having a batch that is both large and varied, so that its evaluation time will converge towards the expectation. So this isn't really a hard constraint (just an expected result for batches that are large and varied enough). This is fine, but could perhaps be pointed out if that is indeed the case.\n\nOverall, this seems like a natural and effective approach, and achieves good results.\n", "This paper introduces a new model to perform image classification with limited computational resources at test time. The model is based on a multi-scale convolutional neural network similar to the neural fabric (Saxena and Verbeek 2016), but with dense connections (Huang et al., 2017) and with a classifier at each layer. The multiple classifiers allow for a finer selection of the amount of computation needed for a given input image. 
The multi-scale representation allows for better performance at early stages of the network. Finally the dense connectivity allows to reduce the negative effect that early classifiers have on the feature representation for the following layers.\nA thorough evaluation on ImageNet and Cifar100 shows that the network can perform better than previous models and ensembles of previous models with a reduced amount of computation.\n\nPros:\n- The presentation is clear and easy to follow.\n- The structure of the network is clearly justified in section 4.\n- The use of dense connectivity to avoid the loss of performance of using early-exit classifier is very interesting.\n- The evaluation in terms of anytime prediction and budgeted batch classification can represent real case scenarios.\n- Results are very promising, with 5x speed-ups and same or better accuracy that previous models.\n- The extensive experimentation shows that the proposed network is better than previous approaches under different regimes.\n\nCons:\n- Results about the more efficient densenet* could be shown in the main paper\n\nAdditional Comments:\n- Why in training you used logistic loss instead of the more common cross-entropy loss? Has this any connection with the final performance of the network?\n- In fig. 5 left for completeness I would like to see also results for DenseNet^MT and ResNet^MT\n- In fig. 5 left I cannot find the 4% and 8% higher accuracy with 0.5x10^10 to 1.0x10^10 FLOPs, as mentioned in section 5.1 anytime prediction results\n- How the budget in terms of Mul-Adds is actually estimated?\n\nI think that this paper present a very powerful approach to speed-up the computational cost of a CNN at test time and clearly explains some of the common trade-offs between speed and accuracy and how to improve them. The experimental evaluation is complete and accurate. \n\n", "Thanks for positive comments. \n\n# difference to DenseNet\nAlthough dense connectivity is one of the two key components in our MSDNet, this paper is quite different from the original DenseNet paper: (1) in this paper we tackle a very different problem, the inference of deep models with computational resource limits at test time; (2) we show the multi-scale features are crucial for learning accurate early classifiers. Finally, MSDNet yields 2x to 5x faster inference speed than DenseNet under the batch budgeted setting.\n\n# minors\nThanks for these suggestions. We have incorporated them in the updated version.", "Thanks for the positive comments.\n\n# MC baselines on ImageNet\nWe exclude these results in our current version as we observed that they are far from competitive on both CIFAR-10 and CIFAR-100. We are testing the MC baselines on ImageNet, and will include it in a later version, but won’t expect them to be strong baselines.\n\n# network reduction\nThe ‘network reduction’ is a design choice to reduce redundancy in the network, while ‘lazy evaluation’ is a strategy to avoid redundant computations. We have added a figure (Figure 9) in the appendix to illustrate the reduced network as suggested. \n\n# batched budgeted evaluation\nThanks for pointing out. We have emphasize that the notion of budget in this context is a “soft constraint” given a large batch of testing samples.", "Thank you for the encouraging comments! \n\n# DenseNet*\nWe have included the DenseNet* results in the main paper as suggested. 
We placed this network originally in the appendix to keep the focus of the main manuscript on the MSDNet architecture, and it was introduced for the first time in this paper (although as a competitive baseline).\n\n# logistic loss\nWe actually used the cross entropy loss in our experiments. We have fixed this sentence. Thanks for pointing out.\n\n# DenseNet^MC and ResNet^MC on ImageNet (left panel of Fig.5)\nWe observed that DenseNet^MC and ResNet^MC are two of the weakest baselines on both CIFAR-10 and CIFAR-100 datasets. Therefore, we thought their results on ImageNet probably won’t add much to the paper. We can add these results in a later version.\n\n# improvements in the anytime setting\nIt should be 4% and 8% higher accuracy when the budget ranges from 0.1x10^10* to 0.3x10^10* FLOPs. We have corrected it in the updated version.\n\n# actually budget\nFor many devices, e.g., ARM processor, the actual inference time is basically a linear function of the number of Mul-Add operations. Thus in practice, given a specific device, we can estimate the budget in terms of Mul-Add according to the real time budget." ]
[ 8, 7, 10, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1 ]
[ "iclr_2018_Hk2aImxAb", "iclr_2018_Hk2aImxAb", "iclr_2018_Hk2aImxAb", "rJSuJm4lG", "SJ7lAAYgG", "rk6gRwcxz" ]
iclr_2018_HJGXzmspb
Training and Inference with Integers in Deep Neural Networks
Researches on deep neural networks with discrete parameters and their deployment in embedded systems have been active and promising topics. Although previous works have successfully reduced precision in inference, transferring both training and inference processes to low-bitwidth integers has not been demonstrated simultaneously. In this work, we develop a new method termed as "WAGE" to discretize both training and inference, where weights (W), activations (A), gradients (G) and errors (E) among layers are shifted and linearly constrained to low-bitwidth integers. To perform pure discrete dataflow for fixed-point devices, we further replace batch normalization by a constant scaling layer and simplify other components that are arduous for integer implementation. Improved accuracies can be obtained on multiple datasets, which indicates that WAGE somehow acts as a type of regularization. Empirically, we demonstrate the potential to deploy training in hardware systems such as integer-based deep learning accelerators and neuromorphic chips with comparable accuracy and higher energy efficiency, which is crucial to future AI applications in variable scenarios with transfer and continual learning demands.
accepted-oral-papers
High quality paper, appreciated by reviewers, likely to be of substantial interest to the community. It's worth an oral to facilitate a group discussion.
train
[ "SkzPEnBeG", "rJG2o3wxf", "SyrOMN9eM", "HJ7oecRZf", "r1t-e5CZf", "ryW51cAbG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "This paper proposes a method to train neural networks with low precision. However, it is not clear if this work obtains significant improvements over previous works. \n\nNote that:\n1)\tWorking with 16bit, one can train neural networks with little to no reduction in performance. For example, on ImageNet with AlexNet one gets 45.11% top-1 error if we don’t do anything else, and 42.34% (similar to the 32-bit result) if we additionally adjust the loss scale (e.g., see Boris Ginsburg, Sergei Nikolaev, and Paulius Micikevicius. “Training of deep networks with halfprecision float.” NVidia GPU Technology Conference, 2017). \n2)\tImageNet with AlexNet top-1 error (53.5%) in this paper seems rather high in comparison to previous works. Specifically, DoReFA and QNN, which used mostly lower precision (k_W=1, k_A=2 and k_E=6, k_G=32) one can get much lower performance (47% and 49%, respectively). So, the main innovation here, in comparison, is k_G=12.\n3)\tComparison using other datasets is made with different architectures then previous works, so it is hard to quantify what is the contribution of the proposed method. For example, on MNIST, the authors use a convolutional neural network, while BC and BNN used a fully connected neural network (the so called “permutation invariant mnist” problem).\n4)\tCifar performance is good, but may seem less remarkable, given that “Gated XNOR Networks: Deep Neural Networks with Ternary Weights and Activations under a Unified Discretization Framework” already showed that k_G=k_W=k_A=2, k_E=32 is sufficient to get 7.5% error on CIFAR. So the main novelty, in comparison, is that k_E=12.\n\nTaking all the above into account, it hard to be sure whether the proposed methods meaningfully improve existing methods. Moreover, I am not sure if decreasing the precision from 16bit to 12bit (as was done on ImageNet) is very useful for hardware applications, especially if there is such a degradation in accuracy. If, for example, the authors would have demonstrated all-8bit training on all datasets with little performance degradation, this would seem much more useful.\n\nMinor: there are some typos that should be corrected, e.g.: “Empirically, We demonstrates” in abstract.\n\n%%% Following the authors response %%%\nThe authors have improved their results and have addressed my concerns. I therefore raised my scores.\n\n", "The authors describe a method called WAGE, which quantize all operands and operators in a neural network, specifically, the weights (W), activations (A), gradients (G), and errors (E) . The idea is using quantizers with clipping (denoted in the paper with Q(x,k)) and some additional operators like shift (denoted with shift(x)) and stochastic rounding. The main motivation of the authors in this work is to reduce the number of bits for representation in a network for all the WAGE operations and operands which influences the power consumption and silicon area in hardware implementations.\n\nAfter introducing the idea and related work, the authors in Section 3 give details about how to perform the quantization. They introduce the additional operators needed for training in such network. Since quantization may loss some information, the authors need to quantize the signals in the network around the dynamic range in order not to \"kill\" the signal. The authors describe how to do that. Afterward, as in other techniques for quantization, they describe how to initialize the network values. 
Also, they argue that batch normalization in this network is replaced with the shift-quantize operations, and what is matter in this case is (1) the relative values (“orientations”) and not the absolute values and (2) small values in errors are negligible.\n\nAfterward, the authors conduct experiments on MNIST, SVHN, CIFAR10, and ILSVRC12 datasets, where they show promising results compared to the errors provided by previous works. The WAGE parameters (i.e., the quantized no. of bits used) are 2-8-8-8, respectively. For understand more the WAGE, the authors compare on CIFAR10 the test error rate with vanilla CNN and show is small loss in using their network. The authors investigate mainly the bitwidth of errors and gradients.\n\nIn overall, this paper is an accept since it shows good performance on standard problems and invent some nice tricks to implement NN in hardware, for *both* training and inference. For inference only, other works has more to offer but this is a promising technique for learning. The things that are still missing in this work are some power reduction estimates as well as area reduction estimations. This will give the hardware community a clear vision of how such methods may be implemented both in data centers as well as on end portable devices. \n", "The authors propose WAGE, which discretized weights, activations, gradients, and errors at both training and testing time. By quantization and shifting, SGD training without momentum, and removing the softmax at output layer as well, the model managed to remove all cumbersome computations from every aspect of the model, thus eliminating the need for a floating point unit completely. Moreover, by keeping up to 8-bit accuracy, the model performs even better than previously proposed models. I am eager to see a hardware realization for this method because of its promising results. \n\nThe model makes a unified discretization scheme for 4 different kinds of components, and the accuracy for each of the kind becomes independently adjustable. This makes the method quite flexible and has the potential to extend to more complicated networks, such as attention or memory. \n\nOne caveat is that there seem to be some conflictions in the results shown in Table 1, especially ImageNet. Given the number of bits each of the WAGE components asked for, a 28.5% top 5 error rate seems even lower than XNOR. I suspect it is due to the fact that gradients and errors need higher accuracy for real-valued input, but if that is the case, accuracies on SVHN and CIFAR-10 should also reflect that. Or, maybe it is due to hyperparameter setting or insufficient training time?\n\nAlso, dropout seems not conflicting with the discretization. If there are no other reasons, it would make sense to preserve the dropout in the network as well.\n\nIn general, the paper was written in good quality and in detail, I would recommend a clear accept.\n", "We sincerely appreciate the reviewer for the comments, which indeed helps us to improve the quality of this paper. \n\nIn our revised manuscript, we keep the last layer in full precision for ImageNet task (both BNN and DoReFa keep the first and the last layer in full precision). Our results have been improved from 53.5/28.6 with 28CC to 51.7/28.0 with 2888 bits setting. Results of other patterns are updated in Table4. 
We have now revised the paper accordingly and would like to provide point-by-point response on how these comments have been addressed:\n\n(1) Working with 16bit, one can train neural networks with little to no reduction in performance.\n\nWe introduce a thorough and flexible approach (from AnonReviewer3) towards training DNNs with fixed-point (8bit) integers, so there is no floating-point operands or operations in both inference and training phases. This is the key difference between our work and the previous works. As shown in Table5 in the revised manuscript, 5x reduction of energy and area costs can be achieved in this way, which we believe will greatly benefit the application of our method especially in mobile devices.\n\n(2) ImageNet with AlexNet top-1 error (53.5%) in this paper seems rather high in comparison to previous works.\n\nThe significant differences between WAGE and existing works (DoReFa, QNN, BNN) lie in that:\n\n 1. WAGE does not need to store real-valued weights (DoReFa, QNN, BNN need).\n 2. WAGE calculates both gradients and errors with 8-bit integers (QNN, BNN use float32).\n 3. Many of the techniques, say for example, batch normalization and Adam optimizer that are hard to be \n implemented on mobile devices are avoided by WAGE. \n\nThrough experiments, we find that, if we store real-valued weights and do not quantize back propagation, the performance on ImageNet is at the same level (although not the same specification) as that of DoReFa, QNN and BNN. Please refer to more detailed results in Table4.\n\n(3) Comparison using other datasets is made with different architectures then previous works\n\nPlease refer to the comparison between TWN and WAGE in Table1 where we show a better result with the same CNN architecture. \n\n(4) Cifar performance is good, but may seem less remarkable.\n\nIn fact, k-E is set as 8 in WAGE. Gated-XNOR uses a batch size of 1000 and totally trains for 1000 epochs, so the total training time and memory consumption are unsatisfactory. Besides, they use float32 to calculate gradients and errors, and batch normalization layer is kept to guarantee the convergence.\n\n(5) If, for example, the authors would have demonstrated all-8bit training on all datasets\n\nIn our experiments, we find that it is necessary to set k-G>k-W, otherwise the updates of weights will directly influence the forward propagation and cause instability. Most of the previous works store real-valued weights (32-bits k-G), so they meet this restriction automatically. By considering this comment, we focus on 2-8-8-8 training and the results for ImageNet are updated in Table1 and Table4. \n", "We thank the reviewer for the constructive suggestion:\n\n(1) The things that are still missing in this work are some power reduction estimates as well as area reduction estimations.\n\nWe have taken this suggestion and added Table5 in Discussion, and made a rough estimation. \n\nFor future work, we have tapped out our neuromorphic processors lately using phase-change memory to store weights and designed the ability to do some on-chip and on-site learning. The processor has 8-bit weights and 8-bit activation without any floating-point design. The real power consumption and area reduction of the processor has been simulated and estimated. It is very promising to implement some interesting application with continual learning demands on that chip as an end portable device.\n", "We thank the reviewer for the insightful comments. 
Please find our responses to individual questions below:\n\n(1) One caveat is that there seem to be some conflictions in the results shown in Table 1, especially ImageNet ...\n\nIn our revised manuscript, we keep the last layer in full precision for ImageNet task (BNN and DoReFa kept both the first and the last layer), the accuracy for 2-8-8-8 is 51.7/28.0 compared to original results 53.5/28.6 with 2-8-C-C bits setting. Results of other patterns are updated in Table4.\n\nWe find that the Softmax layer in the AlexNet model and 1000 categories jointly cause the conflictions. Since we make no exception for the first or the last layer, weights in the last layer will be limited to {-0.5,0,+0.5} and scaled by Equation(8), so the outputs of the last layer also obey a normal distribution N(0,1). The problem is that these values are small for a Softmax layer with 1000 categories. \n\nExample: \n\nx1=[0,0,0,…,1] (one-hot 1000 dims)\ny1=Softmax(x1)=[9.9e-4, 9.9e-4, …, 2.7e-3]\ne1 = z – x1, still a long way to train\nx2=[0, 0, 0,…,8] (one-hot 1000 dims)\ny2=Softmax(x2)=[1e-4, 1e-4, …, 0.75]\ne2 = z – x2, much closer to the label now\nlabel=z=[0,0,0,…,1].\n\nIn this case, we observe that 80% weights in the last FC layer are trained greedily to {+0.5} to magnify the outputs. Therefore, the last layer would be a bottleneck for both inference and backpropagation. That might be why previous works do not quantize the last layer. The experiments on CIFAR10 and SVHN did not use Softmax cross-entropy and had only 10 categories, which indicates no accuracy drop. \n\n\n(2)Also, dropout seems not conflicting with the discretization...\n\nYes, it is an additional method to alleviate over-fitting. Because we are working on designing a new neuromorphic computing chip, dropout will make the pipeline of weights and MAC calculations a little bit weird. Anyone who has no concern of that can easily add dropout to the WAGE graph.\n\n\t\n" ]
[ 7, 7, 8, -1, -1, -1 ]
[ 4, 3, 4, -1, -1, -1 ]
[ "iclr_2018_HJGXzmspb", "iclr_2018_HJGXzmspb", "iclr_2018_HJGXzmspb", "SkzPEnBeG", "rJG2o3wxf", "SyrOMN9eM" ]
iclr_2018_HJGv1Z-AW
Emergence of Linguistic Communication from Referential Games with Symbolic and Pixel Input
The ability of algorithms to evolve or learn (compositional) communication protocols has traditionally been studied in the language evolution literature through the use of emergent communication tasks. Here we scale up this research by using contemporary deep learning methods and by training reinforcement-learning neural network agents on referential communication games. We extend previous work, in which agents were trained in symbolic environments, by developing agents which are able to learn from raw pixel data, a more challenging and realistic input representation. We find that the degree of structure found in the input data affects the nature of the emerged protocols, and thereby corroborate the hypothesis that structured compositional language is most likely to emerge when agents perceive the world as being structured.
accepted-oral-papers
Important problem (analyzing the properties of emergent languages in multi-agent reference games), a number of interesting analyses (both with symbolic and pixel inputs), reaching a finding that varying the environment and restrictions on language results in variations in the learned communication protocols (which in hindsight is not that surprising, but that's hindsight). While the pixel experiments are not done with real images, it's an interesting addition to the literature nonetheless.
train
[ "HJ3-u2Ogf", "H15X_V8yM", "BytyNwclz", "S1XPn0jXG", "r1QdpPjXf", "SJWDw1iXG", "ryjhESdQG", "S1GjVrOmz", "rJylbvSzG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author" ]
[ "--------------\nSummary:\n--------------\nThis paper presents a series of experiments on language emergence through referential games between two agents. They ground these experiments in both fully-specified symbolic worlds and through raw, entangled, visual observations of simple synthetic scenes. They provide rich analysis of the emergent languages the agents produce under different experimental conditions. This analysis (especially on raw pixel images) make up the primary contribution of this work.\n\n\n--------------\nEvaluation:\n--------------\nOverall I think the paper makes some interesting contributions with respect to the line of recent 'language emergence' papers. The authors provide novel analysis of the learned languages and perceptual system across a number of environmental settings, coming to the (perhaps uncontroversial) finding that varying the environment and restrictions on language result in variations in the learned communication protocols. \n\nIn the context of existing literature, the novelty of this work is somewhat limited -- consisting primarily of the extension of multi-agent reference games to raw-pixel inputs. While this is a non-trivial extension, other works have demonstrated language learning in similar referring-expression contexts (essentially modeling only the listener model [Hermann et.al 2017]). \n\nI have a number of requests for clarification in the weaknesses section which I think would improve my understanding of this work and result in a stronger submission if included by the authors. \n\n--------------\nStrengths:\n--------------\n- Clear writing and document structure. \n\n\n- Extensive experimental setting tweaks which ablate the information and regularity available to the agents. The discussion of the resulting languages is appropriate and provides some interesting insights.\n\n\n- A number of novel analyses are presented to evaluate the learned languages and perceptual systems. \n\n\n--------------\nWeaknesses:\n--------------\n- How stable are the reported trends / languages across multiple runs within the same experimental setting? The variance of REINFORCE policy gradients (especially without a baseline) plus the general stochasticity of SGD on randomly initialized networks leads me to believe that multiple training runs of these agents might result is significantly different codes / performance. I am interested in hearing the author's experiences in this regard and if multiple runs present similar quantitative and qualitative results. I admit that expecting identical codes is unrealistic, but the form of the codes (i.e. primarily encoding position) might be consistent even if the individual mappings are not).\n\n\n- I don't recall seeing descriptions of the inference-time procedure used to evaluate training / test accuracy. I will assume argmax decoding for both speaker and listener. Please clarify or let me know if I missed something.\n\n\n- There is ambiguity in how the \"protocol size\" metric is computed. In Table 1, it is defined as 'the effective number of unique message used'. This comes back to my question about decoding I suppose, but does this count the 'inference-time' messages or those produced during training? \nFurthermore, Table 2 redefines \"protocol size\" as the percentage of novel message. I assume this is an editing error given the values presented and take these columns as counts. 
It also seems \"protocol size\" is replaced with the term \"lexicon\" from 4.1 onward.\n\n- I'm surprised by how well the agents generalize in the raw pixel data experiments. In fact, it seems that across all games the test accuracy remains very close to the train accuracy. \n\nGiven the dataset is created by taking all combinations of color / shape and then sampling 100 location / floor color variations, it is unlikely that a shape / color combo has not been seen in training. Such that the only novel variations are likely location and floor color. However, taking Game A as an example, the probe classifiers are relatively poor at these attributes -- indicating the speaker's representation is not capturing these attributes well. Then how do the agents effectively differentiate so well between 20 images leveraging primarily color and shape?\n\nI think some additional analysis of this setting might shed some light on this issue. One thought is to compute upper-bounds based on ground truth attributes. Consider a model which knows shape perfectly, but cannot predict other attributes beyond chance. To compute the performance of such a model, you could take the candidate set, remove any instances not matching the ground truth shape, and then pick randomly from the remaining instances. Something similar could be repeated for all attributes independently as well as their combinations -- obviously culminating in 100% accuracy given all 4. It could be that by dataset construction, object location and shape are sufficient to achieve high accuracy because the odds of seeing the same shape at the same location (but different color) is very low. \n\nGiven these are operations on annotations and don't require time-consuming model training, I hope to see this analysis in the rebuttal to put the results into appropriate context.\n\n\n- What is random chance for the position and floor color probe classifiers? I don't think it is mentioned how many locations / floor colors are used in generation. \n\n\n- Relatively minor complaint: Both agents are trained via the REINFORCE policy gradient update rule; however, the listener agent makes a fairly standard classification decision and could be trained with a standard cross-entropy loss. That is to say, the listener policy need not make intermediate discrete policy decisions. This decision to withhold available supervision is not discussed in the paper (as far as I noticed), could the authors speak to this point?\n\n\n\n--------------\nCuriosities:\n--------------\n- I got the impression from the results (specifically the lack of discussion about message length) that in these experiments agents always issued full length messages even though they did not need to do so. If true, could the authors give some intuition as to why? If untrue, what sort of distribution of lengths do you observe?\n\n- There is no long term planning involved in this problem, so why use reinforcement learning over some sort of differentiable sampler? With some re-parameterization (i.e. 
Gumbel-Softmax), this model could be end-to-end differentiable.\n\n\n--------------\nMinor errors:\n--------------\n[2.2 paragraph 1] LSTM citation should not be in inline form.\n[3 paragraph 1] 'Note that these representations do care some' -> carry\n[3.3.1 last paragraph] 'still able comprehend' --> to\n\n\n-------\nEdit\n-------\nUpdating rating from 6 to 7.", "This paper presents a set of studies on emergent communication protocols in referential games that use either symbolic object representations or pixel-level representations of generated images as input. The work is extremely creative and packed with interesting experiments.\n\nI have three main comments.\n\n* CLARITY OF EXPOSITION\n\nThe paper was rather hard to read. I'll provide some suggestions for improvement in the minor-comments section below, but one thing that could help a lot is to establish terminology at the beginning, and be consistent with it throughout the paper: what is a word, a message, a protocol, a vocabulary, a lexicon? etc.\n\n* RELATION BETWEEN VOCABULARY SIZE AND PROTOCOL SIZE\n\nIn the compositional setup considered by the authors, agents can choose how many basic symbols to use and the length of the \"words\" they will form with these symbols. There is virtually no discussion of this interesting interplay in the paper. Also, there is no information about the length distribution of words (in basic symbols), and no discussion of whether the latter was meaningful in any way.\n\n* RELATION BETWEEN CONCEPT-PROPERTY AND RAW-PIXEL STUDIES\n\nThe two studies rely on different analyses, and it is difficult to compare them. I realize that it would be impossible to report perfectly comparable analyses, but the authors could at least apply the \"topographic\" analysis of compositionality in the raw-pixel study as well, either by correlating the CNN-based representational similarities of the Speaker with its message similarities, or computing similarity of the inputs in discretized, symbolic terms (or both?).\n\n* MINOR/DETAILED COMMENTS\n\nSection 1\n\nHow do you think emergent communication experiments can shed light on language acquisition?\n\nSection 2\n\nIn figure 1, the two agents point at nothing.\n\n\\mathbf{v} is a set, but it's denoted as a vector. Right below that, h^S is probably h^L?\n\nall candidates c \\in C: or rather their representations \\mathbf{v}?\n\nGive intuition for the reward function.\n\nSection 3\n\nWe use the dataset of Visual Attributes...: drop \"dataset\"\n\nI think the pre-linguistic objects are not represented by 1-hot, but binary vectors.\n\ndo care some inherent structure: carry\n\nNote that symbols in V have no pre-defined semantics...: This is repeated multiple times.\n\nSection 3\n\nI couldn't find simulation details: how many training elements, and how is training accuracy computed? Also, \"training data\", \"training accuracy\" are probably misleading terms, as I suppose you measured performance on new combinations of objects.\n\nI find \"Protocol Size\" to be a rather counterintuitive term: maybe call Vocabulary Size \"Alphabet Size\", and Protocol Size \"Lexicon Size\"?\n\nState in Table 1 caption that the topographic measure will be explained in a later section. Also, the -1 is confusing: you can briefly mention when you introduce the measure that since you correlate a distance with a similarity you expect an inverse relation? 
Also, you mention in the caption that all Spearman rhos are significant, but where are they presented again?\n\nSection 3.2\n\nDoes the paragraph starting with \"Note that the distractor\" refer to a figure or table that is not there? If not, it should be there, since it's not clear what are the data that support your claims there. Also, you should explain what the degenerate strategy the agents find is.\n\nNext paragraph:\n\n- I find the usage of \"obtaining\" to refer to the relation between messages and objects strange.\n\n- in which space are the reported pairwise similarities computed?\n\n- make clear that in the non-uniform case confusability is less influenced by similarity since the agents must learn to distinguish between similar objects that naturally co-occur (sheep and goats)\n\n- what is the expected effect on the naturalness of the emerged language?\n\nSection 3.3\n\nadhere to, the ability to: \"such as\" missing?\n\nIs the unigram chimera distribution inferred from the statistics over the distribution of properties across all concepts or what? (please clarify.)\n\nIn Tables 2 and 3, why is vocabulary size missing?\n\nIn Table 2, say that the protocol size columns report novel message percentage **for the \"test\" conditions***\n\nFigure 2: spelling of Levensthein\n\nSection 3.3.2\n\nwhile for languages (c,d)... something missing.\n\nwith a randomly initialized...: no a\n\nMore importantly, I don't understand this \"random\" setup: if architecture was fixed and randomly initialized, how could something be learned about the structure of the data?\n\nSection 4\n\nRefer to the images the agents must communicate about as \"scenes\", since objects are just a component of them.\n\nWhat are the absolute sizes of train and test splits?\n\nSection 4.1\n\nwe do not address this issue: the issue\n\nSection 4.2\n\nat least in the game C&D: games\n\nWhy is Appendix A containing information that logically follows that in Appendix B?\n", "This paper presents an analysis of the communication systems that arose when neural network based agents played simple referential games. The set up is that a speaker and a listener engage in a game where both can see a set of possible referents (either represented symbolically in terms of features, or represented as simple images) and the speaker produces a message consisting of a sequence of numbers while the listener has to make the choice of which referent the speaker intends. This is a set up that has been used in a large amount of previous work, and the authors summarize some of this work. The main novelty in this paper is the choice of models to be used by speaker and listener, which are based on LSTMs and convolutional neural networks. The results show that the agents generate effective communication systems, and some analysis is given of the extent to which these communications systems develop compositional properties – a question that is currently being explored in the literature on language creation.\n\nThis is an interesting question, and it is nice to see worker playing modern neural network models to his question and exploring the properties of the solutions of the phone. However, there are also a number of issues with the work.\n\n1. One of the key question is the extent to which the constructed communication systems demonstrate compositionality. The authors note that there is not a good quantitative measure of this. However, this is been the topic of much research of the literature and language evolution. 
This work has resulted in some measures that could be applied here, see for example Carr et al. (2016): http://www.research.ed.ac.uk/portal/files/25091325/Carr_et_al_2016_Cognitive_Science.pdf\n\n2. In general the results occurred be more quantitative. In section 3.3.2 it would be nice to see statistical tests used to evaluate the claims. Minimally I think it is necessary to calculate a null distribution for the statistics that are reported.\n\n3. As noted above the main novelty of this work is the use of contemporary network models. One of the advantages of this is that it makes it possible to work with more complex data stimuli, such as images. However, unfortunately the image example that is used is still very artificial being based on a small set of synthetically generated images.\n\nOverall, I see this as an interesting piece of work that may be of interest to researchers exploring questions around language creation and language evolution, but I think the results require more careful analysis and the novelty is relatively limited, at least in the way that the results are presented here.", "We would like to thank all the reviewers for their thoughtful and detailed feedback. We particularly thank them for recognizing that this is an interesting piece of work.\n\nWe have now revised our manuscript to address the concerns raised by the reviewers, hopefully producing a stronger and clearer submission. The most significant changes are:\n\n\t(as asked by AnonReviewer3)\n* We have added statistical tests (permutation test) to support claims regarding the results of the topographic similarity\n* We have added 2 sentences in the abstract and conclusion to make clear our contributions on extending work in the language evolution literature to contemporary DL materials.\n\n\t(as asked by AnonReviewer2)\n* We have added in the Appendix C a new experiment on communicative success on the raw pixel data with models operating on gold attribute classifiers\n* We have added a comment about instability of REINFORCE affecting nature of protocols in same of the experimental setups of Section 4\n\n\t(as asked by AnonReviewer1)\n* We made all the requested clarifications (thanks again for the detailed review)\n* Added Figure 2 to visually illustrate the claims in Section 3.2\n* Added topographic similarity measurements for Section 4 (Table 3) which strengthen the findings of the qualitative analysis of game A producing structurally consistent messages.\n", "Thanks for the clarifications, and looking forward to the revised paper.", "We would like to thank the reviewer for their review. We found their comments extremely helpful and we are in the process of updating the manuscript accordingly. We will upload the revised paper tomorrow. In the meantime, we respond here to the major comments.\n\n\n<review>\n* CLARITY OF EXPOSITION\n</review>\nWe will introduce the terminology together with the description of the game.\n\n<review>\n* RELATION BETWEEN VOCABULARY SIZE AND PROTOCOL SIZE\n</review>\nWithout any explicit penalty on the length of the messages (Section 2), agents are not motivated to produce shorter messages (despite the fact that as the reviewer points, agents can decide to do so) since this constrains the space of messages (and thus the possibility of the speaker and listener agreeing on a successful naming convention), opting thus to always make use of the maximum possible length. 
When we introduced a penalty on the length of the message (Section 3), agents produced shorter messages for the ambiguous messages since this strategy maximizes the total expected reward.\n\n\n<review>\n* RELATION BETWEEN CONCEPT-PROPERTY AND RAW-PIXEL STUDIES\n</review>\nThanks for the suggestion. Correlating CNN-based representations with message similarities would not yield any new insight since these representations are the input to the message generation process. However, we ran the analysis on the symbolic representations of the images (location cluster, color, shape, floor color cluster) and the messages and found that the topographic similarities of the games are ordered as follows (in parentheses we report the topographic $\\rho$): game A (0.13) > game C (0.07) > game D (0.06) > game B (0.006).\nThis ordering is in line with our qualitative analysis of the protocols presented in Section 4.1.\n\n<review>\nFigures/Tables for \"Note that the distractor\"paragraph and degenerate strategy.\n</review>\nWe will include in the manuscript the training curves that this paragraph refers to.\nThe degenerate strategy is that of picking a target at random from the topically relevant set of distractors, thus reducing the effective size of distractors.\n\n<review>\n\"random\" setup...\n</review>\nDespite the fact that the weights of the networks are random, since the message generation is a parametric process, similar inputs will tend to generate similar outputs, thus producing messages that retain (at least to some small degree) the structure of the input data, despite the fact that there is no learning at all.\n", "\n<review>\nrandom chance of probe classifiers.\n</review>\nWhen generating the dataset, we sample locations and floor colors from a continuous scale. For the probe classifiers, we quantize location by clustering each coordinate in 5 clusters (and thus accuracy is reported by averaging the performance of the x and y probe classifiers with chance being at 20% for each co-ordinate) and floor colors in 3 clusters (with chance being at 33%). We will include the chance levels in Table 4.\n\n<review>\nWhy not use cross-entropy loss for listener?\n</review>\nWe decided to train both agents via REINFORCE for symmetry. Given the nature of the listener’s choice, we don’t anticipate full supervision to have an effect other than speeding up learning.\n\n\n<review>\nWhat about message length?\n</review>\nWithout any explicit penalty on the length of the messages (Section 2), agents are not motivated to produce shorter messages (despite the fact that as the reviewer points, agents can decide to do so) since this constrains the space of messages (and thus the possibility of the speaker and listener agreeing on a successful naming convention). When we introduced a penalty on the length of the message (Section 3), agents produced shorter messages for the ambiguous messages (since this strategy maximizes the total expected reward).\n\n<review>\nWhy use reinforcement learning over some sort of differentiable sampler?\n</review>\nWhile a differentiable communication channel would make learning faster, it goes against the basic and fundamental principles of human communication (and also against how this phenomenon is studied in language evolution). 
Simply put, having a differentiable channel would mean in practice that speakers can back-propagate through listeners’ brains (which unfortunately is not the case in real life :)) We wanted to stay as close as possible to this communication paradigm, thus using a discrete communication channel.", "We thank the reviewer for their thorough review. We respond to the comments raised while we are in the process of making the necessary changes in the manuscript.\n\n<review>\nHow stable are results?\n</review>\nOverall, results with REINFORCE in these non-stationary multi-agent environments (where speakers and listeners are learning at the same time) show instability, and -- as expected -- some of the experimental runs did not converge. However, we believe that the stability of the nature of the protocol (rather than its existence) is mostly influenced by the configuration of the game itself, i.e., how constrained the message space is. As an example, games C & D impose constraints on the nature of the protocol since location encoding location on the messages is not an acceptable solution -- on runs that we had convergence, the protocols would always communicate about color. The same holds for game A (position is a very good strategy since it uniquely identifies objects combined with the environmental pressure of many distractors). However, game B is more unconstrained in nature and the converged protocols were more varied. We will include a discussion of these observations in the updated manuscript.\n\n<review>\nInference time procedure\n</review>\nThe reviewer is correct. At training time we sample, at test time we argmax. We will clarify this.\n\n<review>\nProtocol size vs lexicon\n</review>\nThank you for pointing this out. We will clarify the terminology.\nProtocol size (or lexicon -- we will remove this term and use protocol size only) is the number of invented messages (sequences of symbols).\nIn Table 1, we report the protocol size obtained with argmax on training data.\nIn Table 2, we report the number of novel messages, i.e., messages that were not generated for the training data, on 100 novel objects.\n\n<review>\nGeneralization on raw pixel data -- training and test accuracy are close\n</review>\nThis observation is correct. By randomly creating train and test splits, chances are that the test data contain objects of seen color and shape combination but unseen location. Neural networks (and any other parametric model) do better in these type of “smooth” generalizations caused by a continuous property like location.\n\n<review>\nHowever, taking Game A as an example, the probe classifiers are relatively poor at these attributes -- indicating the speaker's representation is not capturing these attributes well. \nThen how do the agents effectively differentiate so well between 20 images leveraging primarily color and shape?\n</review>\nIn Game A, agents differentiate 20 objects leveraging primarily object position rather than color and shape.\nIn Game A, the listener needs to differentiate between 20 objects, and so, communicating about color and shape is not a good strategy as there are chances that there will be some other red cube, for example, on the distractor list. The probe classifiers are performing relatively poorly on these attributes (especially on the object color) whereas they perform very well on position (which is in fact a good strategy), which as we find by our analysis is what the protocol describes. 
We note that location is a continuous variable (which we discretize only for performing the probe analysis in Section 4.2) and so it is very unlikely that two objects have the same location, thus uniquely identifying objects among distractors. This is not the case for games C & D since the listener sees a variation of the speaker’s target.\nMoreover, we note, that object location is encoded across all games.\n\n<review>\nUpper-bound analysis based on ground truth attributes.\n</review>\nWe agree with the reviewer that an upper-bound analysis relying on gold information of objects will facilitate the exposition of results. Note that since location is a continuous variable, ground truth of location is not relevant.\n\t\tcolor \tshape color & shape\nA\t\t0.37\t 0.24\t0.80\nB & C \t0.93\t 0.90\t0.98\nD\t\t0.89\t 0.89\t0.98\n\nWe could perform the same analysis by discretizing the location in the same way we performed the probe analysis in Section 4.2, however, the upper-bound results depend on the number of discrete locations we derive.\n\t\tlocation\t color & location\tshape & location\nA\t\t0.69\t\t 0.95\t\t\t0.92\nB \t\t0.97\t\t 0.99\t\t\t0.99\n(for C and D results for location are not applicable)\n\n\n\n", "We thank the reviewer for their comments.\nFor replying, we copy-paste the relevant part and comment on it.\n\n<review> 1. One of the key question ... Carr et al. (2016): http://www.research.ed.ac.uk/portal/files/25091325/Carr_et_al_2016_Cognitive_Science.pdf\" \n</review>\n\nWe agree with the reviewer that there are good existing measures. Our point was only that there is no mathematical definition and hence no definitive measure. In fact, we do include such a measure found in the literature on language evolution. Our topographic similarity measure (which is introduced by Brighton & Kirby (2006)) is in line with the measure introduced in 2.2.3 in Carr et al.. In Carr et al, the authors correlate Levenshtein message distances and triangle dissimilarities (as obtained from humans). In our study, we correlate Levenshtein message distances and object dissimilarities as obtained by measuring cosine distance of the object feature norms (which are produced by humans). We will make sure to make this connection to previous literature explicit in our description of the measure.\n\n<review>\n2. In general the results occurred be more quantitative....statistics that are reported.\n</review>\n\nWe agree with the reviewer that statistical tests are important, and we politely point out that our claims on 3.3.2 are in fact based on the reported numbers in Table 1 “topographic ρ” column. However, we will evaluate the statistical significance of the “topographic ρ” measure by calculating the null distribution via a repeated shuffling of the Levenshtein distances (or an additional test if the reviewer has an alternative suggestion).\n\n<review>\n3. As noted above the main novelty of this work is the use of contemporary network models\n</review>\n\nWe believe the novelty of this work is to take the well-defined and interesting questions that the language evolution literature has posed and try to scale them up to contemporary deep learning models and materials, i.e., realistic stimuli in terms of objects and their properties (see Section 3), raw pixel stimuli (see Section 4) and neural network architectures (see Section 2). 
This kind of interdisciplinary work can not only inform current models on their strengths and weaknesses (as we note in Section 4 we find that neural networks starting from raw pixels cannot out-of-the-box process easily stimuli in a compositional way), but also open up new possibilities for language evolution research in terms of more realistic model simulations. We believe that this might not have been clear from the manuscript and will update the abstract and conclusion to reflect the premises of the work.\n\n<review>\nOne of the advantages of this is that it makes it possible to work with more complex data stimuli, such as images. However, unfortunately the image example that is used is still very artificial being based on a small set of synthetically generated images.\n</review>\n\nMore complex image stimuli and realistic simulations is where we are heading. However, we (as a community) first need to understand how these models behave with raw pixels before scaling them up to complex stimuli. The nature of this work was to lay the groundwork on this question and investigate the properties of protocols in controlled (yet realistic in terms of nature) environments where we can tease apart clearly the behaviour of the model given the small number of variations of the pixel stimuli (object color/shape/position and floor color). Performing the type of careful analysis we did for complex scenes is substantially harder due to the very large number of factors we would have to control (diverse objects of multiple colors, shapes, sizes, diverse backgrounds etc) so it puts into question to what degree we could have achieved a similar degree of introspection by immediately using more complex datasets in the current study.\n\n<review>\nOverall, I see this as an interesting piece of work that may be of interest to researchers exploring questions around language creation and language evolution, but I think the results require more careful analysis and the novelty is relatively limited, at least in the way that the results are presented here.\n</review>\n\nWe will upload an updated version of our paper by the end of this week containing \n1) the statistical test of the null distribution \n2) clarifications regarding the topographic measure and \n3) we will clarify the main contributions of this work and better relate it to the existing literature in language evolution\n\nMoreover, we would be really happy to conduct further analyses and clarify the exposition of results. If the reviewer has specific suggestions on this, we would like to hear them in order to improve the quality of the manuscript and strengthen our submission. \n" ]
[ 7, 9, 5, -1, -1, -1, -1, -1, -1 ]
[ 4, 5, 4, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_HJGv1Z-AW", "iclr_2018_HJGv1Z-AW", "iclr_2018_HJGv1Z-AW", "iclr_2018_HJGv1Z-AW", "SJWDw1iXG", "H15X_V8yM", "HJ3-u2Ogf", "HJ3-u2Ogf", "BytyNwclz" ]
iclr_2018_Hkbd5xZRb
Spherical CNNs
Convolutional Neural Networks (CNNs) have become the method of choice for learning problems involving 2D planar images. However, a number of problems of recent interest have created a demand for models that can analyze spherical images. Examples include omnidirectional vision for drones, robots, and autonomous cars, molecular regression problems, and global weather and climate modelling. A naive application of convolutional networks to a planar projection of the spherical signal is destined to fail, because the space-varying distortions introduced by such a projection will make translational weight sharing ineffective. In this paper we introduce the building blocks for constructing spherical CNNs. We propose a definition for the spherical cross-correlation that is both expressive and rotation-equivariant. The spherical correlation satisfies a generalized Fourier theorem, which allows us to compute it efficiently using a generalized (non-commutative) Fast Fourier Transform (FFT) algorithm. We demonstrate the computational efficiency, numerical accuracy, and effectiveness of spherical CNNs applied to 3D model recognition and atomization energy regression.
accepted-oral-papers
This work introduces a trainable signal representation for spherical signals (functions defined on the sphere) which is rotationally equivariant by design, by extending CNNs to the corresponding group SO(3). The method is implemented efficiently using fast Fourier transforms on the sphere and illustrated with compelling tasks such as 3D shape recognition and molecular energy prediction. Reviewers agreed this is a solid, well-written paper, which demonstrates the usefulness of group invariance/equivariance beyond the standard Euclidean translation group in real-world scenarios. It will be a great addition to the conference.
train
[ "r1VD9T_SM", "r1rikDLVG", "SJ3LYkFez", "B1gQIy9gM", "Bkv4qd3bG", "r1CVE6O7f", "Sy9FmTuQM", "ryi-Q6_Xf", "HkZy7TdXM", "S1rz4yvGf" ]
[ "public", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public" ]
[ "How to describe the relationships between these two papers?", "Thank you for the feedback; I maintain my opinion.", "Summary:\n\nThe paper proposes a framework for constructing spherical convolutional networks (ConvNets) based on a novel synthesis of several existing concepts. The goal is to detect patterns in spherical signals irrespective of how they are rotated on the sphere. The key is to make the convolutional architecture rotation equivariant.\n\nPros:\n\n+ novel/original proposal justified both theoretically and empirically\n+ well written, easy to follow\n+ limited evaluation on a classification and regression task is suggestive of the proposed approach's potential\n+ efficient implementation\n\nCons:\n\n- related work, in particular the first paragraph, should compare and contrast with the closest extant work rather than merely list them\n- evaluation is limited; granted this is the nature of the target domain\n\nPresentation:\n\nWhile the paper is generally written well, the paper appears to conflate the definition of the convolutional and correlation operators? This point should be clarified in a revised manuscript. \n\nIn Section 5 (Experiments), there are several references to S^2CNN. This naming of the proposed approach should be made clear earlier in the manuscript. As an aside, this appears a little confusing since convolution is performed first on S^2 and then SO(3). \n\nEvaluation:\n\nWhat are the timings of the forward/backward pass and space considerations for the Spherical ConvNets presented in the evaluation section? Please provide specific numbers for the various tasks presented.\n\nHow many layers (parameters) are used in the baselines in Table 2? If indeed there are much less parameters used in the proposed approach, this would strengthen the argument for the approach. On the other hand, was there an attempt to add additional layers to the proposed approach for the shape recognition experiment in Sec. 5.3 to improve performance?\n\nMinor Points:\n\n- some references are missing their source, e.g., Maslen 1998 and Kostolec, Rockmore, 2007, and Ravanbakhsh, et al. 2016.\n\n- some sources for the references are presented inconsistency, e.g., Cohen and Welling, 2017 and Dieleman, et al. 2017\n\n- some references include the first name of the authors, others use the initial \n\n- in references to et al. or not, appears inconsistent\n\n- Eqns 4, 5, 6, and 8 require punctuation\n\n- Section 4 line 2, period missing before \"Since the FFT\"\n\n- \"coulomb matrix\" --> \"Coulomb matrix\"\n\n- Figure 5, caption: \"The red dot correcpond to\" --> \"The red dot corresponds to\"\n\nFinal remarks:\n\nBased on the novelty of the approach, and the sufficient evaluation, I recommend the paper be accepted.\n\n", "The focus of the paper is how to extend convolutional neural networks to have built-in spherical invariance. Such a requirement naturally emerges when working with omnidirectional vision (autonomous cars, drones, ...).\n\nTo get invariance on the sphere (S^2), the idea is to consider the group of rotations on S^2 [SO(3)] and spherical convolution [Eq. (4)]. To be able to compute this convolution efficiently, a generalized Fourier theorem is useful. In order to achieve this goal, the authors adapt tools from non-Abelian [SO(3)] harmonic analysis. The validity of the idea is illustrated on 3D shape recognition and atomization energy prediction. 
\n\nThe paper is nicely organized and clearly written; it fits to the focus of ICLR and can be applicable on many other domains as well.\n", "First off, this paper was a delight to read. The authors develop an (actually) novel scheme for representing spherical data from the ground up, and test it on three wildly different empirical tasks: Spherical MNIST, 3D-object recognition, and atomization energies from molecular geometries. They achieve near state-of-the-art performance against other special-purpose networks that aren't nearly as general as their new framework. The paper was also exceptionally clear and well written.\n\nThe only con (which is more a suggestion than anything)--it would be nice if the authors compared the training time/# of parameters of their model versus the closest competitors for the latter two empirical examples. This can sometimes be an apples-to-oranges comparison, but it's nice to fully contextualize the comparative advantage of this new scheme over others. That is, does it perform as well and train just as fast? Does it need fewer parameters? etc.\n\nI strongly endorse acceptance.", "Thank you for the kind words, we're glad you like our work! \n\nOur models for SHREC17 and QM7 both use only about 1.4M parameters. On a machine with 1 Titan X GPU, training the SHREC17 model takes about 50 hours, while the QM7 model takes only about 3 hours. Memory usage is 8GB for SHREC (batchsize 16) and 7GB for QM7 (batchsize 20).\n\nWe have studied the SHREC17 paper [1], but unfortunately it does not state the number of parameters or training time for the various methods. It does seem likely that each of the competition participants did their own cross validation, and arrived at an appropriate model complexity for their method. It is thus unlikely that the strong performance of our model relative to others can be explained by its size (especially since 1.4M parameters is not considered very large anymore).\n\nFor QM7, it looks like Montavon et al. used about 760k parameters (we have deduced this from the description of their network architecture). Since the model is a simple multi-layer perceptron applied to a hand-designed feature representation, we expect that it is substantially faster to train than our model (though indeed comparing a spherical CNN to an engineered features+MLP approach is a bit of an apples-to-oranges comparison). Raj et al. use a non-parametric method, so there is no parameter count or training time to compare to.\n\n[1] M. Savva et al. SHREC’17 Track Large-Scale 3D Shape Retrieval from ShapeNet Core55, Eurographics Workshop on 3D Object Retreival (2017).", "Thank you for the detailed and balanced review.\n\nRE Related work: we have expanded the related work section a little bit in order to contrast with previous work. (Unfortunately there is no space for a very long discussion)\n\nRE Convolution vs correlation: thank you for pointing this out. Our reasoning had been that:\n1) Everybody in deep learning uses the word \"convolution\" to mean \"cross-correlation\".\n2) In the non-commutative case, there are several different but essentially equivalent convolution-like integrals that one can define, with no really good reason to prefer one over the other.\n\nBut we did not explain this properly. We think a reasonable approach is to call something group convolution if, for the translation group it specializes to the standard convolution, and similarly for group correlations. 
This seems to be what several others before us have done as well, so we will follow this convention. Specifically, we will define the (group) cross-correlation as:\n psi \\star f(g) = int psi(g^{-1} h) f(h) dh.\n\nRE The S^2CNN name: we have now defined this term in the introduction, but not changed it, because the paper is called \"Spherical CNN\" and S^2-CNN is just a shorthand for that name.\n\nRE Timings: we have added timings, memory usage numbers, and number of parameters to the paper. It is not always possible to compare the number of parameters to related work because those numbers are not always available. However, we can reasonably assume that the competing methods did their own cross-validation to arrive at an optimal model complexity for their architecture. (Also, in deep networks, the absolute number of parameters can often vary widely between architectures that have a similar generalization performance, making this a rather poor measure of model complexity.)\n\nRE References and other minor points: we have fixed all of these issues. Thanks for pointing them out.", "Thank you very much for taking the time to review our work.", "Thank you for these references, they are indeed very relevant and interesting*. We will add them and change the text.\n\nWe agree that the cross-correlation is the right term, and have fixed it in the paper. We have added further discussion of this issue in reply to reviewer 2, who raised a similar concern.\n\n* We do not have access to Rafaely's book through our university library, so we cannot comment on it.\n", " In page 5: \"This says that the SO(3)-FT of the S2 convolution (as we have defined it) of two spherical signals can be computed by taking the outer product of the S2-FTs of the signals. This is shown in figure 2. We were unable to find a reference for the latter version of the S2 Fourier theorem\"\n\n The result is presented at least in:\n - Makadia et al. (2007), eq (21),\n - Kostelec and Rockmore (2008), eq (6.6),\n - Gutman et al. (2008), eq (9),\n - Rafaely (2015), eq (1.88).\n\n All mentioned references define \"spherical correlation\" as what you define as \"spherical convolution\". I believe it makes more sense to call it correlation, since it can be seen as a measure of similarity between two functions (given two functions on S2 and transformations on SO(3), the correlation function measures the similarity as a function of the transformation).\n\n References:\n Makadia, A., Geyer, C., & Daniilidis, K., Correspondence-free structure from motion, International Journal of Computer Vision, 75(3), 311–327 (2007).\n Kostelec, P. J., & Rockmore, D. N., Ffts on the rotation group, Journal of Fourier analysis and applications, 14(2), 145–179 (2008).\n Gutman, B., Wang, Y., Chan, T., Thompson, P. M., & Toga, A. W., Shape registration with spherical cross correlation, 2nd MICCAI workshop on mathematical foundations of computational anatomy (pp. 56–67) (2008).\n Rafaely B. Fundamentals of spherical array processing. Berlin: Springer; (2015).\n" ]
[ -1, -1, 8, 7, 9, -1, -1, -1, -1, -1 ]
[ -1, -1, 4, 3, 4, -1, -1, -1, -1, -1 ]
[ "iclr_2018_Hkbd5xZRb", "ryi-Q6_Xf", "iclr_2018_Hkbd5xZRb", "iclr_2018_Hkbd5xZRb", "iclr_2018_Hkbd5xZRb", "Bkv4qd3bG", "SJ3LYkFez", "B1gQIy9gM", "S1rz4yvGf", "iclr_2018_Hkbd5xZRb" ]
iclr_2018_S1CChZ-CZ
Ask the Right Questions: Active Question Reformulation with Reinforcement Learning
We frame Question Answering (QA) as a Reinforcement Learning task, an approach that we call Active Question Answering. We propose an agent that sits between the user and a black box QA system and learns to reformulate questions to elicit the best possible answers. The agent probes the system with, potentially many, natural language reformulations of an initial question and aggregates the returned evidence to yield the best answer. The reformulation system is trained end-to-end to maximize answer quality using policy gradient. We evaluate on SearchQA, a dataset of complex questions extracted from Jeopardy!. The agent outperforms a state-of-the-art base model, playing the role of the environment, and other benchmarks. We also analyze the language that the agent has learned while interacting with the question answering system. We find that successful question reformulations look quite different from natural language paraphrases. The agent is able to discover non-trivial reformulation strategies that resemble classic information retrieval techniques such as term re-weighting (tf-idf) and stemming.
accepted-oral-papers
This submission presents a novel way in which a neural machine reader could be improved, that is, by learning to reformulate a question specifically for the downstream machine reader. All the reviewers found it positive, and so do I.
train
[ "r10KoNDgf", "HJ9W8iheM", "Hydu7nFeG", "Hk9DKzYzM", "H15NIQOfM", "SJZ0UmdfM", "BkGlU7OMz", "BkXuXQufM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "This paper proposes active question answering via a reinforcement learning approach that can learn to rephrase the original questions in a way that can provide the best possible answers. Evaluation on the SearchQA dataset shows significant improvement over the state-of-the-art model that uses the original questions. \n\nIn general, the paper is well-written (although there are a lot of typos and grammatical errors that need to be corrected), and the main ideas are clear. It would have been useful to provide some more details and carry out additional experiments to strengthen the merit of the proposed model. \n\nEspecially, in Section 4.2, more details about the quality of paraphrasing after training with the multilingual, monolingual, and refined models would be helpful. Which evaluation metrics were used to evaluate the quality? Also, more monolingual experiments could have been conducted with state-of-the-art neural paraphrasing models on WikiQA and Quora datasets (e.g. see https://arxiv.org/pdf/1610.03098.pdf and https://arxiv.org/pdf/1709.05074.pdf). \n\nMore details with examples should be provided about the variants of AQA along with the oracle model. Especially, step-by-step examples (for all alternative models) from input (original question) to question reformulations to output (answer/candidate answers) would be useful to understand how each module/variation is having an impact towards the best possible answer/ground truth.\n\nAlthough experiments on SearchQA demonstrate good results, I think it would be also interesting to see the results on additional datasets e.g. MS MARCO (Nguyen et al., 2016), which is very similar to the SearchQA dataset, in order to confirm the generalizability of the proposed approach. \n\n---------------------------------------------\nThanks for revising the paper, I am happy to update my scores.", "This paper formulates the Jeopardy QA as a query reformulation task that leverages a search engine. In particular, a user will try a sequence of alternative queries based on the original question in order to find the answer. The RL formulation essentially tries to mimic this process. Although this is an interesting formulation, as promoted by some recent work, this paper does not provide compelling reasons why it's a good formulation. The lack of serious comparisons to baseline methods makes it hard to judge the value of this work.\n\nDetailed comments/questions:\n\t1. I am actually quite confused on why it's a good RL setting. For a human user, having a series of queries to search for the right answer is a natural process, but it's not natural for a computer program. For instance, each query can be viewed as different formulation of the same question and can be issued concurrently. Although formulated as an RL problem, it is not clear to me whether the search result after each episode has been used as the immediate environment feedback. As a result, the dependency between actions seems rather weak.\n\t2. I also feel that the comparisons to other baselines (not just the variation of the proposed system) are not entirely fair. For instance, the baseline BiDAF model has only one shot, namely using the original question as query. In this case, AQA should be allowed to use the same budget -- only one query. Another more realistic baseline is to follow the existing work on query formulation in the IR community. For example, 20 shorter queries generated by methods like [1] can be used to compare the queries created by AQA.\n\n[1] Kumaran & Carvalho. 
\"Reducing Long Queries Using Query Quality Predictors\". SIGIR-09\n\t\nPros:\n\t1. An interesting RL formulation for query reformulation\n\nCons:\n\t1. The use of RL is not properly justified\n\t2. The empirical result is not convincing that the proposed method is indeed advantageous \n\n---------------------------------------\n\nAfter reading the author response and checking the revised paper, I'm both delighted and surprised that the authors improved the submission substantially and presented stronger results. I believe the updated version has reached the bar and recommend accepting this paper. ", "This article clearly describes how they designed and actively trained 2 models for question reformulation and answer selection during question answering episodes. The reformulation component is trained using a policy gradient over a sequence-to-sequence model (original vs. reformulated questions). The model is first pre-trained using a bidirectional LSTM on multilingual pairs of sentences. A small monolingual bitext corpus is the uses to improve the quality of the results. A CNN binary classifier performs answer selection. \n\nThe paper is well written and the approach is well described. I was first skeptical by the use of this technique but as the authors mention in their paper, it seems that the sequence-to-sequence translation model generate sequence of words that enables the black box environment to find meaningful answers, even though the questions are not semantically correct. Experimental clearly indicates that training both selection and reformulation components with the proposed active scheme clearly improves the performance of the Q&A system. ", "We have summarized the comparison of the AQA agent in different modes versus the baselines in Table 1.", "We thank Reviewer 1 for the encouraging feedback!", "Thanks for your review and suggestions! We address each point below:\n\nOther datasets\nWe agree that it will be important to extend the empirical evaluation on new datasets. Our current experimental setup cannot be straightforwardly applied to MsMarco, unfortunately. Our environment (the BiDAF QA system) is an extractive QA system. However, MsMarco contains many answers (55%) that are not substrings of the context; even after text normalization, 36% are missing. We plan to investigate the use of generative answer models for the environment with which we could extend AQA to this data. \n\nParaphrasing quality\nRegarding stand-alone evaluation of the paraphrasing quality of our models, we ran several additional experiments inspired by the suggested work.\nWe focused on the relation between paraphrasing quality and QA quality. To tease apart the relationship between paraphrasing and reformulation for QA we evaluated 3 variants of the reformulator:\n\nBase-NMT: this is the model used to initialize RL training of the agent. Trained first on the multilingual U.N. corpus, then on the Paralex corpus.\nBase-NMT-NoParalex: is the model above trained solely on the multilingual U.N. corpus, without the Paralex monolingual corpus.\nBase-NMT+Quora: is the same as Base-NMT, additionally trained on the Quora duplicate question dataset.\n\nFollowing Prakash et al. (2016) we evaluated all models on MSCOCO, selecting one out of five captions at random from Val2014 and using the other 4 as references. We use beam search, as in the paper, to compute the top hypothesis and report uncased, moses-tokenized BLEU using John Clark's multeval. 
[github.com/jhclark/multeval]\nThe Base-NMT model performs at 11.4 BLEU (see Table 1 for the QA eval numbers). Base-NMT-NoParalex performs poorly at 5.0 BLEU. Limiting training to the multilingual data alone also degrades QA performance: the scores of the Top Hypothesis are at least 5 points lower in all metrics, CNN scores are 2-3 points lower for all metrics.\nBy training on additional monolingual data, the Base-NMT+Quora model BLEU score improves marginally to 11.6. End-to-end QA performance also improves marginally, the maximum delta with respect to Base-NMT under all conditions is +0.5 points, but the difference is not statistically significant. Thus, adding the Quora training does not have a significant effect. This might be due to the fact that most of the improvement is captured by training on the larger Paralex data set.\n\nImproving raw paraphrasing quality as well as reformulation fluency help AQA up to a point. However, they are only partially aligned with the main task, which is QA performance. The AQA-QR reformulator has a BLEU score of 8.6, well below both Base-NMT models trained on monolingual data. AQA-QR significantly outperforms all others in the QA task. Training the agent starting from the Base-NMT+Quora model yielded identical results as starting from Base-NMT.\n\nExamples\nWe have updated Appendix A in the paper with the answers corresponding to all queries, together with their F1 scores. We also added a few examples (Appendix B) where the agent is not able to identify the correct candidate reformulation, even if present in the candidate set. We also added an appendix (C) with example paraphrases from MSCOCO from the different models.\n\nPresentation\nWe spelled and grammar checked the manuscript.\n", "Thanks for your review, questions, and suggestions which we address below:\n\n1- RL formulation\nWe require RL (policy gradient) because (a) the reward function is non-differentiable, and (b) we are optimizing against a black box environment using only queries, i.e. no supervised query transformation data (query to query that works better for a particular QA system) is available.\nWithout RL we could not optimize these reformulations against the black-box environment to maximize expected answer quality (F1 score).\n\nRegarding the training process you are correct: in this work, the reformulations of the initial query are indeed issued concurrently, as shown in Figure 1. We note this when we introduce the agent in the first paragraph of Section 2; we say “The agent then generates a *set* of reformulations {q_i}” rather than a sequence. \n\nIn the last line of the conclusion, we comment that we plan to extend AQA to sequential reformulations which would then depend on the previous questions/answers also. \n\n2- Baseline comparisons\nWe computed an IR baseline following [Kumaran & Carvalho, 2009] as suggested. We implemented the candidate generation method (Section 4.3) of their system to generate subquery reformulations of the original query. We choose the reformulations from the term-level subsequences of length 3 to 6. We associate each reformulation with a graph, where the vertices are the terms and the edges are the mutual information between terms. We rank the reformulations by the average edge weights of the Maximum Spanning Trees of the corresponding graphs. We keep the top 20 reformulations, the same number as we keep for the AQA agent. 
Then, we train a CNN to score these reformulations to identify those with above-average F1, in exactly the same way we do for the AQA agent. As suggested, we then compare this method both in terms of choosing the single top hypothesis (1 shot), and ensemble prediction (choose from 20 queries).\nWe additionally compare AQA to the Base-NMT system in the same way. This is the pre-trained monolingual seq2seq model used to initialize the RL training. We evaluate the Base-NMT model's top hypothesis (1 shot) and in ensemble mode.\n\nWe find that the AQA agent outperforms all other methods both in 1-shot prediction (top hypothesis) and using CNNs to pick a hypothesis from 20. To verify that the difference in performance is statistically significant we ran a statistical test. The null hypothesis is always rejected (p<0.00001).\nAll results are summarized and discussed in the paper.\n\nPS - After reviewing the suggested paper, and related IR literature we took the opportunity to add an IR query quality metric, QueryClarity, to our qualitative analysis at the end of the paper, in the box plot. QueryClarity contributes to our conclusion. showing that the AQA agent learns to transform the initial reformulations (Base-NMT) into ones that have higher QueryClarity, in addition to having better tf-idf and worse fluency.", "We would like to thank the reviewers for their valuable comments. It took us as a few weeks to reply because we took the time to implement as much as possible of the feedback. We believe this has benefited the paper significantly. We have uploaded a new version of the pdf with the additional work and reply here to the specific comments in greater detail." ]
[ 7, 6, 8, -1, -1, -1, -1, -1 ]
[ 5, 4, 3, -1, -1, -1, -1, -1 ]
[ "iclr_2018_S1CChZ-CZ", "iclr_2018_S1CChZ-CZ", "iclr_2018_S1CChZ-CZ", "BkGlU7OMz", "Hydu7nFeG", "r10KoNDgf", "HJ9W8iheM", "iclr_2018_S1CChZ-CZ" ]
iclr_2018_rJTutzbA-
On the insufficiency of existing momentum schemes for Stochastic Optimization
Momentum based stochastic gradient methods such as heavy ball (HB) and Nesterov's accelerated gradient descent (NAG) method are widely used in practice for training deep networks and other supervised learning models, as they often provide significant improvements over stochastic gradient descent (SGD). Rigorously speaking, fast gradient methods have provable improvements over gradient descent only for the deterministic case, where the gradients are exact. In the stochastic case, the popular explanations for their wide applicability is that when these fast gradient methods are applied in the stochastic case, they partially mimic their exact gradient counterparts, resulting in some practical gain. This work provides a counterpoint to this belief by proving that there exist simple problem instances where these methods cannot outperform SGD despite the best setting of its parameters. These negative problem instances are, in an informal sense, generic; they do not look like carefully constructed pathological instances. These results suggest (along with empirical evidence) that HB or NAG's practical performance gains are a by-product of minibatching. Furthermore, this work provides a viable (and provable) alternative, which, on the same set of problem instances, significantly improves over HB, NAG, and SGD's performance. This algorithm, referred to as Accelerated Stochastic Gradient Descent (ASGD), is a simple to implement stochastic algorithm, based on a relatively less popular variant of Nesterov's Acceleration. Extensive empirical results in this paper show that ASGD has performance gains over HB, NAG, and SGD. The code for implementing the ASGD Algorithm can be found at https://github.com/rahulkidambi/AccSGD.
accepted-oral-papers
The reviewers unanimously recommended that this paper be accepted, as it contains an important theoretical result that there are problems for which heavy-ball momentum cannot outperform SGD. The theory is backed up by solid experimental results, and the writing is clear. The reviewers were originally concerned that the paper was missing a discussion of some related algorithms (ASVRG and ASDCA), but these concerns were addressed during the discussion.
train
[ "Sy3aR8wxz", "Sk0uMIqef", "Sy2Sc4CWz", "SkEtTX6Xz", "BJqEtWdMf", "SyL2ub_fM", "rkv8dZ_fz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "I like the idea of the paper. Momentum and accelerations are proved to be very useful both in deterministic and stochastic optimization. It is natural that it is understood better in the deterministic case. However, this comes quite naturally, as deterministic case is a bit easier ;) Indeed, just recently people start looking an accelerating in stochastic formulations. There is already accelerated SVRG, Jain et al 2017, or even Richtarik et al (arXiv: 1706.01108, arXiv:1710.10737).\n\nI would somehow split the contributions into two parts:\n1) Theoretical contribution: Proposition 3 (+ proofs in appendix)\n2) Experimental comparison.\n\nI like the experimental part (it is written clearly, and all experiments are described in a lot of detail).\n\nI really like the Proposition 3 as this is the most important contribution of the paper. (Indeed, Algorithms 1 and 2 are for reference and Algorithm 3 was basically described in Jain, right?). \n\nSignificance: I think that this paper is important because it shows that the classical HB method cannot achieve acceleration in a stochastic regime.\n\nClarity: I was easy to read the paper and understand it.\n\nFew minor comments:\n1. Page 1, Paragraph 1: It is not known only for smooth problems, it is also true for simple non-smooth (see e.g. https://link.springer.com/article/10.1007/s10107-012-0629-5)\n2. In abstract : Line 6 - not completely true, there is accelerated SVRG method, i.e. the gradient is not exact there, also see Recht (https://arxiv.org/pdf/1701.03863.pdf) or Richtarik et al (arXiv: 1706.01108, arXiv:1710.10737) for some examples where acceleration can be proved when you do not have an exact gradient.\n3. Page 2, block \"4\" missing \".\" in \"SGD We validate\"....\n4. Section 2. I think you are missing 1/2 in the definition of the function. Otherwise, you would have a constant \"2\" in the Hessian, i.e. H= 2 E[xx^T]. So please define the function as f_i(w) = 1/2 (y - <w,x_i>)^2. The same applies to Section 3.\n5. Page 6, last line, .... was downloaded from \"pre\". I know it is a link, but when printed, it looks weird. \n\n", "I wonder how the ASGD compares to other optimization schemes applicable to DL, like Entropy-SGD, which is yet another algorithm that provably improves over SGD. This question is also valid when it comes to other optimization schemes that are designed for deep learning problems. For instance, Entropy-SGD and Path-SGD should be mentioned and compared with. As a consequence, the literature analysis is insufficient. \n\nAuthors provided necessary clarifications. I am raising my score.\n\n\n\n\n", "I only got access to the paper after the review deadline; and did not have a chance to read it until now. Hence the lateness and brevity.\n\nThe paper is reasonably well written, and tackles an important problem. I did not check the mathematics. \n\nBesides the missing literature mentioned by other reviewers (all directly relevant to the current paper), the authors should also comment on the availability of accelerated methods inn the finite sum / ERM setting. There, the questions this paper is asking are resolved, and properly modified stochastic methods exist which offer acceleration over SGD (and not through minibatching). This paper does not comment on these developments. 
Look at accelerated SDCA (APPROX, ASDCA), accelerated SVRG (Katyusha) and so on.\n\nProvided these changes are made, I am happy to suggest acceptance.\n\n\n\n", "We group the list of changes made to the manuscript based on suggestions of reviewers:\n\nAnonReviewer 3:\n- Added a paragraph on accelerated and fast methods for finite sums and their implications in the deep learning context. (in related work)\n\nAnonReviewer 2:\n- Included reference on Acceleration for simple non-smooth problems. (in page 1)\n- Included reference on Accelerated SVRG and other suggested references. (in related work)\n- Fixed citations for pytorch/download links and fixed typos.\n\nAnonReviewer 1:\n- Added a paragraph on entropic sgd and path normalized sgd and their complimentary nature compared to this work's message (in related work section).\n\nOther changes:\n- In the related work: background about Stochastic Heavy Ball, adding references addressing reviewer feedback.\n- Removed statement on generalization/batch size. (page 2)\n- Fixed minor typos. (page 3)\n- Added comment about NAG lower bound conjecture. (page 4, below proposition 3)", "Thanks for the references, we have included them in the paper and added a paragraph in Section 6 providing detailed comparison and key differences that we summarize below: \n \nASDCA, Katyusha, accelerated SVRG: these methods are \"offline\" stochastic algorithms that is they require multiple passes over the data and require multiple rounds of full gradient computation (over the entire training data). In contrast, ASGD is a single pass algorithm and requires gradient computation only a single data point at a time step. In the context of deep learning, this is a critical difference, as computing gradient over entire training data can be extremely slow. See Frostig, Ge, Kakade, Sidford ``Competing with the ERM in a single pass\" (https://arxiv.org/pdf/1412.6606.pdf) for a more detailed discussion on online vs offline stochastic methods. \n\nMoreover, the rate of convergence of the ASDCA depend on \\sqrt{\\kappa n} while the method studied in this paper has \\sqrt{\\kappa \\tilde{kappa}} dependence where \\tilde{kappa} can be much smaller than n. \n\n\n\n\n\n\n\n\n\n", "Thanks for your comments. \n\nWe have cited Entropy SGD and Path SGD papers and discuss the differences in Section 6 (related works). However, both the methods are complementary to our method. \n\nEntropy SGD adds a local strong convexity term to the objective function to improve generalization. However, currently we do not understand convergence rates or generalization performance of the technique rigorously, even for convex problems. The paper proposes to use SGD to optimize the altered objective function and mentions that one can use SGD+momentum as well (below algorithm box on page 6). Naturally, one can use the ASGD method as well to optimize the proposed objective function in the paper. \n\nPath SGD uses a modified SGD like update to ensure invariance to the scale of the data. Here again, the main goal is orthogonal to our work and one can easily use ASGD method in the same framework. \n", "Thanks a lot for insightful comments. We have updated the paper taking into account several of your comments. We will make more updates according to your suggestions. \n\n\nPaper organization: we will try to better organize the paper to highlight the contributions. 
\nProposition 3's importance: yes, your assessment is spot on.\n\nMinor comment 1,2: Thanks for pointing the minor mistake, we have updated the corresponding lines. Papers such as Accelerated SVRG, Recht et al. are offline stochastic accelerated methods. The paper of Richtarik (arXiv:1706.01108) deals with solving consistent linear systems in the offline setting; (arXiv:1710.10737) is certainly relevant and we will add more detailed comparison with this line of work. \nMinor comment 3, 5: thanks for pointing out the typos. They are fixed. \nMinor comment 4: Actually, the problem is a discrete problem where one observes one hot vectors in 2-dimensions, each of the vectors can occur with probability 1/2. So this is the reason why the Hessian does not carry an added factor of 2.\n\n\n" ]
[ 7, 7, 8, -1, -1, -1, -1 ]
[ 4, 3, 5, -1, -1, -1, -1 ]
[ "iclr_2018_rJTutzbA-", "iclr_2018_rJTutzbA-", "iclr_2018_rJTutzbA-", "iclr_2018_rJTutzbA-", "Sy2Sc4CWz", "Sk0uMIqef", "Sy3aR8wxz" ]
iclr_2018_Hk6kPgZA-
Certifying Some Distributional Robustness with Principled Adversarial Training
Neural networks are vulnerable to adversarial examples and researchers have proposed many heuristic attack and defense mechanisms. We address this problem through the principled lens of distributionally robust optimization, which guarantees performance under adversarial input perturbations. By considering a Lagrangian penalty formulation of perturbing the underlying data distribution in a Wasserstein ball, we provide a training procedure that augments model parameter updates with worst-case perturbations of training data. For smooth losses, our procedure provably achieves moderate levels of robustness with little computational or statistical cost relative to empirical risk minimization. Furthermore, our statistical guarantees allow us to efficiently certify robustness for the population loss. For imperceptible perturbations, our method matches or outperforms heuristic approaches.
accepted-oral-papers
This paper attracted strong praise from the reviewers, who felt that it was of high quality and originality. The broad problem that is being tackled is clearly of great importance. This paper also attracted the attention of outside experts, who were more skeptical of the claims made by the paper. The technical merits do not seem to be in question, but rather, their interpretation/application. The perception by a community as to whether an important problem has been essentially solved can affect the choices made by other reviewers when they decide what work to pursue themselves, evaluate grants, etc. It's important that claims be conservative and highlight the ways in which the present work does not fully address the broader problem of adversarial examples. Ultimately, it has been decided that the paper will be of great interest to the community. The authors have also been entrusted with the responsibility to consider the issues raised by the outside expert (and then echoed by the AC) in their final revisions. One final note: In their responses to the outside expert, the authors several times remark that the guarantees made in the paper are, in form, no different from standard learning-theoretic claims: "This criticism, however, applies to many learning-theoretic results (including those applied in deep learning)." I don't find any comfort in this statement. Learning theorists have often focused on the form of the bounds (sqrt(m) dependence and, say, independence from the # of weights) and then they resort to empirical observations of correlation to demonstrate that the value of the bound is predictive for generalization, because the bounds are often meaningless ("vacuous") when evaluated on real data sets. (There are some recent examples bucking this trend.) In a sense, learning theorists have gotten off easy. Adversarial examples, however, concern security, and so there is more at stake. The slack we might afford learning theorists is not appropriate in this new context. I would encourage the authors to clearly explain any remaining work that needs to be done to move from "good enough for learning theory" to "good enough for security". The authors promise to outline important future work / open problems for the community. I definitely encourage this.
train
[ "S1pdil8Sz", "rkn74s8BG", "HJNBMS8rf", "rJnkAlLBf", "H1g0Nx8rf", "rklzlzBVf", "HJ-1AnFlM", "HySlNfjgf", "rkx-2-y-f", "rkix5PTQf", "rJ63YwTQM", "HyFBKPp7z", "Hkzmdv67G", "rJBbuPTmz", "Hk2kQP3Qz", "BJVnpJPXM", "H1wDpaNbM" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "public", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public", "public", "public" ]
[ "We just received an email notification abut this comment a few minutes ago and somehow did not receive any notification of the original comment uploaded on 21 January. We will upload a response later today.", "Apologies for the (evidently) tardy response. We have now uploaded a response to the area chair's comments (see below).", "Thank you for the detailed follow-up. \n\nWe will make the point that we deal with imperceptible changes clearer in the paper. We had emphasized that our work is motivated by imperceptible adversarial perturbations from the second paragraph of the paper. We will make this point even clearer and quantify our statements on performance so that there is no confusion that we mainly consider imperceptible changes.\n\nAs we have noted in our previous response, we agree with you in that robustness to larger perturbations is an important research direction. The point we made in our original response is that infinity-norms may not be the most appropriate norm to consider in this perceptible attack setting. For example, a 1-norm-constrained adversary can change a few pixels in a very meaningful way--with infinity-norms approaching 1--which may be a more suitable model for a perceptible adversary. There are a number of concurrent works on this topic that we believe could lead to more robust learning systems. \n\nIt is still open whether distributionally robust algorithms (empirically) allow hedging against large adversarial perturbations. At this point, we believe it would be imprudent to call this class of methods “inherently restricted” to the small perturbation regime; indeed, any heuristic method (such as one based on projected gradient descent) has the same restrictions, at least in terms of rigorous guarantees. A more thorough study—on more diverse datasets, model classes and hyperparameter settings—should be conducted in order to draw any meaningful conclusions. We hope to contribute to this effort in the future but we invite others as well, since we believe this is an important question for the community to answer.\n\nOur certificate of robustness given in Theorem 3 is efficiently computable for small values of rho, or equivalently, for imperceptible attacks. Hence, this data-dependent certificate provides a upper bound on the worst-case loss so that you are guaranteed to do no worse than this number with high probability. For the achieved level of robustness (rho hat in our notation), our bounds do imply that we are robust to perturbation budgets of this size. Hence, we would argue that Theorem 3 is indeed a flavor of result that satisfies the desiderata you described.\n\nThere are limitations, and we hope that subsequent work will improve our learning guarantees with a better dependence on model size. This criticism, however, largely applies to most learning-theoretic results applied to deep learning.\n\nAs we mentioned in our introduction, we agree that recent advances in verification techniques for deep learning are a complementary and important research direction for achieving robust learning systems. Our understanding of these techniques is that they currently have prohibitive computational complexity, even on small datasets such as MNIST. Our results complement these approaches by providing a weaker statistical guarantee with computational effort more comparable to the vanilla training times.\n\nThe motivation of this paper comes from the fact that formal guarantees on arbitrary levels of robustness is NP-hard. 
We study the regime of small to moderate levels of robustness to provide guarantees for this regime.", "Sorry for the rush created by this likely OpenReview bug. A response today would be most appreciated!", "You have been contacted now by the Area Chair and the Program Chair and asked to respond to comments by the Area Chair. It is imperative that you respond.", "[I put my reply here as the threads below are now a bit hard to follow.]\n\nThank you for responding to my comments and making the effort to provide more data. This indeed helps me understand this work better.\n\nI agree that studying the regime of small adversarial perturbation budget epsilon is a very valid research goal. I think, however, that it is important to explicitly mention in the paper that this is the target. Especially, as the proposed methods seem to be inherently restricted to apply to only such a small epsilon regime. \n\nI am not sure though that I agree with the argument why the regime of larger values of epsilon might be less interesting. Yes, some of the larger perturbations will be clearly visible to a human, but some (e.g., the ones that correspond to a change of the background color or its pattern) will not - and we still would like to be robust to them. After all, security guarantees are about getting \"for all\", not \"for some\" guarantees. \n\nNow, regarding being explicit about the constants in the bounds, I agree that many optimization and statistical learning guarantees do not provide not provide explicit constants. However, I think the situation in the context considered here is fundamentally different. \n\nAfter all, for example, in the context of generalization bounds, we always have a meaningful way of checking if a given bound \"triggered\" for a given model and dataset by testing its performance on a validation/test set. When we talk about robustness guarantee, the whole point is to have it hold even against attacks that we are not able to produce ourselves (but the adversary might). Then, we really need a very concrete guarantee of the form \"(With high probability) the model classifies correctly 90% of the test set against perturbation budget of epsilon <= 0.1”. \n\nIn the light of this, providing a guarantee of the form \"(With high probability) the model correctly classifies 90% of the test set against perturbation budget of some positive epsilon\", which is what the proposed guarantees seem to provide, is somewhat less meaningful. (One could argue that, after all, there is always some positive epsilon for which the model is robust.)\n\nIt might be worth noting that, e.g., for MNIST, we currently are able to deliver guarantees of the former (explicit) type. For instance, there is a recent work of Kolter and Wong (https://arxiv.org/abs/1711.00851). Although they provide such guarantees via verification techniques and not by proving an explicit generalization bound.\n\nFinally, I am not sure how much bearing the formal NP-hardness of certifying the robustness has here. (I assume you are referring to the result in Appendix B.) Could you elaborate?", "This paper proposes a principled methodology to induce distributional robustness in trained neural nets with the purpose of mitigating the impact of adversarial examples. The idea is to train the model to perform well not only with respect to the unknown population distribution, but to perform well on the worst-case distribution in some ball around the population distribution. 
In particular, the authors adopt the Wasserstein distance to define the ambiguity sets. This allows them to use strong duality results from the literature on distributionally robust optimization and express the empirical minimax problem as a regularized ERM with a different cost. The theoretical results in the paper are supported by experiments.\n\nOverall, this is a very well-written paper that creatively combines a number of interesting ideas to address an important problem.", "This paper applies recently developed ideas in the literature of robust optimization, in particular distributionally robust optimization with Wasserstein metric, and showed that under this framework for smooth loss functions when not too much robustness is requested, then the resulting optimization problem is of the same difficulty level as the original one (where the adversarial attack is not concerned). I think the idea is intuitive and reasonable, the result is nice. Although it only holds when light robustness are imposed, but in practice, this seems to be more of the case than say large deviation/adversary exists. As adversarial training is an important topic for deep learning, I feel this work may lead to promising principled ways for adversarial training. ", "In this very good paper, the objective is to perform robust learning: to minimize not only the risk under some distribution P_0, but also against the worst case distribution in a ball around P_0.\n\nSince the min-max problem is intractable in general, what is actually studied here is a relaxation of the problem: it is possible to give a non-convex dual formulation of the problem. If the duality parameter is large enough, the functions become convex given that the initial losses are smooth. \n\nWhat follows are certifiable bounds for the risk for robust learning and stochastic optimization over a ball of distributions. Experiments show that this performs as expected, and gives a good intuition for the reasons why this occurs: separation lines are 'pushed away' from samples, and a margin seems to be increased with this procedure.", "Thank you for your interest in our paper. We appreciate your detailed feedback.\n\n1. This is a fair criticism; it seems to apply generally to most learning-theoretic guarantees on deep learning (though see the recent work of Dziugaite and Roy, https://arxiv.org/abs/1703.11008 and Bartlett, Foster, Telgarsky https://arxiv.org/pdf/1706.08498.pdf). We believe that our statistical guarantees in Theorems 3 and 4 are steps towards a principled understanding of adversarial training. Replacing our current covering number arguments with more intricate notions such as margin based-bounds (Bartlett et al. 2017)) would extend the scope of our theoretical guarantees; as Bartlett et al. provide covering number bounds, it seems likely that we could massage them into applying in Theorem 3 (Eqs. (11)-(12)). This is a meaningful future research direction.\n\n\n2. In Figure 2, we plot our certificate of robustness on two datasets (omitting the statistical error term) and observe that our data-dependent upper bound on the worst-case performance is reasonable. This roughly implies that our adversarial training procedure generalizes, allowing us to learn to defend against attacks on the test set.\n\n“In the experimental sections, good performance is achieved at test time. But it would be more convincing if the performance for training data is also shown. The current experiments don't seem to evaluate generalization of the proposed WRM. 
Furthermore, analysis of other classification problems (cifar10, cifar 100, imagenet) is highly desired.“\n\nThese are both great suggestions. We are currently working on experiments with subsets of Imagenet and will include them in a revision (soon we hope).\n\n3. Our adversarial training algorithm has intimate connections with other previously proposed heuristics. Our main theoretical contribution is that for small adversarial perturbations, we can show both computational and statistical guarantees for our procedure. More specifically, the computational guarantees for our algorithm are indeed based on the curvature of the L2-norm; provably efficient computation of attacks based on infinity-norms remains open.", "Thank you for your interest in our paper. We appreciate the detailed feedback and probing questions.\n\nUpon your suggestions during our meeting at NIPS, we have included a more extensive empirical evaluation of our algorithm. Most notably, we trained and tested our method—alongside other baselines, including [MMSTV17]—on large values of adversarial budgets. We further compared our algorithm trained against L2-norm Lagrangian attacks against other heuristic methods trained against infinity-norm attacks. Lastly, we proposed a (heuristic) proximal variant of our algorithm that learns to defend against infinity-norm attacks. See Appendices A.4, A.5, and E for the relevant exposition and figures.\n\n1. Empirical Evaluation on Large Adversarial Budgets\n\nOur primary motivation of this paper is to provide a theoretically principled algorithm that can defend against small adversarial perturbations. In particular, we are concerned with provable procedures against small adversarial perturbations that can fool deep nets but are imperceptible to humans. Our main finding in the original empirical experiments in Section 4 was that for such small adversarial perturbations, our principled algorithm matches or outperforms other existing heuristics. (See also point 2 below.)\n\nThe adversarial budget epsilon = .3 in the infinity-norm you suggest allows attacks that are highly visible to the human eye. For example, one can construct hand-tuned perturbations that look significantly different from the original image (see https://www.dropbox.com/sh/c6789iwhnooz5po/AABBpU_mg-FRRq7PT1LzI0GAa?dl=0). Defending against such attacks is certainly interesting, but was not our main goal. This probably warrants a longer discussion, but it is not clear to us that infinity-norm-bounded attacks are most appropriate if one allows perceptible image modifications. An L1-budgeted adversary might be able to make small changes in some part of the image, which yields a different set of attacks.\n\nIn spite of the departure of large perturbations from our nominal goal of protection against small changes, we test our algorithm on attacks with large adversarial budgets in Appendix A.4. In this case, our algorithm is a heuristic—as are other methods for large adversarial budgets—but we nevertheless match the performance of other methods (FGM, IFGM, PGM) trained against L2-norm adversaries.\n\nSince our computational guarantees are based on strong concavity w.r.t. Lp-norms for p \\in (1, 2], our robustly-fitted network defends against L2-norm attacks. Per the suggestion to compare against networks trained to defend against infinity-norm attacks—and we agree, this is an important comparison that we did not perform originally (though we should have)—we compared our method with other heuristics in Appendix A.5.1. 
On imperceptible L2 and infinity-norm attacks, our algorithm outperforms other heuristics trained to defend against infinity-norm attacks (Figures 11 and 12). On larger attacks, particularly infinity-norm attacks, we observe that other heuristics trained on infinity-norm attacks outperform our method (Figure 12). In this sense, the conclusions we reached from the main figures in our paper—where we considered imperceptible perturbations—are still valid: we match or outperform other heuristic methods for small perturbations.\n\n(Continued in Part II)", "2. Theoretical Guarantees\n\nThe motivation for our work is that computing the worst-case perturbation of a deep network under norm-constraints is typically intractable. As we state in the introduction, we simply give up on computing worst-case perturbations at arbitrary budget levels, instead considering small adversarial perturbations. Our theoretical guarantees are concerned with imperceptible changes; we give computational and statistical guarantees for such small (adversarial) perturbations. This is definitely a limit of the approach; given that it is NP hard to certify robustness for larger perturbations this may be challenging to get around.\n\nOur main theoretical guarantee is the certificate of robustness—a data-dependent upper bound on the worst-case performance—given in Theorem 3. This upper bound applies in general, although its efficient computation is only guaranteed for large penalty parameters \\gamma and smooth losses. Similarly, as you note, Theorems 2 and 4 only apply in such regimes. To address this, we augment our theoretical guarantees for small adversarial budgets with empirical evaluations in Section 4 and Appendix A. We empirically checked if our level of \\gamma = .385 (=.04 * C_2) is above the estimated smoothness parameter at the adversarially trained model and observed that this condition is satisfied on 98% of the training data points.\n\nOur guarantees indeed depend on the problem-dependent smoothness parameter. As with most optimization and statistical learning guarantees, this value is often unknown. This limitation applies to most learning-theoretic results, and we believe that being adaptive to such problem-dependent constants is a meaningful future research direction. With that said, it seems likely (though we have not had time to verify this) that the recent work of Bartlett et al. (https://arxiv.org/pdf/1706.08498.pdf) should apply--it provides covering number bounds our Theorem 3 (Eq. (11-12)) can use.\n\nWe hope that our theoretical guarantees are a step towards understanding the performance of these adversarial training procedures. Gaps still remain; we hope future work will close this gap.", "Thank you for bringing our attention to Roy et al. (2017). In Section 4.3, we adapted our adversarial training algorithm in the supervised learning setting to reinforcement learning; this approach shares similar motivations as Roy et al. (2017)—and more broadly, the robust MDP literature—where we also solve approximations of the worst-case Bellman equation. Compared to our Wasserstein ball, Roy et al. (2017) uses more simple and tractable worst-case regions. While they give convergence guarantees for their algorithm, the empirical performance of these different worst-case regions remains open.\n\nAnother key difference in our experiments is that we assumed access to the simulator for updating the underlying state. This allows us to explore bad regions better. 
Nevertheless, our adversarial state update in Eqn (20) can be replaced with an adversarial reward update for settings where the simulator cannot be accessed.", "We thank the reviewers for their time and positive feedback. We will use the comments and suggestions to improve the quality and presentation the paper. In addition to cleaning up our exposition, we added some content to make our main points more clear. We address these main revisions below.\n\nOur formulation (2) is general enough to include a number of different adversarial training scenarios. In Section 2 (and more thoroughly in Appendix D), we detail how our general theory can be modified in the supervised learning setting so that we learn to defend against adversarial perturbations to only the feature vectors (and not the labels). By suitably modifying the cost function that defines the Wasserstein distance, our formulation further encompasses other variants such as adversarial perturbations only to a fixed small region of an image.\n\nWe emphasize that our certificate of robustness given in Theorem 3 applies for any level of robustness \\rho. Our results imply that the output of our principled adversarial training procedure has worst-case performance no worse than this data-dependent certificate. Our certificate is efficiently computable, and we plot it in Figure 2 for our experiments. We see that in practice, the bound indeed gives a meaningful performance guarantee against attacks on the unseen test sets.\n\nWhile the primary focus of our paper is on providing provable defenses against imperceptible adversarial perturbations, we supplement our previous results with a more extensive empirical evaluation. In Appendix A.4, we augment our results by evaluating performance against L2-norm adversarial attacks with larger adversarial budgets (higher values of \\rho or \\epsilon). Our method also becomes a heuristic for such large values of adversarial budgets, but we nevertheless match the performance of other methods (FGM, IFGM, PGM) trained against L2-norm adversaries. In Appendix A.5.1, we further compare our method——which is trained to defend against L2-norm attacks——with other adversarial training algorithms trained against inf-norm attacks. We also propose a new (heuristic) proximal algorithm for solving our Lagrangian problem with inf-norms, and test its performance against other methods in Appendix A.5.2. In both sections, we observe that our method is competitive with other methods against imperceptible adversarial attacks, and performance starts to degrade as the attacks become visible to the human eye.\n\nAgain, we appreciate the reviewers' close reading and thoughtful comments.", "The problems are very well formulated (although only the L2 case is discussed). Identifying a concave surrogate in this mini-max problem is illuminating. The interplay between optimal transport, robust statistics, optimization and learning theory make the work a fairly thorough attempt at this difficult problem. Thanks to the authors for turning many intuitive concepts into rigorous maths. There are some potential concerns, however: \n\n1. The generalization bounds in THM 3, Cor 1, THM 4 for deep neural nets appear to be vacuous, since they scale like \\sqrt (d/n), but d > n for deep learning. This is typical, although such generalization bounds are not common in deep adversarial training. So establishing such bounds is still interesting.\n\n2. 
Deep neural nets generalize well in practice, despite the lack of non-vacuous generalization bounds. Does the proposed WRM adversarial training procedure also generalize despite the vacuous bounds? \n\nIn the experimental sections, good performance is achieved at test time. But it would be more convincing if the performance for training data is also shown. The current experiments don't seem to evaluate generalization of the proposed WRM. Furthermore, analysis of other classification problems (cifar10, cifar 100, imagenet) is highly desired. \n\n3. From an algorithmic viewpoint, the change isn't drastic. It appears that it controls the growth of the loss function around the L2 neighbourhood of the data manifold (thanks to the concavity identified). Since L2 geometry has good symmetry, it makes the decision surface more symmetrical between data (Fig 1). \n\nIt seems to me that this is the reason for the performance gain at test time, and the size of such \\epsilon tube is the robust certificate. So it is unclear how much success is due to the generalization bounds claimed. \n\nI think there is enough contribution in the paper, but I share the opinion of Aleksander Madry, and would like to be corrected for missing some key points.", "Developing principled approaches to training adversarially robust models is an important (and difficult) challenge. This is especially the case if such an approach is to offer provable guarantees and outperform state of the art methods. \n\nHowever, after reading this submission, I am confused by some of the key claims and find them to be inaccurate and somewhat exaggerated. In particular, I believe that the following points should be addressed and clarified:\n\n1. The authors claim their methods match or outperform existing methods. However, their evaluations seem to miss some key baselines and parameter regimes. \n \nFor example, when reporting the results for l_infty robustness - a canonical evaluation setting in most previous work - the authors plot (in Figure 2b) the robustness only for the perturbations whose size eps (as measured in the l_infty norm) is between 0 and 0.2. (Note that in Figure 2b the x-axis is scaled as roughly 2*eps.) However, in order to properly compare against prior work, one needs to be able to see the scaling for larger perturbations.\n\nIn particular, [MMSTV’17] https://arxiv.org/abs/1706.06083 gives a model that exhibits high robustness even for perturbations of l_infty size 0.3. What robustness does the approach proposed in this work offer in that regime? \n\nAs I describe below, my main worry is that the theorems in this work only apply for very small perturbations (and, in fact, this seems to be an inherent limitation of the whole approach). Hence, it would be good to see if this is true in practice as well. \nIn particular, Figure 2b suggests that this method will indeed not work for larger perturbations. I thus wonder in what sense the presented results outperform/match previous work?\n\nAfter a closer look, it seems that this discrepancy occurs because the authors are reproducing the results of [MMSTV’17] using l_2 based adversarial training. [MMSTV’17] uses l_infity based training and achieves much better results than those reported in this submission. This artificially handicaps the baseline from [MMSTV’17]. That is, there is a significantly better baseline that is not reflected in Figure 2b. I am not sure why the authors decided to do that.\n\n2. 
It is hard to properly interpret what actual provable guarantees the proposed techniques offer. More concretely, what is the amount of perturbation that models trained using these techniques are provably robust to? \n\nBased on the presented theorems, it is unclear why they should yield any non-vacuous generalization bounds. \n\nIn particular, as far as I can understand, there might be no uniform bound on the amount of perturbation that the trained model will be robust to. This seems to be so as the provided guarantees (see Theorem 4) might give different perturbation resistance for different regions of the underlying distribution. In fact, it could be that for a significant fraction of points we have a (provable) robustness guarantee only for vanishingly small perturbations. \n\nMore precisely, note that the proposed approach uses adversarial training that is based on a Lagrangian formulation of finding the worst case perturbation, as opposed to casting this primitive as optimization over an explicitly defined constraint set. These two views are equivalent as long as one has full flexibility in setting the Lagrangian penalization parameter gamma. In particular, for some instances, one needs to set gamma to be *small enough*, i.e., sufficiently small so as it does not exclude norm-eps vectors from the set of considered perturbations. (Here, eps denotes the desired robustness measured in a specific norm such as l_infty, i.e., the prediction of our model should not change under perturbations of magnitude up to eps.)\n\nHowever, the key point of the proposed approach is to ensure that gamma is always set to be *large enough* so as the optimized function (i.e., the loss + the Lagrangian penalization) becomes concave (and thus provably tractable). Specifically, the authors need gamma to be large enough to counterbalance the (local) smoothness parameter of the loss function. \n\nThere seems to be no global (and sufficiently small) bound on this smoothness and, as a result, it is unclear what is the value of the eps-based robustness guarantee offered once gamma is set to be as large as the proposed approach needs it to be.\n\nFor the same reason (i.e., the dependence on the smoothness parameter of the loss function that is not explicitly well bounded), the provided generalization bounds - and thus the resulting robustness guarantees - might be vacuous for actual deep learning models. \n\nIs there something I am missing here? If not, what is the exact nature of the provable guarantees that are offered in the proposed work?\n", "Very interesting work! I was wondering how the robust MDP/RL setup compares to http://papers.nips.cc/paper/6897-reinforcement-learning-under-model-mismatch.pdf ? " ]
[ -1, -1, -1, -1, -1, -1, 9, 9, 9, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, -1, -1, -1, 5, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "H1g0Nx8rf", "rJnkAlLBf", "rklzlzBVf", "S1pdil8Sz", "iclr_2018_Hk6kPgZA-", "rJBbuPTmz", "iclr_2018_Hk6kPgZA-", "iclr_2018_Hk6kPgZA-", "iclr_2018_Hk6kPgZA-", "Hk2kQP3Qz", "BJVnpJPXM", "BJVnpJPXM", "H1wDpaNbM", "iclr_2018_Hk6kPgZA-", "iclr_2018_Hk6kPgZA-", "iclr_2018_Hk6kPgZA-", "iclr_2018_Hk6kPgZA-" ]
iclr_2018_HktK4BeCZ
Learning Deep Mean Field Games for Modeling Large Population Behavior
We consider the problem of representing collective behavior of large populations and predicting the evolution of a population distribution over a discrete state space. A discrete time mean field game (MFG) is motivated as an interpretable model founded on game theory for understanding the aggregate effect of individual actions and predicting the temporal evolution of population distributions. We achieve a synthesis of MFG and Markov decision processes (MDP) by showing that a special MFG is reducible to an MDP. This enables us to broaden the scope of mean field game theory and infer MFG models of large real-world systems via deep inverse reinforcement learning. Our method learns both the reward function and forward dynamics of an MFG from real data, and we report the first empirical test of a mean field game model of a real-world social media population.
accepted-oral-papers
The reviewers are unanimous in finding the work in this paper highly novel and significant. They have provided detailed discussions to back up this assessment. The reviewer comments surprisingly included a critique that "the scientific content of the work has critical conceptual flaws" (!). However, the author rebuttal persuaded the reviewers that the concerns were largely addressed.
val
[ "BkGA_x3SG", "ByGPUUYgz", "rJLBq1DVM", "S1PF1UKxG", "rJBLYC--f", "BycoZZimG", "HyRrEDLWG", "SJJDxd8Wf", "r1D9GPUbf" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "We appreciate your suggestions for further improving the precision of our language, and we understand the importance of doing so for the work to be useful to researchers in collective behavior. \n\nWe agree with most of your suggestions, and we will make all necessary edits for the final version of the paper if accepted:\n\n1. Title. Since we present our work as a method for representation and prediction, and the aim was not to argue for or against the existence of group optimization, we will avoid wording that may be construed as such from the title.\n\n2. Using causal language. It was our mistake to overlook this in the abstract in the first revision round. We will make a more thorough review of the whole text.\n\n3. We used the phrase ''single-agent'' when describing the MDP, to emphasize the shift in viewpoint from MFG to MDP. MFG focused on the transition vector P_i of people in a discrete state i, along with separate values V_i, while the constructed MDP treats the transition matrix P and aggregated value V(pi) as single entities. However, we agree with your remark that the qualifier ''single-agent'' is redundant and not entirely accurate, given the fact (as you pointed out) that cooperative multi-agent games with global rewards can be solved by finding a single centralized control. \n\n4. There certainly needs to be more work in ''data-driven MFG'', in order to understand the connection between a learned reward function and the motivations behind real individual actions, if such connection exists. At the expense of being more verbose, we can clarify the quoted statements. By giving a framework for learning a reward from data, we only take an early step to show that there are such questions to be answered.\n\n5. Regarding the ''two approximate descriptions\", we simply meant to summarize: an MFG is a model of an equilibrium resulting from individual optimization, an MDP has an optimal trajectory, we showed their equivalence in a special case, and this enabled learning a model with high accuracy. We thought that the word ''description'' was already far enough from the language used in physics (e.g. 'momentum _is_ conserved' as opposed to ''the quantity called 'momentum' is conserved in our descriptive model''; of course, they are entirely justified in speaking that way), but we can qualify it further.\n\n6. We will make sure to validate on synthetic MFG in future work, to make a stronger case.\n\nThe linked Nature letter is interesting and pertinent. In the case of preferential attachment, there seems to be a choice between two classes of mechanisms that can reproduce observations, one solely based on randomization, the other involving a notion of optimization and agency. In our case, the discussion is centered on how to describe a reduction from a model of equilibrium among individual actions, to a centralized control problem. \n\nThank you for the references to irrational crowd behavior. Economics and finance are some of the early motivators for MFG, and we will need to keep this point in mind if we extend to these areas.", "This paper attacks an important problems with an interesting and promising methodology. The authors deal with inference in models of collective behavior, specifically at how to infer the parameters of a mean field game representation of collective behavior. 
The technique the authors innovate is to specify a mean field game as a model, and then use inverse reinforcement learning to learn the reward functions of agents in the mean field game.\n\nThis work has many virtues, and could be an impactful piece. There is still minimal work at the intersection of machine learning and collective behavior, and this paper could help to stimulate the growth of that intersection. The application to collective behavior could be an interesting novel application to many in machine learning, and conversely the inference techniques that are innovated should be novel to many researchers in collective behavior.\n\nAt the same time, the scientific content of the work has critical conceptual flaws. Most fundamentally, the authors appear to implicitly center their work around highly controversial claims about the ontological status of group optimization, without the careful justification necessary to make this kind of argument. In addition to that, the authors appear to implicitly assume that utility function inference can be used for causal inference. \n\nThat is, there are two distinct mistakes the authors make in their scientific claims:\n1) The authors write as if mean field games represent population optimization (Mean field games are not about what a _group_ optimizes; they are about what _individuals_ optimize, and this individual optimization leads to certain patterns in collective behaviors)\n2) The authors write as if utility/reward function inference alone can provide causal understanding of collective or individual behavior\n\n1 - \n\nI should say that I am highly sympathetic to the claim that many types of collective behavior can be viewed as optimizing some kind of objective function. However, this claim is far from mainstream, and is in fact highly contested. For instance, many prominent pieces of work in the study of collective behavior have highlighted its irrational aspects, from the madness of crowds to herding in financial markets.\n\nSince it is so fringe to attribute causal agency to groups, let alone optimal agency, in the remainder of my review I will give the authors the benefit of the doubt and assume when they say things like \"population behavior may be optimal\", they mean \"the behavior of individuals within a population may be optimal\". If the authors do mean to say this, they should be more careful about their language use in this regard (individuals are the actors, not populations). If the authors do indeed mean to attribute causal agency to groups (as suggested in their MDP representation), they will run into all the criticisms I would have about an individual-level analysis and more. Suffice it to say, mean field games themselves don't make claims about aggregate-level optimization. A Nash equilibrium achieves a balance between individual-level reward functions. These reward functions are only interpretable at the individual level. There is no objective function the group itself in aggregate is optimizing in mean field games. For instance, even though the mean field game model of the Mexican wave produces wave solutions, the model is premised on people having individual utility functions that lead to emergent wave behavior. The model does not have the representational capacity to explain that people actually intend to create the emergent behavior of a wave (even though in this case they do). 
Furthermore, the fact that mean field games aggregate to a single-agent MDP does not imply that the group can rightfully be thought of as an agent optimizing the reward function, because there is an exact correspondence between the rewards of the individual agents in the MFG and of the aggregate agent in the MDP by construction.\n\n2 -\n\nThe authors also claim that their inference methods can help explain why people choose to talk about certain topics. As far as the extent to which utility / reward function inference can provide causal explanations of individual (or collective) behavior, the argument that is invariably brought against a claim of optimization is that almost any behavior can be explained as optimal post-hoc with enough degrees of freedom in the utility function of the behavioral model. Since optimization frameworks are so flexible, they have little explanatory power and are hard to falsify. In fact, there is literally no way that the modeling framework of the authors even affords the possibility that individual/collective behavior is not optimal. Optimality is taken as an assumption that allows the authors to infer what reward function is being optimized. \n\nThe authors state that the reward function they infer helps to interpret collective behavior because it reveals what people are optimizing. However, the reward function actually discovered is not interpretable at all. It is simply a summary of the statistical properties of changes in popularity of the topics of conversation in the Twitter data the authors study. To quote the authors' insights: \"The learned reward function reveals that a real social media population favors states characterized by a highly non-uniform distribution with negative mass gradient in decreasing order of topic popularity, as well as transitions that increase this distribution imbalance.\" The authors might as well have simply visualized the topic popularities and changes in popularities to arrive at such an insight. To take the authors' claims literally, we would say that people have an intrinsic preference for everyone to arbitrarily be talking about the same thing, regardless of the content or relevance of that topic. To draw an analogy, this is like observing that on some days everybody on the street is carrying open umbrellas and on other days not, and inferring that the people on the street have a preference for everyone having their umbrellas open together (and the model would then predict that if one person opens an umbrella on a sunny day, everybody else will too).\n\nTo the authors' credit, they do make a brief attempt to present empirical evidence for their optimization view, stating succinctly: \"The high prediction accuracy of the learned policy provides evidence that real population behavior can be understood and modeled as the result of an emergent population-level optimization with respect to a reward function.\" Needless to say, this one-sentence argument for a highly controversial scientific claim falls flat on closer inspection. Setting aside the issues of correlation versus causation, predictive accuracy does not in and of itself provide scientific plausibility. When an n-gram model produces text that is in the style of a particular writer, we do not conclude that the writer must have been composing based on the n-gram's generative mechanism. 
Predictive accuracy only provides evidence when combined in the first place with scientific plausibility through other avenues of evidence.\n\nThe authors could attempt to address these issues by making what is called an \"as-if\" argument, but it's not even clear such an argument could work here in general. \n\nWith all this in mind, it would be more instructive to show that the inference method the authors introduce could infer the correct utility functions used in standard mean field games, such as modeling traffic congestion and the Mexican wave. \n\n--\n\nAll that said, the general approach taken in the authors' work is highly promising, and there are many fruitful directions I would be excited to see this work taken --- e.g., combining endogenous and exogenous rewards or looking at more complex applications. As a technical contribution, the paper is wonderful, and I would enthusiastically support acceptance. The authors simply either need to be much more careful with the scientific claims about collective behavior they make, or limit the scope of the contribution of the paper to be modeling / inference in the area of collective behavior. Mean field games are an important class of models in collective behavior, and being able to infer their parameters is a nice step forward purely due to the importance of that class of games. Identifying where the authors' inference method could be applied to draw valid scientific conclusions about collective behavior could then be an avenue for future work. Examples of plausible scientific applications might include parameter inference in settings where mean field games are already typically applied in order to improve the fit of those models or to learn about trade-offs people make in their utility functions in those settings.\n\n--\n\nOther minor comments:\n- (Introduction) It is not clear at all how the Arab Spring, Black Lives Matter, and fake news are similar --- i.e., whether a single model could provide insight into these highly heterogeneous events --- nor is it clear what end the authors hope to achieve by modeling them --- the ethics of modeling protests in a field crowded with powerful institutional actors is worth carefully considering.\n- If I understand correctly, the fact that the authors assume a factored reward function seems limiting. Isn't the major benefit of game theory its ability to accommodate utility functions that depend on the actions of others?\n- The authors state that one of their essential insights is that \"solving the optimization problem of a single-agent MDP is equivalent to solving the inference problem of an MFG.\" This statement feels a bit too cute at the expense of clarity. The authors perform inference via inverse-RL, so it is clearer to say the authors are attempting to use statistical inference to figure out what is being optimized.\n- The relationship between MFGs and a single-agent MDP is nice and a fine observation, but not as surprising as the authors frame it. Any multiagent MDP can be naively represented as a single-agent MDP where the agent has control over the entire population, and we already know that stochastic games are closely related to MDPs. It's therefore hard to imagine that there wouldn't be some sort of correspondence. ", "\nI appreciate the authors' responses to the comments in my review. The paper is much improved with respect to both of the main issues I brought up, with a few minor exceptions that I assume the authors overlooked in their revisions (detailed below). 
\n\nI think the paper title is misleading. \"Learning optimal behavior policy\" sounds like the authors will be making a normative claim. Just to throw something out there to illustrate what I personally would find less misleading, I would say something like \"Learning Deep Mean Field Games for Prediction and Statistical Description of Large Populations\"\n\nIn the abstract the authors state \"We consider the problem of representing a large population’s behavior policy that drives the evolution of the population distribution over a discrete state space.\" \n- With the word \"drives\", the authors here are using the causal language that I complained about previously. \n\nIn this second round of review, I have realized that calling the MDP a \"single-agent\" MDP is confusing and inaccurate. It would more aptly be called a centralized policy for a multiagent MDP, since the single-agent actually controls the distribution of actions of the population of agents. Even this wording is unnecessary, though, and I think simply calling it an \"MDP\" with no qualifier would be perfectly clear. (The fact that centralized multiagent MDPs can be solved via single-agent optimization is well-known in multiagent systems.) Including the \"single-agent\" adjective still hints at the group-as-agent frame that I criticized in my original review.\n\n\"The learned reward is a step towards understanding population behavior from an optimization perspective.\" \n- This sentence is certainly true, in the sense that the methods of the authors might someday be applied to models that could help us understand human optimization patterns. However, the claim is a bit deceptive since people may be optimizing something very different from what is represented / inferred by the MFG model. I still think it is safest to interpret what the MFG is doing as creating a useful description of the statistics of population behavior. The authors could see https://www.nature.com/articles/nature11486 for an example of the scientific debate relating to the kind of description that their model offers.\n\n\"though we do not have access to a ground truth reward function\" \n- There may not be a ground truth reward function. People are not necessarily optimizing anything.\n\n\"To test the usefulness of the reward and MFG model,\" \n- Test the usefulness for what? Interpretability is useful too! Probably the authors mean \"To test the usefulness of the MFG model for prediction\"\n\nIn their \"insights\" section, the authors state: \"Under the assumption that measured behavior is optimal for the constructed MDP, the learned reward function favors states with large negative mass gradient in decreasing order of initial topic popularity, and transitions that increase this distribution imbalance.\" \n- To argue for interpretability, the authors should provide insights that are interpretable to lay people. This sentence is hard to parse and should be expanded in simple language that explains what is learned to a non-technical audience.\n\n\"The model’s high prediction accuracy supports two approximate descriptions of population behavior: an equilibrium of individuals optimizing the MFG reward; equivalently, an optimal trajectory for the constructed MDP\" \n- I don't know what this sentence means. The epistemology of model comparison for scientific inference is far from settled, barely even ever discussed by anyone as far as I know, and I would be hesitant to conclude anything about the science of collective behavior from the authors' results. 
I would say that the authors have shown that data-fitted MFGs are useful for statistical description and prediction.\n\nI may have missed some other lingering instances of my same two original complaints, so I implore the authors to do a careful read-through with these criticisms in mind. For this work to be understood and taken seriously by researchers in collective behavior, it is critical to be precise in wording about what can actually be claimed. I think the framing of fitted MFGs as useful for statistical description and prediction is accurate and would be interesting to researchers in collective behavior.\n\nI still encourage the authors to investigate validating their method via inferring reward functions of well-known MFGs from simulation traces (not necessarily for this paper, although that would be nice, but especially if the authors ever submit a longer version, e.g.).\n\nExamples of irrational crowd behavior:\n- LeBon \"Extraordinary Popular Delusions & the Madness of Crowds\"\n- Shiller \"Irrational Exuberance\"\n\n", "The paper considers the problem of representing and learning the behavior of a large population of agents, in an attempt to construct an effective predictive model of the behavior. The main concern is with large populations where it is not possible to represent each agent individually, hence the need to use a population level description. The main contribution of the paper is in relating the theories of Mean Field Games (MFG) and Reinforcement Learning (RL) within the classic context of Markov Decision Processes (MDPs). The method suggested uses inverse RL to learn both the reward function and the forward dynamics of the MFG from data, and its effectiveness is demonstrated on social media data. \nThe paper contributes along three lines, covering theory, algorithm and experiment. The theoretical contribution begins by transforming a continuous time MFG formulation to a discrete time formulation (proposition 1), and then relates the MFG to an associated MDP problem. The first contribution seems rather straightforward and appears to have been done previously, while the second is interesting, yet simple to prove. However, Theorem 2 sets the stage for an algorithm developed in section 4 of the paper that suggests an RL solution to the MFG problem. The key insight here is that solving an optimization problem on an MDP of a single agent is equivalent to solving the inference problem of the (population-level) MFG. Practically, this leads to learning a reward function from demonstrations using a maximum likelihood approach, where the reward is represented using a deep neural network, and the policy is learned through an actor-critic algorithm, based on gradient descent with respect to the policy parameters. The algorithm provides an improvement over previous approaches limited to toy problems with artificially created reward functions. Finally, the approach is demonstrated on real-world social data with the aim of recovering the reward function and predicting the future trajectory. The results compare favorably with two baselines, vector auto-regression and recurrent neural networks. \nI have found the paper to be interesting, and, although I am not an expert in MFGs, novel and well-articulated. Moreover, it appears to hold promise for modeling social media in general. I would appreciate clarification on several issues which would improve the presentability of the results. \n1)\tThe authors discuss on p. 6 variance reduction techniques. 
I would appreciate a more complete description or, at least, a more precise reference than to a complete paper. \n2)\tThe experimental results state that “Although the set of topics differ semantically each day, indexing topics in order of decreasing initial popularity suffices for identifying the topic sets across all days.” This statement is unclear to me and I would appreciate a more detailed explanation. \n3)\tThe authors make the following statement: “ … learning the MFG model required only the initial population distribution of each day in the training set, while VAR and RNN used the distributions over all hours of each day.” Please clarify the distinction between the algorithms here. In general, details are missing about how the VAR and RNN were run. \n4)\tThe approach uses expert demonstration (line 7 in Algorithm 1). It was not clear to me how this is done in the experiment.\n", "The paper proposes a novel approach to estimating the parameters \nof Mean field games (MFG). The key to the method is a reduction of the unknown parameter MFG to an unknown parameter Markov Decision Process (MDP).\n\nThis is an important class of models and I recommend the acceptance of the paper.\n\nI think that the general discussion about the collective behavior application should be more carefully presented and some better examples of applications should be easy to provide. In addition, the authors may want to enrich their literature review and give references to alternative work on unknown MDP estimation methods cf. [1], [2] below. \n\n[1] Burnetas, A. N., & Katehakis, M. N. (1997). Optimal adaptive policies for Markov decision processes. Mathematics of Operations Research, 22(1), 222-255.\n\n[2] Budhiraja, A., Liu, X., & Shwartz, A. (2012). Action time sharing policies for ergodic control of Markov chains. SIAM Journal on Control and Optimization, 50(1), 171-195.", "The following changes were made to address our reviewers' comments:\n\nSection 1: Introduction\n1. Clarified the role of social media as a common factor among the three examples of large population events.\n2. Made it clearer that the assumption of optimality is ascribed to individuals, not to a population.\n3. Described applications of MFG in more detail.\n\nSection 2: Related work\n1. Added two references to earlier work in unknown MDP estimation.\n\nSection 4: Inference of MFG via MDP optimization\n1. Fixed notation for expected start value of policy (in paragraph immediately above Eqn 14).\n2. Changed citation for use of value function as a variance reduction technique.\n\nSection 5: Experiments\n1. Improved wording to make it clear that MFG is a descriptive model with an optimality assumption. \n2. Qualified an explanation for observed performance of MFG compared to other unstructured methods.\n3. Clarified the two equivalent ways of speaking about the model, from either the MFG or the MDP perspective.\n\nSection 6: Conclusion\n1. Improved wording to convey that we work with MFG only as a predictive framework, leaving aside the ontological status of a reward that drives physical processes in population movement.\n\nAppendix D\n1. Added more explanation of the difference between how MFG and the alternative methods use training data.\n2. More description of VAR and RNN.", "Thank you for highlighting the main points of the paper in detail, and identifying that our contribution lies at the relatively underexplored intersection of RL and models of population behavior. To address the questions raised:\n\n1. 
There is a substantial amount of prior work on variance reduction in gradient-based methods for MDPs (Sutton & Barto 1998, Weaver & Tao 2001, Greensmith et al. 2004, Lawrence et al. 2003). Using the policy gradient theorem (e.g. section 13.2 in Sutton & Barto), one can see that subtracting any arbitrary state-dependent function from the action-value estimate does not introduce bias. As can be seen in eq 1 of Greensmith et al., subtracting a baseline reduces variance as long as the covariance is large. Therefore, an optimal baseline can be found to minimize variance (Weaver & Tao 2001). Sutton & Barto 1998 give an empirical demonstration in Figure 2.5 that an appropriately chosen baseline can speed up convergence. We can include a citation to section 3 of Sutton et al. 1999, as it gives an equivalent statement of using the value function as a baseline.\n\n2. Here is an example: suppose there are three topics (t1, t2, t3), and the initial count of participants at 9am of day 1 is (10, 5, 15). So we reorder the topics to be (t3, t1, t2) and relabel them as (s1, s2, s3). This is what we meant by ``indexing topics in order of decreasing initial popularity.'' On day 2, the topics may be semantically different, e.g. (t4, t5, t6), with initial participation counts (5, 15, 10), so we reorder them to be (t5, t6, t4), and again assign labels (s1, s2, s3). So now both t3 of day 1 and t5 of day 2 are relabeled (i.e. ``identified'') as s1. This is what we meant by ``identifying topic sets across all days.'' This is how we abstract away semantic content and only work with the distribution (i.e. ranking). This reordering lets us interpret our collected demonstration trajectories in a consistent manner, e.g. each trajectory is like running one episode of the constructed MDP, with different starting states (i.e. different initial pi^0) but with the same fixed topic set. Since real populations are influenced by both ranking and semantics, we acknowledge that this method limited the scope of the current work. It suggests a possible extension, e.g. augmenting our basic MFG model to account for topic semantics.\n\n3. This can be understood from line 4 of Algorithm 2 in Appendix B. For each training episode of the forward RL, we randomly pick a starting pi^0 from the collection of all measured initial distributions. During the single episode, the forward RL never uses the measured pi^1,...,pi^{N-1} because our constructed MDP provides the transition equation, which produces next states pi^{n+1} from pi^n and the action P^n produced by the policy being learned (lines 6 and 7 of Algorithm 2). In contrast, both VAR and RNN are classic examples of supervised learning, where each pi^n,...,pi^{n-m} (for some m) in the training set is used to predict pi^{n+1}. We will supplement Appendix D to describe VAR and RNN in more detail.\n\n4. Expert demonstration trajectories, sampled from the full set of measured trajectories in line 7 of Alg 1, are used to compute the loss in Equation 13. We take all the state-action pairs of the demonstration trajectories, pass them as a batch into the reward neural network, and add the resulting scalars to get the first term of Equation 13. Learning of the reward is done via gradient descent on this loss, with respect to the neural net parameters W. The process is the same for the second term, which uses trajectories generated from the policy at that iteration.", "We greatly appreciate your insightful and high quality feedback. 
We agree that the intersection of machine learning and modeling collective processes deserves more exploration. Overall, we will improve our language when describing population behavior, and interpretation the inference results more carefully. Below, we address the two critical concerns in detail. \n\n1. Interpretation of MFG as population-level or individual-level optimization\n\nIn all instances where we mention actions, decisions and optimality, we meant individuals. Some examples are ``aggregate effect of individual actions'' (abstract) and ``aggregate decisions of all individuals'' (intro). We did not intend to claim that MFG has a reward that the population itself as an agent tries to optimize. We absolutely agree that MFG models a Nash equilibrium arrived from individual choice, which is shown by equation 6 on page 4. To say that a single-agent policy is optimal for the constructed MDP reward is not the same as saying that any individual optimizes for this constructed reward. Our writing is in accord with the former, not the latter. To improve clarity, we will clearly say that individuals only optimize for the MFG reward, and that the optimal policy for the constructed MDP is only a tool for generating population trajectories, without making any claim about the ontological status of group optimization.\n\nOn a related note, could you refer us to particular works that highlight irrational aspects of collective behavior?\n\n2. Use of reward function to understand behavior\n\nIf we understood correctly, the concern about falsifiability is the following: given some optimization framework and demonstration data, there always exists a reward function for which the demonstration is optimal, which means that the hypothesis of optimality is vacuous. To take an extreme case, it is well known that inverse problems suffer from degeneracy, e.g. any behavior is optimal with respect to an all-zero reward (Ng & Russell 2000). But among all possible rewards that can be learned from data, many may not allow the forward dynamics to reproduce data similar to the observations. This is partly the reason that we evaluated predictive accuracy of the model, similar to the evaluation of IRL on task completion in robotics (Finn et al. 2016).\n\nRegarding interpretation of the reward, it is true that we could visualize the statistical distribution of population distributions and transition matrices, to see which types are favored. It is uncertain whether this is easier or harder for extracting insight from data. However, we do not fully understand why summarizing statistical properties of data has equal utility as learning a reward: e.g. in a finite-horizon gridworld with positive reward only at one terminal state, merely looking at statistics of expert trajectories reveals nothing about which state-action pair is good. We accept the advice to restrict our interpretation of reward to be within our model’s representational capacity, due to the lack of semantics in the model.\n\nWe acknowledge the warning about using predictive accuracy to justify claims about the physical world. Perhaps saying ``modeled’’, rather than ``understood’’, better conveys our intended message that the MFG only a descriptive model so far. We aimed to tackle the question ``What is a good description of population behavior’’, rather than the question ``Why does the population behave this way''.\n\nWe agree that recovering a pre-specified reward in a synthetic MFG is useful to show. 
We chose to focus exclusively on a real experiment because one of our main motivations was to ground MFG research on real observations.\n\nResponse to additional comments:\n1. We chose to motivate the population modeling problem using these events because: social media enabled a much larger population to participate virtually than would have been possible otherwise; each event involves a large population concentrated on the same general topic but differentiated into discrete subtopics; our experiment data comes from a social media population. We can add this clarification to our introduction.\n2. The use of r_{ij}(pi, P_i) rather than r_{ij}(pi, P) still couples V_i^n to the actions by individuals in other topics, because the choice of P_i partially determines the next state pi^{n+1}, which determines the actions by individuals at other topics j, which in turn affects V_i^n via the summation over j of V_j^{n+1}. This can be seen by unrolling equation (7) and using (3). The dependence is on actions taken by others at the next time step.\n3. The MaxEnt IRL procedure maximizes a log likelihood, so viewing it as either statistical inference or optimization should both be valid. Since MDP and RL are optimal control frameworks, we labeled it as optimization.\n4. We wrote with more emphasis in some places of the text because MFG may be less well-known in the community, hoping that a stronger tone may help with clarity.", "We highly appreciate your support for the merits of MFG models, especially in synthesis with the well-studied framework of MDP. We agree that our discussion of the collective behavior and interpretation of results should be presented more carefully, and we will update our wording to be more precise. For applications, we will further highlight the synthetic experiments in previous MFG research, and suggest the analogous real-world applications.\n\nThank you for directing us to alternative work in MDPs with unknown parameters.\n1. Looking at Burnetas & Katehakis (1997), we see a thematic similarity: they consider the case of an unknown transition law in finite state-action spaces, and also extend the analysis to a model where reward has distribution with unknown parameters dependent on states and actions. Likewise, we consider a reward function with unknown parameters to be learned. Although our constructed MDP has a known deterministic transition, we simulate the MDP and learn via RL to handle continuous states and action spaces. We agree that we should reference Burnetas & Katehakis' contribution to this research theme.\n2. If we understood Budhiraja, Liu & Shwartz (2012) correctly, they construct a class of action time sharing (ATS) policies that give the same long-term costs as a stationary Markov control, and which enable estimation of unknown model parameters (via deviation from optimal control) while maintaining the same cost per unit time. We agree that the problem of fulfilling a secondary objective while optimizing for a given cost (which doesn't necessarily depend on those secondary parameters) is an interesting one, and seems to be novel for RL research. We can see that the framework of simultaneously estimating unknown parameters while optimizing a known cost is related to the inverse RL framework used in our work, i.e. simultaneously learning an unknown cost and finding an optimal policy. Can we confirm that this is a correct understanding of your comment?\n" ]
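As a concrete illustration of the reward-learning step referenced in these responses (summing the learned reward over demonstration state-action pairs and over policy-generated pairs, per the discussion of Equation 13 above), here is a minimal Python sketch. It is an illustration rather than the authors' code: the linear reward, the random stand-in features, and the absence of any importance weighting or partition term are simplifying assumptions.
 # Sketch only: linear reward r(s, a) = w . phi(s, a); real features would come
 # from demonstration and policy-generated trajectories, not from a RNG.
 import numpy as np

 rng = np.random.default_rng(0)
 dim = 8                                      # dimension of the feature map phi(s, a)
 w = np.zeros(dim)                            # reward parameters to be learned

 demo_feats = rng.normal(size=(100, dim))     # stand-in for demonstration (s, a) features
 for step in range(200):
     gen_feats = rng.normal(size=(100, dim))  # stand-in for policy-generated (s, a) features
     grad = demo_feats.mean(axis=0) - gen_feats.mean(axis=0)
     w += 1e-2 * grad                         # ascend (demo reward sum - generated reward sum)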
[ -1, 8, -1, 8, 10, -1, -1, -1, -1 ]
[ -1, 4, -1, 3, 5, -1, -1, -1, -1 ]
[ "rJLBq1DVM", "iclr_2018_HktK4BeCZ", "SJJDxd8Wf", "iclr_2018_HktK4BeCZ", "iclr_2018_HktK4BeCZ", "iclr_2018_HktK4BeCZ", "S1PF1UKxG", "ByGPUUYgz", "rJBLYC--f" ]
iclr_2018_HkL7n1-0b
Wasserstein Auto-Encoders
We propose the Wasserstein Auto-Encoder (WAE)---a new algorithm for building a generative model of the data distribution. WAE minimizes a penalized form of the Wasserstein distance between the model distribution and the target distribution, which leads to a different regularizer than the one used by the Variational Auto-Encoder (VAE). This regularizer encourages the encoded training distribution to match the prior. We compare our algorithm with several other techniques and show that it is a generalization of adversarial auto-encoders (AAE). Our experiments show that WAE shares many of the properties of VAEs (stable training, encoder-decoder architecture, nice latent manifold structure) while generating samples of better quality.
accepted-oral-papers
This paper proposes a new generative model that has the stability of variational autoencoders (VAE) while producing better samples. The authors clearly compare their work to previous efforts that combine VAEs and Generative Adversarial Networks with similar goals. The authors show that the proposed algorithm is a generalization of the Adversarial Autoencoder (AAE) and minimizes the Wasserstein distance between the model and target distributions. The paper is well written with convincing results. Reviewers agree that the algorithm is novel and practical, and that its close connections to related approaches are clearly discussed with useful insights. Overall, the paper is strong and I recommend acceptance.
test
[ "Sy_QFsmHG", "HyBIaDXBM", "rJSDX-xSG", "SJQzLO_gM", "Hk2dO8ngz", "SJncf2gWz", "BkU7vv8ff", "SkxfpL8GG", "BJ2VpnZff", "BkrqpnbGG", "H1bGp3bfz", "SkGcPcZ-z" ]
[ "public", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public", "author", "author", "author", "public" ]
[ "Let me clarify the markov chain point.\n\nIn the case Q(Z|X) is stochastic, the encode/decode chain X->Z->X' is stochastic. Namely, P(X'|X) is not a deterministic function, it is a distribution. A markov chain can be constructed if we sample X from P_X and use P(X'|X) as the transition probability.\n\nBy optimizing the Wasserstein distance between P(X') and P_X, we hope to get the parameter such that P(X') == P_X. The reconstruction term in this paper requires that X' == X, which is stronger than P(X') == P_X.", "Thank you for the question.\n\nUnfortunately, we did not quite get the point of your Markov chain example. But we would like to make it clear that the paper does not assume anything specific about the encoder Q(Z|X). As long as the aggregated posterior Qz matches the prior Pz, the encoder can be either deterministic or random. The same holds true for the WAE algorithm. We will try to emphasize it better in the updated version of the paper.\n\nThe decoder is indeed a different story: for Theorem 1 we need it to be deterministic, but a very similar result holds also for the random decoder (Supplementary B).", "Thanks for the great work. It's nice to see there is theoretical support for the (auto-encoder + constraint on Z) objective.\n\nIt seems to me the expectation over X could not be moved out in theorem 1, as this breaks the independence of Z and X.\nConsider the case Q(Z|X) is not deterministic, we can have a markov chain X_{t+1} ~ \\int_Z [ P_G(X'|Z)Q(Z|X_t) ], which has a stationary distribution same as P_X. The algorithm in this paper gives a special case where Q(Z|X) is deterministic and X_{t+1} = X_{t}.\n\nIn supplementary B, the case where the decoder is random is discussed. It would be nice to also discuss the cases where Q(Z|X) is random vs deterministic.\n\nDo correct me if I'm wrong, thanks.", "This paper satisfies the following necessary conditions for\nacceptance. The writing is clear and I was able to understand the\npresented method (and its motivation) despite not being too familiar\nwith the relevant literature. Explicitly writing the auto-encoder(s)\nas pseudo-code algorithms was particular helpful. I found no technical\nerrors. The problem addressed is one worth solving - building a\ngenerative model of observed data. There is some empirical testing\nwhich show the presented method in a good light.\n\nThe authors are careful to relate the presented method with existing\nones, most notably VAE and AAE. I suppose one could argue that the\nclose connection to existing methods means that this paper is not\ninnovative enough. I think that would be unfair - most new methods\nhave close relations with existing ones - it is just that sometimes\nthe authors do not flag this up as they should.\n\nWAE is a bit oversold. The authors state that WAE generates \"samples\nof better quality\" (than VAE) without any condition being put on when\nit does this. There is no proof that it is always better, and I can't\nsee how there could be. Any method of inferring a generative model\nfrom data must make some 'inductive' assumptions. Surely one could\ndevise situations where VAE outperforms WAE. I think this issue should\nhave been examined in more depth.\n\nI found no typo or grammatical errors which is unusual - good careful\njob!\n\n", "This very well written paper covers the span between W-GAN and VAE. For a reviewer who is not an expert in the domain, it reads very well, and would have been of tutorial quality if space had allowed for more detailed explanations. 
The appendices are very useful, and are tutorial paper material (especially A). \n\nWhile I am not sure the description would be enough to reproduce the results, and no code is provided, every aspect of the architecture, if not described, is referred to as similar to some previous work. There are also some notation shortcuts (not explained) in the proof of theorems that can lead to initial confusion, but they turn out to be non-ambiguous. One that could be improved is P(P_X, P_G) where one loses the fact that the second random variable is Y.\n\n\nThis work contains plenty of novel material, which is clearly compared to previous work:\n- The main consequence of the use of Wasserstein distance is the surprisingly simple and useful Theorem 1. I could not verify its novelty, but this seems to be a great contribution.\n- Blending GAN and auto-encoders has been tried in the past, but the authors claim better theoretical foundations that lead to solutions that do not require min-max\n- The use of MMD in the context of GANs has also been tried. The authors claim that their use in the latent space makes it more practical\n\nThe experiments are very convincing, both numerically and visually.\n\nSource of confusion: in algorithm 1 and 2, \\tilde{z} is \"sampled\" from Q_TH(Z|xi), so one is led to believe that this is the sampling process as in VAEs, while in reality Q_TH(Z|xi) is deterministic in the experiments.", "This paper provides a reasonably comprehensive generalization to VAEs and Adversarial Auto-encoders through the lens of the Wasserstein metric. By posing the auto-encoder design as a dual formulation of optimal transport, the proposed work supports the use of both deterministic and random decoders under a common framework. In my opinion, this is one of the crucial contributions of this paper. While the existing properties of auto-encoders are preserved, stability characteristics of W-GANs are also observed in the proposed architecture. The results from MNIST and CelebA datasets look convincing, though could include additional evaluation to compare the adversarial loss with the straightforward MMD metric and potentially discuss their pros and cons. In some sense, given the challenges in evaluating and comparing closely related auto-encoder solutions, the authors could design demonstrative experiments for cases where the Wasserstein distance helps, and perhaps for its potential limitations.\n\nThe closest work to this paper is the adversarial variational bayes framework by Mescheder et al., which also attempts at unifying VAEs and GANs. While the authors describe the conceptual differences and advantages over that approach, it will be beneficial to actually include some comparisons in the results section.", "Dear Mathieu,\n\nthank you for the suggestion. We will update the paper accordingly.", "Congratulations on this nice paper. \n\nThe ability to remove one of the two marginal constraints in Theorem 1 relies on the assumption that P_G(Y|Z=z) is a Dirac. I know that you stated in the intro that you focus on deterministic maps but it would be nice to repeat the assumptions made in Theorem 1.", "We thank the reviewer for the positive feedback and the kind words regarding the overview part of the paper.\n\nWe will make sure to make notations clearer and include all the details of architectures used in experiments in the updated version of the paper. Of course we will also open source the code.", "We are pleased that the reviewer found the paper well written. 
\n\nWe tried to be modest in our claims, in particular we never implied that WAEs produce better samples for *all data distributions*. As noted by the reviewer, this would indeed be impossible to prove, especially because the question of how to evaluate and compare sample qualities of unsupervised generative models is still open. We will double-check that there are no bold and unsupported statements in the final version of the paper.", "We thank the reviewer for the positive feedback. \n\nComparing properties of WAE-MMD and WAE-GAN is indeed an intriguing direction and we intend to look into the details in our future research. In this paper we only report initial empirical observations, which can be summarized by saying that WAE-MMD enjoys stable training but does not match Pz and Qz perfectly, while the training of WAE-GAN is not so stable but leads to much better matches once it succeeds. \n\nIn this paper we decided that comparing to VAE was sufficient for our purposes: both VAE and AVB follow the same objective of maximizing the marginal log likelihood, in contrast to the minimization of the optimal transport studied in our work. However, we do agree that in the future it would be interesting to compute the FID scores of the AVB samples. ", "You state in the paper that the variational auto-encoder objective is composed of reconstruction cost plus a KL divergence term that captures how distinct the image by the encoder of each training example is from the prior p(z), and then go on to say that this KL term is not guaranteeing that the overall encoded distribution matches the prior p(z).\n\nHowever, as shown in the paper \"ELBO surgery: yet another way to carve up the variational evidence lower bound\" by Hoffman and Johnson, the KL term in the VAE objective can be decomposed into exactly this KL(q(z)||p(z)) between the average encoder distribution and the prior plus a mutual information term, and the former is a heavy contributor towards the overall KL term. This means that the VAE does indeed try to match the overall encoder distribution q to the prior, but also includes a regularizing term that aims to minimize the mutual information between the hidden code z and the index of the observation x, which encourages the VAE to have the encoder produce the same codes z for different observations.\n\nIn conclusion, it would be more accurate to state that in comparison to VAEs you simply exclude the mutual information regularisation term from the objective as formulated in the ELBO surgery paper." ]
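For reference, the decomposition the commenter invokes (from Hoffman & Johnson's ELBO-surgery note) can be written, glossing over their treatment of the finite data index and using the aggregate posterior, as
 \mathbb{E}_{p_{\mathrm{data}}(x)}\big[\mathrm{KL}\big(q(z|x)\,\|\,p(z)\big)\big] \;=\; I_q(x;z) \;+\; \mathrm{KL}\big(q(z)\,\|\,p(z)\big), \qquad q(z) = \mathbb{E}_{p_{\mathrm{data}}(x)}\big[q(z|x)\big],
where I_q(x;z) is the mutual information between x and z under the joint p_data(x) q(z|x). Dropping the first term and penalizing only a divergence between the aggregate posterior q(z) and the prior is, as the commenter notes, what separates a WAE-style regularizer from the averaged per-example KL of the VAE.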
[ -1, -1, -1, 8, 8, 8, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, 3, 3, 4, -1, -1, -1, -1, -1, -1 ]
[ "HyBIaDXBM", "rJSDX-xSG", "iclr_2018_HkL7n1-0b", "iclr_2018_HkL7n1-0b", "iclr_2018_HkL7n1-0b", "iclr_2018_HkL7n1-0b", "SkxfpL8GG", "iclr_2018_HkL7n1-0b", "Hk2dO8ngz", "SJQzLO_gM", "SJncf2gWz", "iclr_2018_HkL7n1-0b" ]
iclr_2018_B1QRgziT-
Spectral Normalization for Generative Adversarial Networks
One of the challenges in the study of generative adversarial networks is the instability of its training. In this paper, we propose a novel weight normalization technique called spectral normalization to stabilize the training of the discriminator. Our new normalization technique is computationally light and easy to incorporate into existing implementations. We tested the efficacy of spectral normalization on CIFAR10, STL-10, and ILSVRC2012 dataset, and we experimentally confirmed that spectrally normalized GANs (SN-GANs) is capable of generating images of better or equal quality relative to the previous training stabilization techniques.
accepted-oral-papers
This paper presents impressive results on scaling GANs to ILSVRC2012 dataset containing a large number of classes. To achieve this, the authors propose "spectral normalization" to normalize weights and stabilize training which turns out to help in overcoming mode collapse issues. The presented methodology is principled and well written. The authors did a good job in addressing reviewer's comments and added more comparative results on related approaches to demonstrate the superiority of the proposed methodology. The reviewers agree that this is a great step towards improving the training of GANs. I recommend acceptance.
train
[ "SkQdbLclM", "H1xyfspez", "HJH-EWkWM", "rkDCavsmz", "r1Ko8X_Gf", "r1onL2xXM", "SJpmh_17f", "BkOctTAGf", "HJxGRNvMz", "BJAcWZobM", "BkIYgWs-z", "rynszWibz", "S1x6bLDZM", "SJok1XB-f", "rJrC3dhlz", "Hkci0r3lM", "SyTXZU2xz", "S1g_eb9gz", "ryjuZSQlG", "Hkgbu7Qgz", "SJmRwz7xG", "HJdKxkGxf", "Hkdca0ZlG", "B1w_vpZef", "r12BeAgeM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "public", "public", "author", "author", "public", "author", "author", "author", "author", "public", "public", "public", "author", "author", "public", "public", "public", "author", "public", "public", "public", "public" ]
[ "This paper borrows the classic idea of spectral regularization, recently applied to deep learning by Yoshida and Miyato (2017) and use it to normalize GAN objectives. The ensuing GAN, coined SN-GAN, essentially ensures the Lipschitz property of the discriminator. This Lipschitz property has already been proposed by recent methods and has showed some success. However, the authors here argue that spectral normalization is more powerful; it allows for models of higher rank (more non-zero singular values) which implies a more powerful discriminator and eventually more accurate generator. This is demonstrated in comparison to weight normalization in Figure 4. The experimental results are very good and give strong support for the proposed normalization.\n\n\nWhile the main idea is not new to machine learning (or deep learning), to the best of my knowledge it has not been applied on GANs. The paper is overall well written (though check Comment 3 below), it covers the related work well and it includes an insightful discussion about the importance of high rank models. I am recommending acceptance, though I anticipate to see a more rounded evaluation of the exact mechanism under which SN improves over the state of the art. More details in the comments below.\n\nComments:\n1. One concern about this paper is that it doesn’t fully answer the reasons why this normalization works better. I found the discussion about rank to be very intuitive, however this intuition is not fully tested. Figure 4 reports layer spectra for SN and WN. The authors claim that other methods, like (Arjovsky et al. 2017) also suffer from the same rank deficiency. I would like to see the same spectra included. \n2. Continuing on the previous point: maybe there is another mechanism at play beyond just rank that give SN its apparent edge? One way to test the rank hypothesis and better explain this method is to run a couple of truncated-SN experiments. What happens if you run your SN but truncate its spectrum after every iteration in order to make it comparable to the rank of WN? Do you get comparable inception scores? Or does SN still win?\n3. Section 4 needs some careful editing for language and grammar.\n", "This paper proposes \"spectral normalization\" -- constraining the spectral norm of the weights of each layer -- as a way to stabilize GAN training by in effect bounding the Lipschitz constant of the discriminator function. The paper derives efficient approximations for the spectral norm, as well as an analysis of its gradient. Experimental results on CIFAR-10 and STL-10 show improved Inception scores and FID scores using this method compared to other baselines and other weight normalization methods.\n\nOverall, this is a well-written paper that tackles an important open problem in training GANs using a well-motivated and relatively simple approach. The experimental results seem solid and seem to support the authors' claims. I agree with the anonymous reviewer that connections (and differences) to related work should be made clearer. 
Like the anonymous commenter, I also initially thought that the proposed \"spectral normalization \" is basically the same as \"spectral norm regularization\", but given the authors' feedback on this I think the differences should be made more explicit in the paper.\n\nOverall this seems to represent a strong step forward in improving the training of GANs, and I strongly recommend this paper for publication.\n\nSmall Nits: \n\nSection 4: \"In order to evaluate the efficacy of our experiment\": I think you mean \"approach\".\n\nThere are a few colloquial English usages which made me smile, e.g. \n * Sec 4.1.1. \"As we prophesied ...\", and in the paragraph below \n * \"... is a tad slower ...\".", "The paper is motivated by the fact that in GAN training, it is beneficial to constrain the Lipschitz continuity of the discriminator. The authors observe that the product of spectral norm of gradients per each layer serves as a good approximation of the overall Lipschitz continuity of the entire discriminating network, and propose gradient based methods to optimize a \"spectrally normalized\" objective.\n\nI think the methodology presented in this paper is neat and the experimental results are encouraging. However, I do have some comments on the presentation of the paper:\n\n1. Using power method to approximate matrix largest singular value is a very old idea, and I think the authors should cite some more classical references in addition to (Yoshida and Miyato). For example,\n\nMatrix Analysis, book by Bhatia\nMatrix computation, book by Golub and Van Loan.\n\nSome recent work in theory of (noisy) power method might also be helpful and should be cited, for example,\nhttps://arxiv.org/abs/1311.2495\n\n2. I think the matrix spectral norm is not really differentiable; hence the gradients the authors calculate in the paper should really be subgradients. Please clarify this.\n\n3. It should be noted that even with the product of gradient norm, the resulting normalizer is still only an upper bound on the actual Lipschitz constant of the discriminator. Can the authors give some empirical evidence showing that this approximation is much better than previous approximations, such as L2 norms of gradient rows which appear to be much easier to optimize?", "HI, nice work!\n\nSpectral normalization seems to be useful as a regularization for GAN training. \n\nBut I'm considering that spectral normalization cannot ensure \"unshift of mean & variance\" of the output of each layer. Usually, we would like the mean = 0 and variance = 1 for a deep network to avoid vanishing/explosion of activation and gradient. \n\nIs this an issue with spectral normalization? and how should we handle it?\n\nThanks.", "Thanks for the explanation for the intuition of the spectral normailzation and gradient norm. It is clearer to me now.", "We owe great thanks to all reviewers for helpful comments toward improving our manuscripts. \nWe revised our manuscript based on the reviewer’s comments (including the ones that were visible to us by mistake because of the administrator’s technical problem) and uploaded the revision.\n\nFirstly, we conducted additional comparative study against orthonormal regularization, and showed the advantage of our algorithm over the orthonormal regularization. \n\nSecondly, we responded to the AnonReviewer2’s comment by running still another experiment with weight clipping (Arjovsky et al. 2017) and compared the results on CIFAR10 and STL10. 
\nWe confirmed that, as we have noted in Section 3, weight clipping also suffered from the rank degeneracy and its performance turned out to be much worse than our spectral normalization.\nThirdly, for the ImageNet, we re-calculated the inception scores for all methods using the original tensorflow implementation and replaced the scores on the table, because we were using Chainer instead of Tensorflow exclusively for the ImageNet results, and there were some numerical variations. The newly computed values do not affect any of our claims regarding the advantages and the superiority of our algorithm.\n", "Hi, thanks for your comment.\n\nWe will share the reproducing code after the acceptance notification.\nWe will announce the link to the code here when we make the code public.", "Please can you share the code to reproduce ILSVRC2012 imagenet results ?", "We need to remember that the paper you designated uses the classic loss function for both “generator” and discriminator updates. As we explain in the experiment section, we are using the modified generator update rule proposed by Goodfellow et al (2014), which uses the softplus function -log sigmoid(f(x)) = log (1 + exp(-f(x))) := softplus(-f(x)) in place of log (1- sigmoid(f(x))) so that one can maintain the learning process. Note that softplus(-f(x)) is approximately -f(x) when f(x) < 0 (In fact, on the bulk of the support of the generator, f(x) tends to be negative. ). Thus, the generator will be looking at the gradient of f(x) over the course of its training. As such, we need to keep our eyes on the gradient of f(x), which can blow up outside of the support of p or q (see Eq (4) ) without any gradient regularization as a countermeasure. \nSo far, this is our current postulate on the importance of the Lipschitz constant in GANs. The gist of our paper is that WGAN-GP and our spectral normalization can constrain the norm of the gradient so that this will not be a problem. \n \nAlso, we shall make it clear that our method is not designed specifically for the purpose of preventing mode-collapse. However, it is not hard to imagine that the control of the Lipschitz constant of the discriminator would prevent the training process of the generator from plateauing prematurely because of the critical gradient problem we have described above. \n", "\nThank you so much for the review!\n\n>I also initially thought that the proposed \"spectral normalization \" is basically the same as \"spectral norm regularization\", but given the authors' feedback on this I think the differences should be made more explicit in the paper.\n\nThanks for the suggestion; we will emphasize the difference between spectral norm regularization and our spectral normalization in the revised manuscript.\n\n\nAnd thanks for pointing out the colloquialism, we will relax it :-)\n\n", "\nThank you so much for the review!\n\n\n>The authors observe that the product of spectral norm of gradients per each layer serves as a good approximation of the overall Lipschitz continuity of the entire discriminating network, and propose gradient based methods to optimize a \"spectrally normalized\" objective.\n\nThank you very much for the comments; however we would like to emphasize that we are controlling the spectral norm of the operators, not their gradient. Also, unlike what we refer to as “gradient penalty method”, we are not modifying the objective function in any way. 
We are still using the same objective function as the classic GAN; we are just looking for the candidate discriminator from the normalized set of functions.\n\n\n> 1. I think the authors should cite some more classical references in addition to (Yoshida and Miyato). \n\nThanks for the remark, and yes we should have cited some of the classic references. We will add them to the revised manuscript.\n\n\n>2. I think the matrix spectral norm is not really differentiable; hence the gradients the authors calculate in the paper should really be subgradients. Please clarify this.\n\nIndeed, when the spectrum has multiplicities, we would be looking at subgradients, and technically we should have said so. However, the probability of this happening is zero (almost surely), and we assumed we could continue the discussion without giving consideration to such events. We will make note of this fact in the revised version. Thanks!\n\n\n>3. Can the authors give some empirical evidence showing that this approximation is much better than previous approximations, such as L2 norms of gradient rows which appear to be much easier to optimize?\n\nWe would like to note that the gist of our paper is not about the accuracy of the Lipschitz constant; we do not intend to claim that our spectral normalization better controls the Lipschitz constant than the gradient penalty method. \nAs we claim in Section 3, an advantage of our normalization over the gradient penalty based method (WGAN-GP) is that we can control the Lipschitz constant even outside the neighborhoods of the observed datapoints. \nFurthermore, spectral normalization can be carried out with less computational cost. Please see the discussions in the designated section for more detail.\n\n\n", "\nThank you so much for the review!\n\n>This paper borrows the classic idea of spectral regularization, recently applied to deep learning by Yoshida and Miyato (2017) and use it to normalize GAN objectives. \n\nThank you very much for the comments; however, we would like to note that the spectral normalization presented in this paper is very much different from the spectral ‘norm’ regularization introduced in Yoshida and Miyato (2017). \nAlso, unlike what we refer to as “gradient penalty method”, we are not regularizing the objective function in any way, so “normalize GAN objectives“ is an inaccurate keyword for our paper. \nWe are still using the same objective function as the classic GAN; we are just looking for the candidate discriminator from the normalized set of functions.\nWe will emphasize these points in the revised manuscript, since these confusions seem to be recurring issues. \n\n\n>1. The authors claim that other methods, like (Arjovsky et al. 2017) also suffer from the same rank deficiency. I would like to see the same spectra included. \n\nThanks for the suggestion. We plan to test with the weight clipping method (Arjovsky et al. 2017) and report the results in the revised manuscript.\n\n\n>2. Continuing on the previous point: maybe there is another mechanism at play beyond just rank that give SN its apparent edge? One way to test the rank hypothesis and better explain this method is to run a couple of truncated-SN experiments. What happens if you run your SN but truncate its spectrum after every iteration in order to make it comparable to the rank of WN? Do you get comparable inception scores? Or does SN still win?\n\nThat sounds like a good suggestion. 
Should there be ample room in computational resources and time, we might try the experiment with CIFAR 10. \n\n\n>3. Section 4 needs some careful editing for language and grammar.\nThanks, we will proofread the document once again. \n\n", "It seems to me that a method like infogan that enforces a multi-modal distribution on the latent variables and makes the generator produce samples with high mutual-information with them can potentially solve the mode collapse issue. I am not sure how practical it is to train infogan for 1000 classes though. Do you see any reason that it cannot maintain the same number of image modes (classes) as the latent variable modes?", " Dear authors,\n Your paper attracted my attention, because it has impressive results. \n May I know what is the motivation for the Lipschitz constraint for GAN? For WGAN, the Lipschitz constraint is required due to the Wasserstein divergence metric. However, for the original GAN, I am not clear why the Lipschitz constraint is so helpful. According to my knowledge, one source of mode collapse is that the generated probability at some data points is very small (almost zero) and D(x) is locally constant (equal to 0) around those data points. Then the gradient of D(x) at those data points is zero, which prevents the update of the generator. A detailed description can be found in \"Towards Principled Methods for Training Generative Adversarial Networks\". In this case, even if D(x) satisfies the Lipschitz norm constraint, it still cannot avoid mode collapse. Would the authors please clarify how spectral normalization helps avoid mode collapse?\n\n", "Because people don't usually release negative results, it's hard to know whether people have tried WGAN-GP on high-res ImageNet and it didn't work or whether no one has seriously tried it.\n\nI tried WGAN with weight clipping, not GP, on 128x128 ImageNet, by modifying the openai/improved-gan implementation of Minibatch (NS)GAN for the same dataset. WGAN with weight clipping didn't work very well for me on that task.\n\nThis recent work suggests that WGAN-GP would probably perform comparably to NS-GAN: https://arxiv.org/abs/1711.10337 ", "1)\nIndeed, u and v are both functions of W, and we technically have to backprop through these vectors as well. However, in our implementation, we ignored the dependency of u and v on W for the sake of computational efficiency, and we were still able to maintain the Lipschitz constraint.\nIn fact, to be on the safe side, we ran experiments with backprop on u and v as a separate experiment. We were not able to observe any notable improvement. \n\n2)\nTo make a long story short, sigma(W) and sigma(B) may or may not differ depending on the padding and stride size. We briefly discuss this matter in the second footnote on page 5. Let us elaborate on this a little further. For the sake of argument, let us assume that the input image is infinite dimensional in both directions. If the stride size is 1, the value on each output pixel will be computed from the outputs of exactly the same number (say, m) of filter blocks. The same holds also when the stride size divides the dimension of the filter block. In such cases, sigma(W) and sigma(B) will be off by a factor of root m, and the dominant vectors will be exactly the same. \nWhen the stride size does not divide the dimension of the filter block, however, there will be some output pixels that are computed from the outputs of more filter blocks than others. 
In such cases, the relationship between sigma(W) and sigma(B) appears complex; at least so complex that we decided not to elaborate further on our paper. \nFor our experiment, we made sure that the stride size divides the dimension of the filter block so that, even after taking the padding size into consideration, the dominant direction will not be too much off from what we mathematically intended. \n", "\n>I don't see where the square comes in. If you flatten $W$, it should be of shape $d_{out}d_{in}hw$, right? I am also interested to hear more about the semantics of the spectral norm of this object (flattened filterbank), which Ian asked about below.\n\nYes, it's a typo. We meant to write 2-D, not square. \nAs for the spectral norm of convolutional operator, please take a look at our response to Ian’s comment. \n\n>Relatedly, I think there is a typo in the caption of Table 6:\n\"we replaced the usual batch normalization layer in the ResBlock of the with the conditional batch normalization layer\"\n\nWe are sorry for the confusion, and you are correct about our typo in the caption of Table 6. We meant to write \n“we replaced the usual batch normalization layer in the ResBlock of the '''generator''' with the conditional batch normalization layer\". \nWe introduced the conditional batch normalization layer to the generators of ALL the GANs. \n", "Have there been any solid attempts at training Imagenet with WGAP-GP? This was not done in Gulrajani et al 2017, nor have I seen many recent attempts in the literature, even in the conditional setting. In my own experience I've found that the gradient norm regularization (https://arxiv.org/abs/1705.09367) seems to train reasonably well (no mode collapse, good diversity, some semblance of realistic samples, good quality as far as the usual features) with ResNets that usually fail with GANs, though I have to admit I'm in the middle of such an experiment and the samples still look quite strange at 20 epochs (like Dali paintings, this is taking weeks to train on one GPU).", "Hi, thanks for the paper, impressive results!\n\nI am confused about how you describe flattening the convolutional filterbank for computing the spectral norm. You wrote\n\"Also, for the evaluation of the spectral norm for the convolutional weight $W \\in R^{d_{out} \\times d_{in} \\times h \\times w}$, we treated the operator as a square matrix of dimension $d_{out} \\times (d_{in}hw)^2$.\"\nI don't see where the square comes in. If you flatten $W$, it should be of shape $d_{out}d_{in}hw$, right? I am also interested to hear more about the semantics of the spectral norm of this object (flattened filterbank), which Ian asked about below.\n\nAlso, a separate question - for your imagenet experiments, did all of the GAN variants you report (no normalization, layer normalization, spectral normalization) have conditional batch norm? Or just the SN-GAN? Relatedly, I think there is a typo in the caption of Table 6:\n\"we replaced the usual batch normalization layer in the ResBlock of the with the conditional batch normalization layer\"\n\nThanks for any responses.", "Thanks! Yes, I see from Appendix C1 that you're finding a good approximation of the spectral norm. I was asking these questions so I can re-implement it successfully myself, not because I doubt you're finding the spectral norm.\n\nTwo follow up questions:\n1)\nIf the spectral norm\nsigma(W) = u^T W v\nthen to estimate the derivatives of sigma(W) with respect to W, don't you need to backprop through u and v too? 
u and v are both functions of W.\nOr is the estimate still useful somehow when u and v are constant?\n(Either way, you successfully maintain the spectral norm constraint, but learning would be faster if you get the gradient of sigma(W) correct because this means your gradient of the cost function will be tangent to the constraint region and prevent you from wasting time moving in forbidden directions)\n\n2)\nFor convolution, you have a kernel K that can be reshaped to a matrix W but convolving the input actually uses a different matrix B. B is a big doubly block circulant matrix where the number of rows is equal to the number of pixels in the input image.\n\nsigma(B) is max_{image, subject to l2_norm(image)=1} l2_norm(conv(image, K)).\n\nsigma(W) is max_{x, subject to l2_norm(x)=1} l2_norm(Wx).\n\nDo you know if sigma(B) and sigma(W) are approximately the same as each other?\nI haven't thought it through and don't actually know the answer.\n\nTo bound the Lipschitz constant of the neural net, you want to constrain sigma(B), but in your experiments you constrained sigma(W).\n\nI'm guessing that maybe sigma(B) and sigma(W) are related by a constant factor, so you probably are still constraining the Lipschitz constant of the whole net, but maybe constraining it to a different value than you thought.\n\n", "Thanks for the comments and remarks!\nLet me try to resolve the concerns one by one. \n\n>I'm not sure which time step I'm meant to take u and v from when computing the spectral norm. Here I chose to use the *new* value of both u and v, so that get u^T W v for free when I compute the normalizing constant for the new value of u.\n\nI am not sure if I am understanding the question clearly, but at each forward propagation, we prepare new u and v from the same stored u. \nBy the way, we would like to note that we didn't propagate gradients through new_u and new_v.\nIf we write our code in Tensorflow, our implementation looks like:\n new_v = tf.nn.l2_normalize(tf.matmul(self.u, W), 1)\n new_u = tf.nn.l2_normalize(tf.matmul(new_v, tf.transpose(W)), 1)\n new_u = tf.stop_gradient(new_u)\n new_v = tf.stop_gradient(new_v)\n spectral_norm = tf.reduce_sum(new_u * tf.transpose(tf.matmul(W, tf.transpose(new_v))), 1)\n power_method_update = tf.assign(self.u, new_u)\n\n>For convolution, I *think* I'm meant to use convolution and convolution transpose on a 4-D tensor, based on the comment in the paper about the sparse matrix, but I wasn't totally sure if I should do this or reshape the kernels into a matrix and use matrix-vector products.\n\nIn our implementation, we reshaped the 4D convolutional kernel into a 2-D matrix for the computation of the spectral norm. So, to be honest, our “spectral norm” does not include the parameters like padding and stride size. We did away with these parameters just for the ease of computation. So far, however, this way is yielding satisfactory results.\n\nYour implementation is mathematically more faithful to our theoretical statement in that it is approximating the honest-to-goodness operator norm of the convolutional operator that includes these parameters. We cannot say for 100% sure, but we speculate that your way of computation should work just fine.\n\n>I'm not 100% sure when I'm meant to run the `power_method_update` op. Should I just run this once per gradient step or do I need to run it several times to get u close to optimal before I start running SGD?\n\nIn our experiment, we applied the power method update operation only one time per gradient step. 
It turned out that one power iteration was enough. \nTo check how good we are doing with one application of the power method, we used SVD to compute the spectral norm of the convolution kernel normalized with our method (AppendixC.1) Note that our method is doing just fine with one power method. \n", "I want to check that I understand how to implement the convolutional version of your spectral norm approximation correctly.\n\nI make u be a convolutional input tensor that contains only one example:\n\nsingle_input_shape = [1, rows, cols, input_channels]\nself.single_input_shape = single_input_shape\ninit_u = tf.random_normal(single_input_shape, dtype=tf.float32)\ninit_u = init_u / tf.sqrt(1e-7 + tf.reduce_sum(tf.square(init_u)))\nself.u = tf.Variable(init_u, trainable=False)\n\n\nThen on every iteration of SGD I do these updates:\n\n new_v = self.conv(self.u, self.kernels)\n new_v = new_v / tf.sqrt(1e-7 + tf.reduce_sum(tf.square(new_v)))\n\n new_u = self.conv_t(new_v, self.kernels)\n # u^T W v = (W v / l2_norm(Wv))^T Wv = l2_norm(Wv) = l2_norm(new_u)\n spectral_norm = tf.sqrt(1e-7 + tf.reduce_sum(tf.square(new_u)))\n new_u = new_u / spectral_norm\n\n power_method_update = tf.assign(self.u, new_u)\n\n\nI ask because there are a few subtle things:\n- I'm not sure which time step I'm meant to take u and v from when computing the spectral norm. Here I chose to use the *new* value of both u and v, so that get u^T W v for free when I compute the normalizing constant for the new value of u.\n- For convolution, I *think* I'm meant to use convolution and convolution transpose on a 4-D tensor, based on the comment in the paper about the sparse matrix, but I wasn't totally sure if I should do this or reshape the kernels into a matrix and use matrix-vector products.\n- I'm not 100% sure when I'm meant to run the `power_method_update` op. Should I just run this once per gradient step or do I need to run it several times to get u close to optimal before I start running SGD?\n\nThanks, and sorry if this is in the paper and I've missed it.", "The main *symptom* is mode collapse.\n\nWhen I was working on this paper ( https://arxiv.org/abs/1606.03498 ) I was able to get GANs to draw dogs occasionally. Sometimes I could get them to draw a different class but I never got them to draw more than one class at a time.\n\nAugustus did some good experiments while working on AC-GAN, where he uses MS-SSIM to measure mode collapse. He found that increasing the number of classes causes collapse. https://arxiv.org/abs/1610.09585\n\nOf course, these are *symptoms*. We don't know a lot about the *cause*. There has been a lot of work over the past few years suggesting that the cause is optimizing the wrong loss (f-GAN, WGAN, both kinds of LS-GAN, etc. have proposed new losses). There has also been a lot of work suggesting the cause is that the learning algorithm fails to equilibrate the game or does so extremely inefficiently ( https://arxiv.org/abs/1412.6515 https://arxiv.org/abs/1701.00160 https://arxiv.org/abs/1706.04156 https://arxiv.org/abs/1706.08500 etc). Finally, at the MILA summer school this year, I said that I think the model family could bias the learning algorithm toward mode collapse. The success of SN-GAN in this submission seems to be evidence in favor of the 2nd or 3rd hypothesis about the cause.", "This may be a naive question, but can someone explain to me why scaling to ILSVRC2012 dataset is more than a computation problem? 
Is it because of instability, so that few realistic images are generated or training progresses very slowly? Or is it because of mode collapse, so that a less diverse set of realistic images is generated? Or something else?", "This is a great paper! I don't think this paper explains the importance of its results nearly enough and I'm concerned that it may not be obvious what a breakthrough it is just from skimming the abstract.\n\n\"We tested the efficacy of spectral normalization on CIFAR10, STL-10, and ILSVRC2012 dataset, and we experimentally confirmed that spectrally normalized GANs (SN-GANs) is capable of generating images of better or equal quality relative to the previous training stabilization techniques\" is a major understatement. This paper represents an extraordinary advance on the ILSVRC2012 dataset.\n\nBefore this paper, there was only one GAN that worked very well at all on ILSVRC2012: AC-GAN. AC-GAN was sort of cheating because it divided ImageNet into 100 smaller datasets that each contained only 10 classes. The new SN-GAN is the first GAN to ever fit all 1000 ImageNet classes in one GAN.\n\nScaling GANs to a large number of classes has been a major open challenge and this paper has achieved an amazing 10X leap forward." ]
[ 7, 8, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 3, 2, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_B1QRgziT-", "iclr_2018_B1QRgziT-", "iclr_2018_B1QRgziT-", "iclr_2018_B1QRgziT-", "HJxGRNvMz", "iclr_2018_B1QRgziT-", "BkOctTAGf", "iclr_2018_B1QRgziT-", "SJok1XB-f", "H1xyfspez", "HJH-EWkWM", "SkQdbLclM", "rJrC3dhlz", "iclr_2018_B1QRgziT-", "S1g_eb9gz", "Hkgbu7Qgz", "ryjuZSQlG", "Hkdca0ZlG", "iclr_2018_B1QRgziT-", "SJmRwz7xG", "HJdKxkGxf", "iclr_2018_B1QRgziT-", "B1w_vpZef", "r12BeAgeM", "iclr_2018_B1QRgziT-" ]
iclr_2018_BJOFETxR-
Learning to Represent Programs with Graphs
Learning tasks on source code (i.e., formal languages) have been considered recently, but most work has tried to transfer natural language methods and does not capitalize on the unique opportunities offered by code's known syntax. For example, long-range dependencies induced by using the same variable or function in distant locations are often not considered. We propose to use graphs to represent both the syntactic and semantic structure of code and use graph-based deep learning methods to learn to reason over program structures. In this work, we present how to construct graphs from source code and how to scale Gated Graph Neural Networks training to such large graphs. We evaluate our method on two tasks: VarNaming, in which a network attempts to predict the name of a variable given its usage, and VarMisuse, in which the network learns to reason about selecting the correct variable that should be used at a given program location. Our comparison to methods that use less structured program representations shows the advantages of modeling known structure, and suggests that our models learn to infer meaningful names and to solve the VarMisuse task in many cases. Additionally, our testing showed that VarMisuse identifies a number of bugs in mature open-source projects.
accepted-oral-papers
There was some debate between the authors and an anonymous commentator on this paper. The feeling of the commentator was that existing work (mostly from the PL community) was not appropriately compared against and, in fact, performs better than this approach. The authors point out that their evaluation is hard to compare directly but that they disagree with the assessment. They modified their text to accommodate some of the commentator's concerns; agreed to disagree on others; and promised a fuller comparison to other work in the future. I largely agree with the authors here and think this is a good and worthwhile paper for its approach. PROS: 1. well written 2. good ablation study 3. good evaluation including real bugs identified in real software projects 4. practical for real world usage CONS: 1. perhaps not well compared to existing PL literature or on existing datasets from that community 2. the architecture (GGNN) is not a novel contribution
train
[ "ryuuTE9gG", "rkhdDBalz", "H1oEvnkWM", "SyXFuWT7M", "Hy2ZoJhbf", "B1LIjy2Zz", "H1UNdg8-G", "S1SRPhSbM", "B1ioD3SZG", "SkRKw2SWz", "BJNSv3SWf", "Hy-fzXEZM", "Hy3kAkmZM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "public", "author", "author", "author", "author", "author", "public" ]
[ "Summary: The paper applies graph convolutions with deep neural networks to the problem of \"variable misuse\" (putting the wrong variable name in a program statement) in graphs created deterministically from source code. Graph structure is determined by program abstract syntax tree (AST) and next-token edges, as well as variable/function name identity, assignment and other deterministic semantic relations. Initial node embedding comes from both type and tokenized name information. Gated Graph Neural Networks (GGNNs, trained by maximum likelihood objective) are then run for 8 iterations at test time.\n\nThe evaluation is extensive and mostly very good. Substantial data set of 29m lines of code. Reasonable baselines. Nice ablation studies. I would have liked to see separate precision and recall rather than accuracy. The current 82.1% accuracy is nice to see, but if 18% of my program variables were erroneously flagged as errors, the tool would be useless. I'd like to know if you can tune the threshold to get a precision/recall tradeoff that has very few false warnings, but still catches some errors.\n\nNice work creating an implementation of fast GGNNs with large diverse graphs. Glad to see that the code will be released. Great to see that the method is fast---it seems fast enough to use in practice in a real IDE.\n\nThe model (GGNN) is not particularly novel, but I'm not much bothered by that. I'm very happy to see good application papers at ICLR. I agree with your pair of sentences in the conclusion: \"Although source code is well understood and studied within other disciplines such as programming language research, it is a relatively new domain for deep learning. It presents novel opportunities compared to textual or perceptual data, as its (local) semantics are well-defined and rich additional information can be extracted using well-known, efficient program analyses.\" I'd like to see work in this area encouraged. So I recommend acceptance. If it had better (e.g. ROC curve) evaluation and some modeling novelty, I would rate it higher still.\n\nSmall notes:\nThe paper uses the term \"data flow structure\" without defining it.\nYour data set consisted of C# code. Perhaps future work will see if the results are much different in other languages.\n", "The paper introduces an application of Graph Neural Networks (Li's Gated Graph Neural Nets, GGNNs, specifically) for reasoning about programs and programming. The core idea is to represent a program as a graph that a GGNN can take as input, and train the GGNN to make token-level predictions that depend on the semantic context. The two experimental tasks were: 1) identifying variable (mis)use, ie. identifying bugs in programs where the wrong variable is used, and 2) predicting a variable's name by consider its semantic context.\n\nThe paper is generally well written, easy to read and understand, and the results are compelling. The proposed GGNN approach outperforms (bi-)LSTMs on both tasks. Because the tasks are not widely explored in the literature, it could be difficult to know how crucial exploiting graphically structured information is, so the authors performed several ablation studies to analyze this out. Those results show that as structural information is removed, the GGNN's performance diminishes, as expected. 
As a demonstration of the usefulness of their approach, the authors ran their model on an unnamed open-source project and claimed to find several bugs, at least one of which potentially reduced memory performance.\n\nOverall the work is important, original, well-executed, and should open new directions for deep learning in program analysis. I recommend it be accepted.", "This paper presents a novel application of machine learning using Graph NN's on ASTs to identify incorrect variable usage and predict variable names in context. It is evaluated on a corpus of 29M SLOC, which is a substantial strength of the paper.\n\nThe paper is to be commended for the following aspects:\n1) Detailed description of GGNNs and their comparison to LSTMs\n2) The inclusion of ablation studies to strengthen the analysis of the proposed technique\n3) Validation on real-world software data\n4) The performance of the technique is reasonable enough to actually be used.\n\nIn reviewing the paper the following questions come to mind:\n1) Is the false positive rate too high to be practical? How should this be tuned so developers would want to use the tool?\n2) How does the approach generalize to other languages? (Presumably well, but something to consider for future work.)\n\nDespite these questions, though, this paper is a nice addition to deep learning applications on software data and I believe it should be accepted.\n\n", "We did not finish the work on an improved data generation procedure yet and thus cannot provide updated experimental results before the end of the official rebuttal phase. We will continue this work and provide updated results here as soon as feasible.", "This response is split into two posts to work around an OpenReview limitation.\n\n| prior works achieve significantly higher accuracy\n\nAs noted before, prior works on variable renaming achieve higher accuracy on a different (but related) task and dataset. The paper you mention as [1] performs renaming for everything but local variables (whereas we only consider local variables) on Android applications and is specialized for the setting where there is a uniform API on a single set of libraries (core Java, Android). The authors of that paper do not evaluate this on a diverse corpus of Java code, or claim that its results are generally applicable.\n\nThe results of your reference [2] are more relevant (as they also consider local variables), but consider JavaScript and also rename other identifiers. While their renaming accuracy is reported as 63.4%, they also note that not renaming already yields an accuracy of 25.3% (i.e., (63.4 – 25.3) = 38.1% would be a rough estimate for a result in a more comparable setting, where no original names are available at all). Finally, we note that a recent study of JavaScript code on GitHub shows massive file duplication across JavaScript repositories [8] (“only 6% of files are distinct”), and thus the per-project de-duplication method used to obtain the dataset for [2] is likely to have led to a noticeable overlap between training and test data. Note that [8] also considers Java (which is known to have characteristics similar to C#), where 60% of files are distinct, and thus we speculate that our C# dataset would show less file duplication.\n\nOverall, we see no indication to conclude decisively that our method or [1,2] perform better on the renaming task, and thus suggest that the two should simply be compared on the exact same task and dataset in the future. 
We will make sure to clarify the paper to not claim that our approach is more accurate than [1,2] or successor works. However, as we included VarRenaming in our submission mainly to show that our method for program representation learning is applicable to more than one task, we do not feel that such a comparative study is in scope for this submission.\n\n| a harder task (predicting all variables jointly)\n\nThis difference between predicting one variable vs. many is orthogonal to the problem of learning program representations, the concern of our work. Most machine learning models that locally predict a score for a single element given some context, including the one in this paper, can be reused within structured prediction models. You can think of any such model as a factor in a CRF/Markov network that performs structured prediction over multiple elements (e.g. variable names). \n\nThank you for pointing out “Bugs as Deviant Behaviors”, which seems to counter your original argument that the “formulation purely based on a probabilistic model is problematic”. Unlike that work and its successor works, our method does not use hard-coded rule templates and statistical methods on their occurrence count, but instead aims to automatically learn _general_ code patterns by looking at the raw representation of code, similar to the way deep learning models have done for computer vision. We never claim that the general idea of using data mining methods to detect bugs is something that we are the first to think about.\n\nReferences:\n[7] Andrew Rice, Edward Aftandilian, Ciera Jaspan, Emily Johnston, Michael Pradel, and Yulissa Arroyo-Paredes. 2017. Detecting argument selection defects. OOPSLA’17.\n[8] Cristina V. Lopes, Petr Maj, Pedro Martins, Vaibhav Saini, Di Yang, Jakub Zitny, Hitesh Sajnani, and Jan Vitek. 2017. DéjàVu: a map of code duplicates on GitHub. OOPSLA’17.", "This response is split over two posts to work around an OpenReview limitation.\n\n| a) Given the many prior works in this space, stating your work is “first to use\n| source code semantics” which “allows us to solve tasks that are beyond the\n| current state of the art” is incorrect and misleading.\n\nWe fully agree with you that our work is not the first to use source code semantics. Because of that, this “quote” does not appear in our submission.\n\nRegarding the second part of your comment, we note that we are not aware of any other machine learning model that attempts to solve a task comparable to the VarMisuse task, nor do you mention any existing publications tackling the VarMisuse task. Of course, there are static methods that can detect some specific kinds of variable misuses (e.g. [7]), but we are not aware of an available system that handles the general case. Our core contribution and novelty lie in automatically learning rich semantic/structural patterns of variable usage to detect variable misuses. As we are not aware of any other work on this topic, we believe that this task is beyond the current abilities of the state-of-the-art in machine learning. In our submission, we have compared to two reasonable baselines (i.e., based on neural code models with possibly some limited access to data flow) to show that current related methods are not handling this task well.\n\n| For VarMisuse, a trivial baseline is to use any existing probabilistic model over code\n\nOur experiments are exactly that: Adaptations of state-of-the-art deep learning models of code. 
For example, our Loc baseline represents the state-of-the-art neural language models, but also uses the succeeding tokens (which are not available in normal generative models), and adapts the language model task by only asking it to rank in-scope and type-correct identifier candidates. The AvgBiRNN baseline advances this by also providing the model access to other relevant information about these identifier candidates", "Thanks for the response. The concerns on incorrect claims and novelty still remain though:\n\na) Given the many prior works in this space, stating your work is “first to use source code semantics” which “allows us to solve tasks that are beyond the current state of the art” is incorrect and misleading. \n\nb) The work also does not show that it can solve tasks beyond current state-of-the-art. It is also incorrect to claim that existing models “miss out on the opportunity to use program semantics”. \n\nc) Selecting a new dataset is hardly a reason to claim a state-of-the-art model, especially since prior works achieve significantly higher accuracy and both the code and dataset are public). For VarNaming, it is unclear why your proposed model would achieve better results than state-of-the-art evaluated both on JavaScript and Java programs that considers a harder task (predicting all variables jointly) and ensures that the renaming is semantically preserving.\n\nd) For VarMisuse, a trivial baseline is to use any existing probabilistic model over code (of which several have been developed and published) to find unlikely variable names (or any other program elements). Without including such results it is not possible to assess if the proposed method makes sense for this task. Do note that additionally there is work in both static and dynamic program analysis and testing on statistical anomaly detection (e.g., starting from “Bugs as Deviant Behaviors”, 2001 and more). The authors seem to be unaware of this body of work.\n\nI believe the paper has to remove the incorrect claims or substantiate them. As to the proposed model itself, to be technically accurate, the work can claim extending GGNNs with semantic information evaluated on two tasks. Whether such model makes sense for these tasks is however not justified experimentally. \n", "Thank you for reviewing our work so kindly. Please note that the evaluation in our submission only covers 2.9M SLOC (not 29M), even though since the submission we have performed additional experiments with similar results on the Roslyn project (~2M SLOC).\n\nWe have just updated our submission to also include ROC and PR curves for our main model in the appendix, which show that for a false positive rate of 10%, our model achieves a true positive rate of 73% on the SeenTestProj dataset and 69% on UnseenTestProj. The PR curve indicates that setting a high certainty threshold for such highlighting should yield relatively few false positives. We are working on further improving these numbers by addressing common causes of mistakes (i.e., our model often proposes to use a class field “_field” when the ground truth is the corresponding getter property “Field”; a simple alias analysis can take care of these case).\n", "Thank you for your kind review. We have updated our paper to discuss bugs found by the model in more detail and have privately reported more bugs found in Roslyn to the developers (cf. 
https://github.com/dotnet/roslyn/pull/23437, and note that this GitHub issue does not de-anonymize the paper authors).", "Thank you for reviewing our work so kindly. Please note that the evaluation in our submission only covers 2.9M SLOC (not 29M), even though we have performed additional experiments with similar results on the Roslyn project (~2M SLOC).\n\nRegarding your first question: We have just updated our submission to also include ROC and PR curves for our main model in the appendix, which show that for a false positive rate of 10%, our model achieves a true positive rate of 73% on the SeenTestProj dataset and 69% on UnseenTestProj. We expect our system to be most useful in a code review setting, where locations in which the model disagrees with the ground truth are highlighted for a reviewer. The PR curve indicates that setting a high certainty threshold for such highlighting should yield relatively few false positives.\n\nRegarding your second question: We have not tested our model on other languages so far. However, we expect similar performance on other strongly typed languages such as Java. An interesting research question will be to explore how the model could be adapted to gradually typed (e.g. TypeScript) or untyped (e.g. JavaScript or Python) languages.\n", "We have updated our submission to address some of the comments raised here and to include updated results obtained after the deadline:\n- We have included a reference to Bichsel et al. (CCS 2016) and improved \n the wording in the related work section to better describe their approach.\n- Initial node representations are now computed by a linear layer taking the\n concatenation of node label embeddings and type representation as input. \n In our experiments, we found this to help with generalization performance.\n- We noticed and resolved an issue with the local model (LOC) implementation\n on the VarMisuse task and updated Tab. 1 to reflect the results. They are now\n substantially better, but the model still performs much worse than all other\n models. \n- We have updated experimental results for our main GGNN model on the\n VarMisuse task to reflect the small model changes and better tuning of\n hyperparameters, mainly improving the generalization of GGNNs to unseen\n projects (jumping from 68.6% accuracy to 77.9%). We have not updated the\n results for all models in the ablation experiments (Table 2) but will do so in a\n second update to the submission.\n- We have included figures with the ROC and PR curves for our experiments in\n the appendix. The key number, requested by the reviewers, is that for the\n widely accepted false positive rate of 10% our model achieves a true positive\n rate of 73%.\n- We have updated the paper to briefly discuss 3 more bugs found in Roslyn, \n one with the potential to crash Visual Studio (cf.\n https://github.com/dotnet/roslyn/pull/23437, and note that this GitHub issue\n does not de-anonymize the paper authors).\n\nWe are currently working on an extension of the graph representation of programs that takes conditional dependencies into account (i.e., “variable x is guarded by x != null”) and a refined experimental setup for the VarMisuse task that makes use of an aliasing analysis to further filter the set of candidate variables for each slot. We will rerun all experiments once these changes are finished.\n", "Thank you for reading our work and for your comments. 
First, let us point out that some of the papers you note as missing are discussed in our submission (namely, [2,3,6] in your notation, which we felt to be the most influential contributions in the field). Due to the page size limit, we had to make hard decisions which related work to highlight. We understand that your opinion here differs, and we will try to take it into account when preparing future versions. We refer readers interested in the overall field to the https://ml4code.github.io effort, whose focus is a literature review.\n\nIn regard to the individual points you raised:\n\n1, 5) We do not compare directly to [1] because our VarRename task focuses on names of local variables in general C# applications, and thus our toolchain is not able to infer names for classes and packages in Android applications at this time. However, let us note that even rough comparisons across different datasets for this task are practically impossible: In internal tests with the variable naming task, we found the accuracy of the same model to vary between ~15% and ~65% on datasets extracted from different projects. Finally, we consider the naming of local variables, whereas the 79.1% accuracy you refer to considers fields, methods, classes and packages (but no local variables). We have reason to believe that the task on the Android App dataset is on the “easier” end of the spectrum, as it comes for a single domain with highly specific APIs, idioms and domain-specific vocabulary, whereas our dataset comes from a highly diverse set of projects including everything from algorithmic trading code to code injection libraries.\n\nOverall, we believe that your “4x worse accuracy” claim is invalid, as it relates results on different tasks on different datasets. However, we will adapt our submission to refer to [1] to note its results on a related task.\n\n2) We believe that our model offers substantial novelty in the integration of semantic program information and deep learning methods. We are obviously not the first to leverage semantics in program analysis, and we will clarify the sentence in the introduction to refer to “existing deep learning models”. However, note that your comparison to [6] is indeed highlighting the argument at the core of our contribution: Requiring a model to learn well-defined and known relationships should be avoided. Instead, we propose a model structure that allows us to easily insert additional semantic information (by extending the number of relationships in the graph) while still leveraging deep learning methods that can find patterns in information that is hard to deterministically interpret (such as names, ordering, …)\n\n3, 4) We agree that using program semantics in machine learning models is not an entirely new insight, as we briefly discuss in Sect. 2 in relation to Raychev et al (2015) ([2] in your notation). You refer to dataflow information in [1,2], but [2] does not describe the relationship between elements in any detail (and “flow”, “read” and “write” only appear in different contexts in the paper), and while [1] indeed discusses read-before/wrote-before relationships, they are established between fields, and thus do not establish how data flows, but just an order on identifiers. 
While you may be correct in saying that [1,2] indeed leverage more dataflow information to obtain their impressive results, this is not discussed in the papers.\n\nAgain, we feel our contribution is a method that allows us to apply deep learning on a combination of token-level structure, syntax tree structure, data and control flow information, full type lattice data and formal parameter resolution results. \n\n6) We agree that the VarMisuse task does not induce a notion of “buggy code”, but only of “(very) unusual code”. As most practical code has no formal specification, we feel that the notion of unusual semantics, i.e. places where the code’s semantics differs from conventional semantics, is the only practical one. Thus, we see our method as inferring a prior that can guide human code reviewers (or other program analyses, as you propose). Of course, no non-trivial method for detecting bugs can achieve 100% accuracy; standard programming language methods aim for soundness (i.e. 100% precision, but low recall). Our method is no different; for a given fpr our model will achieve some tpr. As in all program analysis tools the developer will have to filter out imprecisions of the analysis, as it is commonly accepted in the software engineering literature. Finally, and most importantly: the fact that our method has detected real-life bugs, including 3 more bugs in a widely released compiler framework since we submitted this paper, attests to the empirical validity of our method, the practical impact it already has, and the potential impact of those models.\n", "While the overall direction is promising, there are several serious issues with this paper which affect the novelty and validity of its results:\n\n1) It achieves significantly worse results than state-of-the-art without comparing to it.\n\nFor the variable naming task, the state-of-the-art approach achieves 79.1% for Obfuscated Android applications [1] (source code available online at http://nice2predict.org/). In comparison, this work achieves accuracy 19.3% which is 4x lower. These results are however not even mentioned in the paper. Worse, prior work considers a more difficult task in which all program identifiers are initially unknown. In contrast, the task considered here renames each variable separately, while knowing the correct names of all other variables.\n\nIncorrect claims made in the paper:\n\n2) [Introduction] Existing models of source code capture its shallow, textual structure while this work is the first to use source code semantics by incorporating data-flow and type hierarchy information.\n\nThis is not true. The whole point of many recent works is exactly to not learn over shallow syntactic representations but to leverage semantic information. For example, [1][2] introduce semantic relations between program elements (including data-flow, e.g., initialized-by, read-before, wrote-before, and others), [3,8] use semantic analysis to extract sequences of method calls on a given object, [4,5] use both structural dependencies extracted from AST and data dependencies computed via semantic analysis, graph based approaches such as [7], etc. [6] even tries to learn such semantic dependencies automatically instead of providing it by hand as part of the model. \n\n3) [Related Work] We are not aware of any model that does use data flow information.\n\nSee above -- many works in this domain use data flow information. 
\n\n4) [Introduction] Our key insight is that exposing these semantics explicitly as structured input to a machine learning model lessens the requirements on amounts of training data, model capacity and training regime.\n\nThis is not a new insight and has been done before across various applications in modeling source code. For example, in [3] the authors quantify the information provided by alias analysis enables the model to be learned with 10x less data while achieving the same accuracy (for API completion task). Similarly, in [8].\n\n5) [Introduction] Exposing these semantics explicitly as structured input allows us to solve tasks that are beyond the current state of the art.\n\nThis is again not true. Quite the opposite, the model presented here has 4x worse accuracy than state-of-the-art for variable renaming. \n\nComments:\n6) Variable misuse task problem formulation is problematic.\n\nThe formulation purely based on a probabilistic model is problematic — a probabilistic model cannot differentiate between code that is wrong from code that is rare or simply less likely (which is what the proposed model does). As a result, even if a probabilistic model is trained on a corpus that has no bugs, it will not achieve 100% prediction accuracy. For any large codebase such model will inherently report a large amount of benign warnings even if the model is 99% accurate. This is the reason why existing bug finding tools use additional forms of specification extracted either from language semantics (e.g., null pointer checks, out of bounds checks, type safety) or provided by a user (pre/post conditions and invariants). It would be interesting to see if having a prior over possible bug locations (which is what the probabilistic model computes) can help these existing techniques to work more efficiently, but this is not discussed in the paper.\n\nReferences:\n[1] Statistical Deobfuscation of Android Applications. Bichsel et.al., ACM CCS'16\n[2] Predicting Program Properties from \"Big Code\". Raychev et.al., ACM POPL'15 \n[3] Code Completion with Statistical Language Models. Raychev. et. al., ACM PLDI'14\n[4] A statistical semantic language model for source code. Nguyen et al. ACM ESEC/FSE'13\n[5] Using web corpus statistics for program analysis. Hsiao et. al. ACM OOPSLA'14\n[6] Program Synthesis for Character Level Language Modeling, Bielik et. al., ICLR'17\n[7] Graph-Based Statistical Language Model for Code. Ngyuen et. al. ICSE'15\n[8] Estimating Types in Binaries using Predictive Modeling, Kata et. al., ACM POPL'16" ]
[ 8, 8, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_BJOFETxR-", "iclr_2018_BJOFETxR-", "iclr_2018_BJOFETxR-", "BJNSv3SWf", "H1UNdg8-G", "H1UNdg8-G", "Hy-fzXEZM", "ryuuTE9gG", "rkhdDBalz", "H1oEvnkWM", "iclr_2018_BJOFETxR-", "Hy3kAkmZM", "iclr_2018_BJOFETxR-" ]
iclr_2018_B1gJ1L2aW
Characterizing Adversarial Subspaces Using Local Intrinsic Dimensionality
Deep Neural Networks (DNNs) have recently been shown to be vulnerable against adversarial examples, which are carefully crafted instances that can mislead DNNs to make errors during prediction. To better understand such attacks, a characterization is needed of the properties of regions (the so-called `adversarial subspaces') in which adversarial examples lie. We tackle this challenge by characterizing the dimensional properties of adversarial regions, via the use of Local Intrinsic Dimensionality (LID). LID assesses the space-filling capability of the region surrounding a reference example, based on the distance distribution of the example to its neighbors. We first provide explanations about how adversarial perturbation can affect the LID characteristic of adversarial regions, and then show empirically that LID characteristics can facilitate the distinction of adversarial examples generated using state-of-the-art attacks. As a proof-of-concept, we show that a potential application of LID is to distinguish adversarial examples, and the preliminary results show that it can outperform several state-of-the-art detection measures by large margins for five attack strategies considered in this paper across three benchmark datasets. Our analysis of the LID characteristic for adversarial regions not only motivates new directions of effective adversarial defense, but also opens up more challenges for developing new attacks to better understand the vulnerabilities of DNNs.
accepted-oral-papers
The paper characterizes the latent space of adversarial examples and introduces the concept of local intrinsic dimensionality (LID). LID can be used to detect adversaries as well as to build better attacks, as it characterizes the space in which DNNs might be vulnerable. The experiments strongly support this claim.
val
[ "rkARQJwez", "H1wVDrtgM", "S1tVnWqxM", "rJLdp2jfz", "ryeCOusGf", "rka1XhcGf", "HyZMDhcfG", "ByIbHZKGG", "Hk9lgX_Mz", "SkjsvgdMG", "ByhCuVUff", "r1BGcmzzG", "HJFaKQGfG", "rJ1LK7zfM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "public", "author", "author", "public", "public", "author", "public", "author", "author", "author" ]
[ "The paper considers a problem of adversarial examples applied to the deep neural networks. The authors conjecture that the intrinsic dimensionality of the local neighbourhood of adversarial examples significantly differs from the one of normal (or noisy) examples. More precisely, the adversarial examples are expected to have intrinsic dimensionality much higher than the normal points (see Section 4). Based on this observation they propose to use the intrinsic dimensionality as a way to separate adversarial examples from the normal (and noisy) ones during the test time. In other words, the paper proposes a particular approach for the adversarial defence.\n\nIt turns out that there is a well-studied concept in the literature capturing the desired intrinsic dimensionality: it is called the local intrinsic dimensionality (LID, Definition 1) . Moreover, there is a known empirical estimator of LID, based on the k-nearest neighbours. The authors propose to use this estimator in computing the intrinsic dimensionalities for the test time examples. For every test-time example X the resulting Algorithm 1 computes LID estimates of X activations computed for all intermediate layer of DNN. These values are finally used as features in classifying adversarial examples from normal and noisy ones. \n\nThe authors empirically evaluate the proposed technique across multiple state-of-the art adversarial attacks, 3 datasets (MNIST, CIFAR10, and SVHN) and compare their novel adversarial detection technique to 2 other ones recently reported in the literature. The experiments support the conjecture mentioned above and show that the proposed technique *significantly* improves the detection accuracy compared to 2 other methods across all attacks and datasets (see Table 1).\n\nInterestingly, the authors also test whether adversarial attacks can bypass LID-based detection methods by incorporating LID in their design. Preliminary results show that even in this case the proposed method manages to detect adversarial examples most of the time. In other words, the proposed technique is rather stable and can not be easily exploited.\n\nI really enjoyed reading this paper. All the statements are very clear, the structure is transparent and easy to follow. The writing is excellent. I found only one typo (page 8, \"We also NOTE that...\"), otherwise I don't actually have any comments on the text.\n\nUnfortunately, I am not an expert in the particular field of adversarial examples, and can not properly assess the conceptual novelty of the proposed method. However, it seems that it is indeed novel and given rather convincing empirical justifications, I would recommend to accept the paper. \n", "This paper tried to analyze the subspaces of the adversarial examples neighborhood. More specifically, the authors used Local Intrinsic Dimensionality to analyze the intrinsic dimensional property of the subspaces. The characteristics and theoretical analysis of the proposed method are discussed and explained. This paper helps others to better understand the vulnerabilities of DNNs.", "The authors clearly describe the problem being addressed in the manuscript and motivate their solution very clearly. The proposed solution seems very intuitive and the empirical evaluations demonstrates its utility. My main concern is the underlying assumption (if I understand correctly) that the adversarial attack technique that the detector has to handle needs to be available at the training time of the detector. 
Especially since the empirical evaluations are designed in such a way that the training and test data for the detector are perturbed with the same attack technique. However, this does not invalidate the contributions of this manuscript.\n\nSpecific comments/questions:\n- (Minor) Page 3, Eq 1: I think the expansion dimension cares more about the probability mass in the volume rather than the volume itself even in the Euclidean setting.\n- Section 4: The different pieces of the problem (estimation, intuition for adversarial subspaces, efficiency) are very well described.\n- Alg 1, L3: Is this where the normal examples are converted to adversarial examples using some attack technique? \n- Alg 1, L12: Is LID_norm computed using a leave-one-out estimate? Otherwise, r_1(.) for each point is 0, leading to a somewhat \"under-estimate\" of the true LID of the normal points in the training set. I understand that it is not an issue in the test set.\n- Section 4 and Alg 1: So we do not really care about the \"labels/targets\" of the examples. All examples in the dataset are considered \"normal\" to start with. Is this assuming that the \"initial training set\" which is used to obtain the \"pre-trained DNN\" is free of adversarial examples?\n- Section 5, Experimental Setup: Seems like normal points in the test set would get smaller values if we are not doing the \"leave-one-out\" version of the estimation.\n- Section 5: The authors have done a great job at evaluating every aspect of the proposed method.\n", "Thanks for your question. \nOn the CIFAR-10 dataset against the Opt L2 attack, our LID-based detector achieved: AUC: 98.94%, Accuracy: 95.49%, Precision: 92.98%, Recall: 93.54% (AUC score was reported in Table 1). ", "Thanks for your responses. They do shed some light on the results. More questions out of curiosity: What is the recall rate? That is, what fraction of test/train examples are detected as adversarial when you learn from the mini-batch and generalize? I suspect that from the plots you've shown, most test examples are identified as non-adversarial but it looks like there are some adversarials that are very close to the tests (or even lower LID than some of the tests). If you try to get these removed, maybe you end up removing some tests. I'm wondering what the test-set rejection rate would be corresponding to the numbers in the table....", "Good question. In Section \"Robustness to Adaptive Attack\", we have shown that simply attacking the LID score (towards decreasing the LID scores of adversarial examples) is not effective. Attacking the logistic regression model itself can be expected to have similar results if we directly integrate the detection model into the adversarial objective, as we did for the LID score. In this paper, we show that adversarial examples tend to transition from a low dimensional submanifold to a more \"complex\" submanifold. A more interesting question is to what extent an attack strategy relies on such a transition to find a valid solution. Our proposed LID is not a perfect solution for adversarial detection; after all, it is not 100% accurate in the detection of adversarial examples. In the future, we will investigate more forms of adapted Opt attacks against our LID detector and develop a more in-depth understanding of the detected/escaped adversarial examples. We would very much like to see Nicholas's response to this.", "Agree. 
We do find some interesting results and will address them in the next version.", "Do you have insights on what might happen if the adversary actually has full knowledge of the logistic regression model trained to distinguish based on the LID score (or he/she could train one himself/herself, too)?", "I agree with the comment that it looks like there is something interesting going on here (it's not immediately clear why training on FGSM will make it do better on optimization methods). However: it does look like the authors properly evaluate the defense given equation (5).\n\nPerforming the adaptive attack is what is missing from most prior work (and, indeed, even from many of the papers submitted here). I would much rather see a paper with a proper evaluation and some followup questions on the results than one that omits the evaluation entirely and therefore has no questions. I hope the authors are not penalized for this.", "Thank you for these comments. We have uploaded a new version to address them.\n\n-- Understanding of Table 2:\nWe would like to clarify that the detectors used in Table 2 to detect other attacks, in the previous version, were trained with 20% more data than those used in Table 1 --- that is, the detectors in Table 2 were trained on the training set (80%) plus the test set (20%), whereas those in Table 1 were trained only on the training set (80%). In the latest version of the paper, we have fixed this inconsistency and provided updated results for Table 2, using exactly the same amount of training data (80%) as was used for Table 1. Moreover, we have also provided additional explanations about the exceptionally poor performance of BU measure transferring from FGM to BIM-b.\n \nMeanwhile, we would like to point out that the Opt (or CW) attack we used in this paper is its general L2 version, different to many of its variants designed to attack specific defenses. Code on GitHub by the authors of Opt: https://github.com/carlini/nn_robust_attacks/blob/master/l2_attack.py\nThe reason is that we are more interested in the understanding of the shared properties across different types of attack strategies including Opt and also many others, so that to motivate defense against adversarial attacks in general (not limited to Opt attack). You are very welcome to check the consistency of the code --- we appreciate your interest in this.\n \nThe following responses are related to two papers: 1) the original defense paper of KD and BU (Feinman et al. (2017)), and 2) the latest attack paper (Carlini&Wagner (2017a)).\n \n-- KD works on CIFAR-10?\nYes, our result in Table 1 indicates this. This contradicts the statement in Paper 2 Sect. 5.2, where it says that KD cannot work on CIFAR, as 80% of the time the adversarial example has a higher likelihood score than the original image. However, our result is consistent with the result in the original Feinman paper (Table 2, Paper 1). We found not only that Opt plus KD can detect Opt, but also that the simpler attack FGM plus KD can detect Opt (Table 2). The deeper reasons behind this require further exploration.\n \n--‘Opt’ seems to fail against the KD detector?\nYes. But it is the general Opt L2 attack that failed the KD detector, not the KD-adapted version of Opt. In Paper 2, the KD detector has been attacked by a variant of Opt specifically adapted to target KD measure. 
But we did not use this version of Opt, as we are more interested in the general Opt attack rather than a specially-adapted version of it.", "Table 2: What happens in the case of BIM-b? \n\nAlso, it seems very strange that for many cases in Table 2, training with FGM seems to give better detection rates that training with the actual attack used? (In 7/15 cases).\n\nAlso, the 'opt' approach seems to fail on KD as well, while it has been shown to break KD easily earlier in literature (as mentioned in intro). Why does 'opt' fail on KD in the current setting? Was something changed with KD?\n\nSection 5.2 of Carlini & Wagner 2017a indicates an approach to make these attacks successful in the MNIST setting. And, they suggest that KD completely breaks down in the CIFAR-10 setting whilst you report a 91% accuracy. I suspect there might be some issue with implementing the CW attack, since the numbers are in complete contrast to that reported in 2017a, to be sure I would just check with the implementation at https://github.com/carlini/nn_breaking_detection/blob/master/density_estimation.py.", "We are glad that you like our work and would like to thank you for the summary. The typo has been fixed in the updated version.", "We appreciate your candor about this research topic. At a high level, although deep neural networks have demonstrated superior performance for many tasks, certain properties which can affect their behavior (such as subspaces, manifold properties) are still not well understood. A better understanding of these properties can motivate more robust/efficient/effective deep learning models, which can in turn lead to further improving their performance. Adversarial vulnerability is one such property that jeopardizes the reliability of deep neural network learning models, as very small changes on inputs can sometimes lead to completely incorrect predictions (such changed inputs are called adversarial inputs). In this paper, we investigate the expansion dimensional property of the subspaces surrounding such adversarial inputs and show that it can be used as an effective characteristic for detecting such inputs. We hope our work can provide some new insights into adversarial subspaces and their detection.", "Thank you very much for these comments. We address them in detail below.\nQ1: The adversarial attack technique needs to be available for training.\nA1: Thank you for highlighting this. The ability to detect unseen adversarial attacks is an interesting issue. We have conducted some additional experiments to evaluate the generalizability of our LID based detector, see “Generalizability Analysis”, Section 5.3. The result illustrates that our LID-based detector generalizes well to detect previously unseen adversarial attacks. \n \nQ2: (Minor) Page 3, Eq 1: The expansion dimension cares more about the probability mass.\nA2: Yes, we agree. The suggested explanation has been added to Paragraph 1, Section 3.\n \nQ3: Alg 1, L3: is this where the adversarial attacks are applied?\nA3: Yes. We have updated the description of the algorithm to clarify this (see Paragraph 2 in \"Using LID to Characterize Adversarial Examples\", Section 4).\n \nQ4: Alg 1, L12 & Section 5, Experimental Setup: leave-one-out estimate?\nA4: Yes, the query point x is \"left out\". We have added extra explanations of how Eq (4) (as used in L12-14, Alg 1) works in the last paragraph of Section 3.\n \nQ5: Section 4 and Alg 1: assuming training data is free of adversarial examples?\nA5: Yes. 
This is a reasonable assumption, and one that has been made in previous work. We have highlighted this in the 2nd paragraph of \"Using LID to Characterize Adversarial Examples\", Section 4." ]
[ 8, 6, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 3, 1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_B1gJ1L2aW", "iclr_2018_B1gJ1L2aW", "iclr_2018_B1gJ1L2aW", "ryeCOusGf", "rka1XhcGf", "ByIbHZKGG", "Hk9lgX_Mz", "Hk9lgX_Mz", "SkjsvgdMG", "ByhCuVUff", "rJ1LK7zfM", "rkARQJwez", "H1wVDrtgM", "S1tVnWqxM" ]
iclr_2018_HkwZSG-CZ
Breaking the Softmax Bottleneck: A High-Rank RNN Language Model
We formulate language modeling as a matrix factorization problem, and show that the expressiveness of Softmax-based models (including the majority of neural language models) is limited by a Softmax bottleneck. Given that natural language is highly context-dependent, this further implies that in practice Softmax with distributed word embeddings does not have enough capacity to model natural language. We propose a simple and effective method to address this issue, and improve the state-of-the-art perplexities on Penn Treebank and WikiText-2 to 47.69 and 40.68 respectively. The proposed method also excels on the large-scale 1B Word dataset, outperforming the baseline by over 5.6 points in perplexity.
accepted-oral-papers
Viewing language modeling as a matrix factorization problem, the authors argue that the low rank of the word embeddings used by such models limits their expressivity, and they show that replacing the softmax in such models with a mixture of softmaxes provides an effective way of overcoming this bottleneck. This is an interesting and well-executed paper that provides a potentially important insight. It would be good to at least mention prior work related to the language-modeling-as-matrix-factorization perspective (e.g., Levy & Goldberg, 2014).
train
[ "By7UbmtHM", "B1ETY-KBM", "SyTCJyqeM", "r1zYOdPgz", "B18hETI4f", "B1v_izpxM", "Hk9G7RsmM", "S1pkkLc7G", "BySeO9CGz", "HJ985q0fz", "SktCu50MM", "rkkKFc0GG", "Hk_Hu9Rfz", "HkkWuBrZG", "HkXiuNE-f", "Hk1nEGUxM", "rJFe7vDCW", "B1VaeCHeM", "BJt665wkf", "SyRk1cPkM", "B1D4D3wRb", "S10GwzSCb" ]
[ "author", "public", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "public", "public", "author", "author", "author", "author", "author", "public", "public", "author", "author", "public", "author", "public", "public", "public" ]
[ "Thanks for pointing out this related piece we’ve missed. Salute!\n\nWe would like to clarify that using a mixture structure is by no means a new idea, as we have noted in Related Work. Instead, the insight on model expressiveness, the integration with modern architectures and optimization algorithms, the SOTA performance, and the consistency between theory and practice, are our foci.\n\nMoreover, there’s an essential difference technically. In the reference, P(w | h) = sum_m P(w|m) P(m|h), while in MoS, P(w | h) = sum_m P(w|m,h) P(m|h). In other words, each mixture component is independent of the history in the reference model, while MoS explicitly models such dependency, making each component much more powerful.\n", "See the paper from JHU 1995 at:\nhttps://www.researchgate.net/profile/Mitchel_Weintraub/publication/246170642_Fast_Training_and_Portability/links/57505b7008aefe968db72bef/Fast-Training-and-Portability.pdf\nSee section 3, pp. 6-8. This describes the factoring of a LM into a mixture of tied multinomials. The mixture weight computation is slightly different, but the factoring of the overall LM distribution into a set of tied distributions was presented at this workshop.\n", "The authors has addressed my concerns, so I raised my rating. \n\nThe paper is grounded on a solid theoretical motivation and the analysis is sound and quite interesting.\n\nThere are no results on large corpora such as 1 billion tokens benchmark corpus, or at least medium level corpus with 50 million tokens. The corpora the authors choose are quite small, the variance of the estimates are high, and similar conclusions might not be valid on a large corpus. \n\n[1] provides the results of character level language models on Enwik8 dataset, which shows regularization doesn't have much effect and needs less tuning. Results on this data might be more convincing.\n\nThe results of MOS is very good, but the computation complexity is much higher than other baselines. In the experiments, the embedding dimension of MOS is slightly smaller, but the number of mixture is 15. This will make it less usable, I think it's necessary to provide the training time comparison.\n\nFinally experiments on machine translation or speech recognition should be done and to see what improvements the proposed method could bring for BLEU or WER. \n\n[1] Melis, Gábor, Chris Dyer, and Phil Blunsom. \"On the state of the art of evaluation in neural language models.\" arXiv preprint arXiv:1707.05589 (2017).\n\n[2] Joris Pelemans, Noam Shazeer, Ciprian Chelba, Sparse Non-negative Matrix Language Modeling, Transactions of the Association for Computational Linguistics, vol. 4 (2016), pp. 329-342\n\n[3] Shazeer et al. (2017). Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer. ICLR 2017\n", "The authors argue in this paper that due to the limited rank of the context-to-vocabulary logit matrix in the currently used version of the softmax output layer, it is not able to capture the full complexity of language. As a result, they propose to use a mixture of softmax output layers instead where the mixing probabilities are context-dependent, which allows to obtain a full rank logit matrix in complexity linear in the number of mixture components (here 15). 
This leads to improvements in the word-level perplexities of the PTB and wikitext2 data sets, and Switchboard BLEU scores.\n\nThe question of the expressiveness of the softmax layer, as well as its suitability for word-level prediction, is indeed an important one which has received too little attention. This makes a lot of the questions asked in this paper extremely relevant to the field. However, it is unclear that the rank of the logit matrix is the right quantity to consider. For example, it is easy to describe a rank D NxM matrix where up to 2^D lines have max values at different indices. Further, the first two \"observations\" in Section 2.2 would be more accurately described as \"intuitions\" of the authors. As they write themselves \"there is no evidence showing that semantic meanings are fully linearly correlated.\" Why then try to link \"meanings\" to basis vectors for the rows of A?\n\nTo be clear, the proposed model is undoubtedly more expressive than a regular softmax, and although it does come at a substantial computational cost (a back-of-the envelope calculation tells us that computing 15 components of 280d MoS takes the same number of operations as one with dimension 1084 = sqrt (280*280*15)), it apparently manages not to drastically increase overfitting, which is significant.\n\nUnfortunately, this is only tested on relatively small data sets, up to 2M tokens and a vocabulary of size 30K for language modeling. They do constitute a good starting place to test a model, but given the importance of regularization on those specific tasks, it is difficult to predict how the MoS would behave if more training data were available, and if one could e.g. simply try a 1084 dimension embedding for the softmax without having to worry about overfitting.\n\nAnother important missing experiment would consist in varying the number of mixture components (this could very well be done on WikiText2). This could help validate the hypothesis: how does the estimated rank vary with the number of components? How about the performance and pairwise KL divergence? \n\nThis paper offers a promising direction for language modeling research, but would require more justification, or at least a more developed experimental section.\n\nPros:\n- Important starting question\n- Thought-provoking approach\n- Experimental gains on small data sets\n\nCons:\n- The link between the intuition and reality of the gains is not obvious\n- Experiments limited to small data sets, some obvious questions remain", "The authors have added some important experiments whose results support their claim, and I do believe that the current version of the paper makes a stronger case. I am still not satisfied that the rank explanation fully captures what is going on there, although it certainly correlates with better results, but this paper will provide an important data point for future research and I am raising my score from 5 to 7.", "Language models are important components to many NLP tasks. The current state-of-the-art language models are based on recurrent neural networks which compute the probability of a word given all previous words using a softmax function over a linear function of the RNN's hidden state. This paper argues the softmax is not expressive enough and proposes to use a more flexible mixture of softmaxes. The use of a mixture of softmaxes is motivated from a theoretical point of view by translating language modeling into matrix factorization.\n\nPros:\n--The paper is very well written and easy to follow. 
The ideas build up on each other in an intuitive way.\n--The idea behind the paper is novel: translating language modeling into a matrix factorization problem is new as far as I know.\n--The maths is very rigorous.\n--The experiment section is thorough.\n\nCons:\n--To claim SOTA all models need to be given the same capacity (same number of parameters). In Table 2 the baselines have a lower capacity. This is an unfair comparison\n--I suspect the proposed approach is slower than the baselines. There is no mention of computational cost. Reporting that would help interpret the numbers. \n\nThe SOTA claim might not hold if baselines are given the same capacity. But regardless of this, the paper has very strong contributions and deserves acceptance at ICLR.", "Thank you for the update that improved an already very good paper.\n\nInformal Rating: 8\nConfidence: 4\n", "I would strongly recommend this paper be accepted for publication. This paper uncovers a fundamental issue with large vocabularies and goes beyond just analyzing the issue by proposing a helpful method of addressing this. Whilst I was already excited by the initial version of this paper, the follow up work that has been done by the authors is even more informative. Understanding and considering the rank bottlenecks of our models seems an important consideration for future models.\nIf I can answer any follow up questions in support of this paper I would be happy to.\n\nInformal Rating: 8\nConfidence: 5\n(work directly in this field and have recreated many aspects of the results since publication)", "To provide faithful answers to the reviews and comments, we have conducted a series of additional experiments and updated the paper accordingly. The core changes are summarized as follows:\n (1) we add a “large-scale language modeling experiment” using the 1B Word Dataset (section 3.1)\n (2) we give three more pieces of “evidence supporting our theory” that the rank is the key bottleneck of Softmax and MoS improves the performance by breaking the rank bottleneck (section 3.2):\n - Empirically, before the rank saturates to the full rank, using more mixture components in MoS continues to increase the rank of the log-probability matrix. Further, the higher the rank is, the lower the perplexity that can be achieved. \n - MoS has a similar generalization gap compared to Softmax, which rules out the concern that the improvement actually comes from some unexpected regularization effects of MoS.\n - In character-level language modeling, since the largest possible rank is upper bounded by the limited vocabulary size, Softmax does not suffer from the rank bottleneck. In this case, Softmax and MoS have almost the same performance, which matches our analysis.\n (3) we perform “training time analysis” for MoS and provide empirical training time comparison (section 3.3 & Appendix C.3) \n", "Thank you for the valuable feedback.\n\n[[Rank and meanings]] It could be possible that A is low-rank for a natural language as it is hard to rule out this possibility rigorously, but we hypothesize that A is high-rank. Our hypothesis is supported by our intuitive reasoning and empirical experiments. Empirically, we give three more pieces of evidences supporting our hypothesis that the rank is the key bottleneck of Softmax and MoS improves the performance by solving the rank bottleneck (section 3.2):\n - Before the rank saturates to the full rank, using more mixture components in MoS continues to increase the rank of the log-probability matrix. 
Further, when the rank increases, the perplexity also decreases. \n - MoS has a similar generalization gap compared to Softmax, which rules out the concern that the improvement actually comes from some unexpected regularization effects of MoS.\n - In character-level language modeling, since the largest possible rank is upper bounded by the limited vocabulary size, Softmax does not suffer from the rank bottleneck. In this case, Softmax and MoS have almost the same performance, which matches our analysis.\nWe agree that linking semantic meanings to bases lacks rigor and this would be better described as intuitions. We have made corresponding changes in the paper.\n\n[[Computation vs. Capacity]]: It is true that MoS involves a larger amount of computation compared to the standard Softmax. However, [Collins et al] suggests the capacity of neural language models is mostly related to the number of parameters, rather than computation. Moreover, powerful models often require a larger amount of computation. For example, the attention based seq2seq model involves much more computation compared to the vanilla seq2seq.\n\n[[Large-scale experiment]]: We have added a “large-scale language modeling experiment” using the 1B Word Dataset (section 3.1), where MoS significantly outperforms the baseline model with a large margin. This indicates that MoS consistently outperforms Softmax, regardless of the scale of the dataset. Also, note that PTB and WT2 are two de-facto benchmarks widely used in previous work on language modeling. None of the following papers had experiments on datasets larger than WT2: Zoph & Le ICLR 2017, Zilly et al ICML 2017, Inan et al ICLR 2017, Grave et al ICLR 2017, Merity et al ICLR 2017.\n\n[[Varying the number of mixtures]]: Thanks for the suggestion. We performed this experiment, whose result is summarized in the second bullet point of section 3.2 (updated version). As expected, the number of mixture components is positively correlated with the empirical rank. More importantly, before the rank saturates to the full rank, MoS with a higher rank leads to a better performance (lower perplexity). \n\n------------------------------------------------------------------------------------------------------------------------\n[Collins et al] Capacity and Trainability in Recurrent Neural Networks\n", "Thank you for the valuable comments.\n\n[[Claim of SOTA]]: We believe the 2M difference in the number of parameters is negligible compared to the number of parameters we use (i.e., 35M). In fact, we ran MoS in another setting with 31M parameters and got 63.59 on WT2 without finetuning, compared to 63.33 obtained by our best-performing model.\n\n[[Training time]]: Thanks for the suggestion and we have added the training time analysis for MoS and provided empirical numbers in the updated versions of the paper (section 3.3 & Appendix C.3). In general, computational wall time of MoS is actually sub-linear w.r.t. the number of mixture components. In most settings, we observe a two to three times slowdown compared to Softmax when using up to 15 components for MoS. We believe such additional computational cost is acceptable for the following reasons:\n - MoS is highly parallelizable, meaning that using more machines can always speed up the computation almost linearly.\n - The field of deep learning systems (both hardware and software) is making rapid progress. It might be possible to further optimize MoS on GPUs for fast computation. 
More developed hardware systems would also further reduce the computational cost.\n - Historically, important techniques sometimes come with an additional computational cost, e.g., LSTM, attention, deep ResNets. We believe that with MoS, the extra cost is reasonable and the gain is substantial.\n", "Thanks for your valuable comments.\n\n[[Large-scale experiment]]: We’ve added a “large-scale language modeling experiment” using the 1B Word Dataset (section 3.1), where MoS significantly outperforms the baseline model by a large margin. This indicates that MoS consistently outperforms Softmax, regardless of the scale of the dataset. Also, note that PTB and WT2 are two de-facto benchmarks widely used in previous work on language modeling. None of the following papers had experiments on datasets larger than WT2: Zoph & Le ICLR 2017, Zilly et al ICML 2017, Inan et al ICLR 2017, Grave et al ICLR 2017, Merity et al ICLR 2017.\n\n[[Character-level LM]]: Firstly, note that the largest possible rank of the log-probability matrix is upper bounded by the vocabulary size. In character-level LM, the vocabulary size is usually much smaller than the embedding size. In this case, Softmax does not suffer from the rank bottleneck problem, and we expect MoS and Softmax to achieve similar performance in practice. To verify our expectation, we perform character-level LM experiment on the text8 dataset, where MoS and Softmax indeed achieve almost the same performance (section 3.2 & appendix C.2 in the updated version). \n\n[[Training time]]: We have added the training time analysis for MoS and provided empirical numbers in the updated versions of the paper (section 3.3 & Appendix C.3). In general, computational wall time of MoS is actually sub-linear w.r.t. the number of mixture components. In most settings, we observe a two to three times slowdown compared to Softmax when using up to 15 components for MoS. We believe such additional computational cost is acceptable for the following reasons:\n - MoS is highly parallelizable, meaning that using more machines can always speed up the computation almost linearly.\n - The field of deep learning systems (both hardware and software) is making rapid progress. It might be possible to further optimize MoS on GPUs for fast computation. More developed hardware systems would also further reduce the computational cost.\n - Historically, important techniques sometimes come with an additional computational cost, e.g., LSTM, attention, deep ResNets. We believe that with MoS, the extra cost is reasonable and the gain is substantial.\n\n[[Application to MT/ASR]]: We believe this is best left to future research, as performing rigorous experiments and careful comparison for such a real-world applications is non-trivial. And we believe language modeling is of its own importance already.\n", "Thank you for your valuable feedback. We believe that, on large datasets where explicit regularization techniques like dropout are not crucial, the improvement on training perplexity does give information about whether MoS has an unintended regularization effect that can improve the performance.\n\nThus, in our updated version, we conduct an experiment on the 1B Word dataset, where no dropout or other regularization technique is used. As described in the third bullet point of section 3.2, in this regularization free setting, MoS and Softmax have the same generalization gap (i.e., the gap between training and test error), and performance improvement is fully reflected on the training perplexity. 
Hence, the superiority of MoS is not caused by some unexpected regularization but improved expressiveness.\n\nWe also provide additional evidence in section 3.2 (updated version) to support our theory that achieving a higher rank is the key to the excellence of MoS. ", "It's true that regularization doesn't play as important a role with large datasets as it does with small datasets, but enwik8 is character based and it's unclear whether this paper's arguments would apply. Sticking to word level corpora, I'd much sooner recommend Wikitext-103 than the Billion Word corpus which has issues.\n\nFurthermore, language modelling improvements are interesting in their own right without having to validate them via MT or speech recognition.", "How much does training perplexity improve with MoS compared to the baseline model?\n\nIf it does improve substantially, that gives more credence to the rank argument.\nIf it doesn't, then we might be observing unexpected regularization effects.", "We haven’t tried larger batch sizes since it does not fit into the memory. We did try training the baseline models with batch size 20 (compared to 40 in the original paper) on Penn Treebank, and the performance degrades a little bit (from 58.95 to 59.10), which indicates that using small batch sizes does not improve the baseline model. The AWD-LSTM paper also confirmed that “relatively large batch sizes (e.g., 40-80) performed better than smaller sizes (e.g., 10-20) for NT-ASGD”. Thus it is likely that MoS will perform even better with larger batch sizes. \n\nOn Penn Treebank, we also tried using lr=20 while keeping batch_size=40 for Softmax, since lr=20 is used for MoS. It turned out this significantly worsened the perplexity of Softmax from 58.95 to 61.39. Combined with the previous analysis, it suggests the performance improvement of MoS does not come from a better choice of learning rate and/or batch size. \n\nMoreover, in our preliminary experiments, we compared MoS with Softmax using the PyTorch demo code (https://github.com/pytorch/examples/tree/master/word_language_model). MoS (nhid=1059, dropout=0.55, n_softmax=20) obtains a perplexity of 68.04, compared to 72.30 obtained by Softmax (nhid=1500, dropout=0.65). In this experiment, the batch sizes and learning rates are the same and we are again seeing clear gains. Note that we reduced nhid to obtain comparable model sizes, and reduced dropout since the hidden unit size is smaller in our case.\n", "Thank you for the comments. We will include the related papers in our later version. For now, we will summarize the difference between the mentioned work and ours as follows:\n\n1. As discussed in Section 2.3, there is a tradeoff between expressiveness and generalization. Ngram models are expressive but do not generalize well due to data sparsity (or curse of dimensionality). Hutchinson et al. attempted to improve generalization using sparse plus low-rank Ngram models. By contrast, neural language models with standard Softmax generalize well but do not have enough expressiveness (as shown in Sections 2.1 and 2.2). This motivates our high-rank approach that improves expressiveness without sacrificing generalization. In a nutshell, the difference is that we aim to improve expressiveness while Hutchinson et al. aimed to improve generalization, and such a difference is a result of the intrinsic difference between neural and Ngram language models.\n\n2. There are two key differences between the MoD model (Neubig and Dyer) and ours. 
\n - Firstly, the motivations are totally different. Neubig and Dyer proposed to hybridize Ngram and neural language models to unify and benefit from both. In comparison, we identify the Softmax bottleneck problem using a matrix factorization formulation, and hence motivate our approach to break the bottleneck.\n - Secondly, our approach is end-to-end and achieves a good tradeoff between generalization and expressiveness (Cf. Section 2.3). In comparison, MoD might not generalize well since Ngram models, which have poor generalization, are included in the mixture. Moreover, the Ngram parameters are fixed in MoD, which limits its expressiveness.\nDespite the differences, note that it is possible to combine our work with theirs to further improve language modeling.\n", "Thank you for your extensive response.\n\nI fully agree with the fact that the baseline model's performance using MoS hyper-parameters is worse than MoS is a necessary condition. I wrongly assumed, that you also find that to be sufficient condition. I concluded that based on this sentence from your paper 'On the other hand, training AWD-LSTM using MoS hyper-parameters severely hurts the performance, which rules out hyper-parameters as the main source of improvement'. It is probably just an unfortunate sentence and you may want to paraphrase it.\n\nI found your arguments very persuasive. My only concern is that you use very small batch sizes that make training procedure very slow (is it more than a week on a single GPU for WT2?). Have you tried to train with batch sizes comparable with state-of-the-art model that you compare with?\n\nThank you again for making your statements more clear to me.", "Thanks for your comments. We believe our comparison is fair and MoS is indeed better than the baseline, with the following reasons:\n\nFirstly, the hyper-parameters for MoS are chosen by trial and error (i.e., graduate student descent) rather than an extensive hyper-parameter search such as the one used by [Melis et al]. The baseline AWD-LSTM uses a similar strategy in searching for hyper-parameters. Therefore, both MoS and the baseline are well tuned and comparable.\n\nSecondly, the baseline was the previous state-of-the-art (SOTA) (before our paper is released). It is usually true that one would not expect hyper-parameter tuning alone to substantially improve SOTA results on widely-studied benchmarks.\n\nThirdly, since MoS introduces another hidden layer, the number of parameters would significantly increase if we kept the embedding size and hidden size the same as the baseline, which would lead to an unfair comparison [Collins et al]. Thus, we have to trim the network size and modify related hyper-parameters accordingly. \n\nFourthly, the fact that the baseline with MoS hyper-parameters is worse than MoS is just a necessary condition of our argument that MoS is better than the baseline. (And we do not claim it is sufficient; sufficiency is proved by comparison with SOTA).\n\n[Melis et al] On the State of the Art of Evaluation in Neural Language Models\n[Collins et al] Capacity and Trainability in Recurrent Neural Networks\n", "In ablation studies, during comparison between MoS and the baseline (AWD-LSTM), the authors use the hyperparameters tuned on MoS for the evaluation of AWD-LSTM. It is claimed that the worse performance of AWD-LSTM under these hyperparameters implies that the better performance of MoS is not due to hyperparamters. 
I think a fairer comparison would be to search for the best hyperparameters individually for each task. Can the authors elaborate on the motivation for this approach? Also, was an evaluation of MoS performed using the AWD-LSTM hyperparameters provided by Merity et al 2017, and only tuning the MoS specific hyperparameters on top of that? I believe this would be a fairer comparison (if finding the best hyperparameters for each model individually using a comparable extensive grid search is too expensive).", "Thanks for your response. Just so it's clear, I want to reiterate that I really liked your paper and thought it was an original and worthwhile contribution. ", "This seems like an interesting idea and a great result!\n\nWhen I was reading what you wrote about potential easy fixes, I was reminded of the work from Brian Hutchinson where he shows how back-off smoothing is a low-rank approximation of the \"A\" matrix. (See, \"Low Rank Language Models for Small Training Sets\" by Hutchinson, Ostendorf, and Fazel.) In later work by the same authors they use a sparse plus low-rank model to remedy the deficiencies of the low-rank approximation. (See, \"A Sparse Plus Low Rank Maximum Entropy Language Model.\") That line of work ended up being impractical for large datasets and the solution that you have come up with seems much more promising. \n\nAnother paper that I think is related to your Mixture of Softmaxes is the Mixture of Distributions model from Neubig and Dyer's paper \"Generalizing and Hybridizing Count-based and Neural Language Models.\" They are talking about mixing n-gram distributions with a softmax distribution. Their model is another way of \"breaking the softmax bottleneck\" although they don't motivate it as such. " ]
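The first author response in the record above spells out the Mixture of Softmaxes parameterization, P(w | h) = sum_m P(m | h) P(w | m, h), where each component softmax is conditioned on the context. Below is a minimal numpy sketch of such an output layer; the tanh projections, weight shapes, and names are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mos_output(h, W_prior, W_proj, E, n_mix):
    """Mixture of Softmaxes: P(w|h) = sum_m P(m|h) * P(w|m,h).
    h: (d,) context vector; W_prior: (n_mix, d); W_proj: (n_mix, e, d); E: (V, e) word embeddings."""
    prior = softmax(W_prior @ h)              # mixture weights P(m|h)
    p = np.zeros(E.shape[0])
    for m in range(n_mix):
        h_m = np.tanh(W_proj[m] @ h)          # per-component context vector
        p += prior[m] * softmax(E @ h_m)      # component softmax over the vocabulary
    return p                                  # mixing happens in probability space, so the
                                              # log-prob matrix over contexts need not be low-rank

# toy usage with illustrative sizes (vocab 10k, hidden 280, 15 components)
V, d, e, n_mix = 10000, 280, 280, 15
p = mos_output(np.random.randn(d),
               np.random.randn(n_mix, d),
               0.1 * np.random.randn(n_mix, e, d),
               0.1 * np.random.randn(V, e), n_mix)
```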
[ -1, -1, 7, 7, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, 5, 4, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "B1ETY-KBM", "iclr_2018_HkwZSG-CZ", "iclr_2018_HkwZSG-CZ", "iclr_2018_HkwZSG-CZ", "HJ985q0fz", "iclr_2018_HkwZSG-CZ", "Hk_Hu9Rfz", "iclr_2018_HkwZSG-CZ", "iclr_2018_HkwZSG-CZ", "r1zYOdPgz", "B1v_izpxM", "SyTCJyqeM", "HkXiuNE-f", "SyTCJyqeM", "iclr_2018_HkwZSG-CZ", "B1VaeCHeM", "S10GwzSCb", "BJt665wkf", "SyRk1cPkM", "iclr_2018_HkwZSG-CZ", "rJFe7vDCW", "iclr_2018_HkwZSG-CZ" ]
iclr_2018_Sk2u1g-0-
Continuous Adaptation via Meta-Learning in Nonstationary and Competitive Environments
Ability to continuously learn and adapt from limited experience in nonstationary environments is an important milestone on the path towards general intelligence. In this paper, we cast the problem of continuous adaptation into the learning-to-learn framework. We develop a simple gradient-based meta-learning algorithm suitable for adaptation in dynamically changing and adversarial scenarios. Additionally, we design a new multi-agent competitive environment, RoboSumo, and define iterated adaptation games for testing various aspects of continuous adaptation. We demonstrate that meta-learning enables significantly more efficient adaptation than reactive baselines in the few-shot regime. Our experiments with a population of agents that learn and compete suggest that meta-learners are the fittest.
accepted-oral-papers
Looks like a great contribution to ICLR. Continuous adaptation in nonstationary (and competitive) environments is something that an intelligent agent acting in the real world would need to solve, and this paper suggests that a meta-learning approach may be quite appropriate for this task.
train
[ "ryBakJUlz", "BJiNow9gG", "SyK4pmsgG", "B19e7vSmf", "ByEAfwS7z", "B1YwMwHmf", "HyyfMDSQf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "This is a dense, rich, and impressive paper on rapid meta-learning. It is already highly polished, so I have mostly minor comments.\n\nRelated work: I think there is a distinction between continual and life-long learning, and I think that your proposed setup is a form of continual learning (see Ring ‘94/‘97). Given the proliferation of terminology for very related setups, I’d encourage you to reuse the old term.\n\nTerminology: I find it confusing which bits are “meta” and which are not, and the paper could gain clarity by making this consistent. In particular, it would be good to explicitly name the “meta-loss” (currently the unnamed triple expectation in (3)). By definition, then, the “meta-gradient” is the gradient of the meta-loss -- and not the one in (2), which is the gradient of the regular loss.\n\nNotation: there’s redundancy/inconsistency in the reward definition: pick either R_T or \\bold{r}, not both, and maybe include R_T in the task tuple definition? It is also confusing that \\mathcal{R} is a loss, not a reward (and is minimized) -- maybe use another symbol?\n\nA question about the importance sampling correction: given that this spans multiple (long) trajectories, don’t the correction weights become really small in practice? Do you have some ballpark numbers?\n\nTypos:\n- “event their learning”\n- “in such setting”\n- “experience to for”\n", "This paper proposed a gradient-based meta-learning approach for continuous adaptation in nonstationary and adversarial environment. The idea is to treat a nonstationary task as a sequence of stationary tasks and train agents to exploit the dependencies between consecutive tasks such that they can deal with nonstationarities at test time. The proposed method was evaluated based on a nonstationary locomotion and within a competitive multi agent setting. For the later, this paper specifically designed the RomoSumo environment and defined iterated adaptation games to test various aspect of adaptation strategies. The empirical results in both cases demonstrate the efficacy of the proposed meta-learned adaptation rules over the baselines in the few-short regime. The superiority of meta-learners is further justified on a population level.\n\nThe paper addressed a very important problem for general AI and it is well-written. Careful experiment designs, and thorough comparisons make the results conniving. I\n\nFurther comments:\n\n1. In the experiment the trajectory number seems very small, I wonder if directly using importance weight as shown in (9) will cause high variance in the performance?\n\n2. One of the assumption in this work is that trajectories from T_i contain some information about T_{i+1}, I wonder what will happen if the mutually information is very small between them (The extreme case is that two tasks are independent), will current method still perform well?\n\nP7, For the RL^2 policy, the authors mentioned that “…with a given environment (or an opponent), reset the state once the latter changes” How does the agent know when an environment (or opponent) changes? \n\nP10, “This suggests that it meta-learned a particular…” This sentence need to be rewritten.\n\nP10, ELO is undefined\n", "---- Summary ----\nThis paper addresses the problem of learning to operate in non-stationary environments, represented as a Markov chain of distinct tasks. 
The goal is to meta-learn updates that are optimal with respect to transitions between pairs of tasks, allowing for few-shot execution time adaptation that does not degrade as the environment diverges ever further from the training time task set.\n\nDuring learning, an inner loop iterates iterates over consecutive task pairs. For each pair, (T_i, T_{i+1}) trajectories sampled from T_i are used to construct a local policy that is then used to sample trajectories from T_{i+1}. By calculating the outer-loop policy gradient with respect to expectations of the trajectories sampled from T_i, and the trajectories sampled from T_{i+1} using the locally optimal inner-loop policy, the approach learns updates that are optimal with respect to the Markovian transitions between pairs of consecutive tasks.\n\nThe training time optimization algorithm requires multiple passes through a given sequence of tasks. Since this is not feasible at execution time, the trajectories calculated while solving task T_i are used to calculate updates for task T_{i+1} and these updates are importance weighted w.r.t the sampled trajectories' expectation under the final training-time policy.\n\nThe approach is evaluated on a pair of tasks. In the locomotion task, a six legged agent has to adapt to deal with an increasing inhibition to a pair of its legs. In the new RoboSumo task, agents have to adapt to effectively compete with increasingly competent components, that have been trained for longer periods of time via self-play.\n\nIt is clear that, in the locomotion task, the meta learning strategy maintains performance much more consistently than approaches that adapt through PPO-tracking, or implicitly by maintaining state in the RL^2 approach. This behaviour is less visible in the RoboSumo task (Fig 5.) but it does seem to present. Further experiments show that when the adaptation approaches are forced to fight against each other in 100 round iterated adaptation games, the meta learning strategy is dominant. However, the authors also do point out that this behaviour is highly dependent on the number of episodes allowed in each game, and when the agent can accumulate a large amount of evidence in a given environment the meta learning approach falls behind adaptation through tracking. The bias that allows the agent to learn effectively from few examples precludes it from effectively using many examples.\n\n---- Questions for author ----\nUpdates are performed from \\theta to \\phi_{i+1} rather than from \\phi_i to \\phi_{i+1}. Footnote 2 states that this was due to empirical observations of instability but it also necessitates the importance weight correction during execution time. I would like to know how the authors expect the sample in Eqn 9 to behave in much longer running scenarios, when \\pi_{\\phi} starts to diverge drastically from \\pi_{\\theta} but very few trajectories are available.\n\nThe spider-spider results in Fig. 6 do not support the argument that meta learning is better than PPO tracking in the few-shot regime. Do you have any idea of why this is?\n\n---- Nits ----\nThere is a slight muddiness of notation around the use of \\tau in lines 7 & 9 in of Algorithm 1. I think it should be edited to line up with the definition given in Eqn. 8. \n\nThe figures in this paper depend excessively and unnecessarily on color. They should be made more printer, and colorblind, friendly.\n\n---- Conclusion ----\nI think this paper would be a very worthy contribution to ICLR. 
Learning to adapt on the basis of few observations is an important prerequisite for real world agents, and this paper presents a reasonable approach backed up by a suite of informative evaluations. The quality of the writing is high, and the contributions are significant. However, this topic is very much outside of my realm of expertise and I am unfamiliar with the related work, so I am assigning my review a low confidence.", "Thank you for carefully reading the paper and the thoughtful comments. We answer the questions below:\n\nRelated work:\nWe agree that continuous adaptation is indeed a variation of continual learning. The updated version of the paper now points this out.\n\nTerminology:\nThank you for suggestions. We have improved our terminology and notation throughout the paper, explicitly named the meta-loss, and renamed the inner loop gradient update (as given in Eq. 2) from “meta-update” to “adaptation update”.\n\nNotation:\nInitially, \\mathcal{R}_T was standing for the risk (i.e., the expected loss, as commonly used in machine learning literature). This notation was indeed a bit confusing in the RL context where R is often used for rewards, so we have altered it.\n\nImportance sampling:\nGood point. Eq. 9 gives a general form of the estimator for \\phi. In practice, the adaptation gradient (i.e., the gradient of L_{T_{i-1}} as now given in Eq. 9) decouples into a sum over time steps, so we compute importance weights for each time step (i.e., for each action) separately. The effective sample size in our experiments was no less than 20% of the given sample size. (Also, see our answer to a similar question asked by R2.)", "Thank you for carefully reading the paper and the thoughtful comments. We answer the questions below:\n\n1. Good point, the estimator in Eq. 9 may cause high variance in \\phi. Three points to note:\n(i) We used importance weight correction only at execution time (indeed, the variance of the estimator hindered learning in such a regime; see footnote 2).\n(ii) Even though Eq. 9 shows the sum over K episodes, each episode consists of multiple time steps (typically 100-500 time steps, depending on the experiment) each of which is treated as a separate sample and gets an importance weight. Even with a limited number of episodes, we get quite a substantial number of time steps (with 3 episodes of 500 steps each, we get 1,500 time steps). In our experiments, the effective sample size was always reasonable (more than 20%), which worked at execution time (but not for learning).\n(iii) To compute adaptation updates using Eq. 9 in practice, meta-learners not only used the immediate past episode, but multiple previous episodes (see section 5.1, paragraph 2), which increased the number of samples and further helped to reduce the variance of the estimator.\n\n2. No, the method is not designed to work in the regime with no mutual information between T_i and T_{i+1}. Our meta-learning approach targets to solve a zero-shot problem (i.e., do well at T_{i+1} without previous interaction experience with that particular task) knowing that tasks are sequentially dependent. If the tasks are independent, having some initial interaction with each new task is perhaps the only way to solve the problem.\n\n3. In our setup, the number of episodes after which the environment/opponent changes is fixed. Moreover, we assume that the agent knows a priori the number of episodes or rounds after which the environment or opponent changes. 
This information is directly used by RL^2.", "Thank you for carefully reading the paper and thoughtful questions. We have improved the notation and the color-coding in the figures. We answer the questions below:\n\n- When \\pi_{\\phi} significantly diverges from \\pi_{\\theta}, the estimate given Eq. 9 would become of very high variance. In our setup, \\pi_{\\phi} was always at most a few gradient steps away from \\pi_{\\theta} in the parameter space. This gave difference in behaviors while keeping the effective sample size reasonable (always more than 20%). Much longer running scenarios may require a better estimator (i.e., of lower variance) which should also take into account the sequential structure of the tasks (e.g., a particle filter).\n\n- Good point, different methods yielded similar performance in the spider-spider experiments. This is because the agents tended to learn very similar behaviors regardless of the algorithm. The spectrum of behaviors learned by the agent highly depends on the morphology. From videos, we noticed that spiders always picked up a very particular fighting style, using front legs to kick the opponent and back legs to stabilize the posture, and never altered it during adaptation. This could be due to, perhaps, optimality of such behavior, but we did not further quantify this effect.\n", "We thank reviewers for their time and thoughtful feedback.\n\nWe have updated the submission: improved notation throughout the paper, resolved ambiguities, and improved color-coding in the plots. We answer specific questions raised in the reviews by separately replying to each of them." ]
[ 8, 7, 9, -1, -1, -1, -1 ]
[ 4, 4, 2, -1, -1, -1, -1 ]
[ "iclr_2018_Sk2u1g-0-", "iclr_2018_Sk2u1g-0-", "iclr_2018_Sk2u1g-0-", "ryBakJUlz", "BJiNow9gG", "SyK4pmsgG", "iclr_2018_Sk2u1g-0-" ]
iclr_2018_S1JHhv6TW
Boosting Dilated Convolutional Networks with Mixed Tensor Decompositions
The driving force behind deep networks is their ability to compactly represent rich classes of functions. The primary notion for formally reasoning about this phenomenon is expressive efficiency, which refers to a situation where one network must grow unfeasibly large in order to replicate functions of another. To date, expressive efficiency analyses focused on the architectural feature of depth, showing that deep networks are representationally superior to shallow ones. In this paper we study the expressive efficiency brought forth by connectivity, motivated by the observation that modern networks interconnect their layers in elaborate ways. We focus on dilated convolutional networks, a family of deep models delivering state of the art performance in sequence processing tasks. By introducing and analyzing the concept of mixed tensor decompositions, we prove that interconnecting dilated convolutional networks can lead to expressive efficiency. In particular, we show that even a single connection between intermediate layers can already lead to an almost quadratic gap, which in large-scale settings typically makes the difference between a model that is practical and one that is not. Empirical evaluation demonstrates how the expressive efficiency of connectivity, similarly to that of depth, translates into gains in accuracy. This leads us to believe that expressive efficiency may serve a key role in developing new tools for deep network design.
accepted-oral-papers
This paper proposes improvements to WaveNet-style dilated convolutional networks by showing that increasing connectivity yields superior models compared to simply increasing network size. The reviewers found both the mathematical treatment of the topic and the experiments to be of higher quality than in most papers they reviewed, and were unanimous in recommending the paper for acceptance at the conference. I see no reason not to give it my strongest recommendation as well.
train
[ "ryLYFULlM", "SyEtTPclG", "B1VRu6hbz", "H1RTfV_Gz", "rJXMGNmWG", "r1zdyNQZf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "This paper theoretically validates that interconnecting networks with different dilations can lead to expressive efficiency, which indicates an interesting phenomenon that connectivity is able to enhance the expressiveness of deep networks. A key technical tool is a mixed tensor decomposition, which is shown to have representational advantage over the individual hierarchical decompositions it comprises.\n\nPros:\n\nExisting work have focused on understanding the depth of the networks and established that deep networks are expressively efficient with respect to shallow ones. On the other hand, this paper focused on the architectural feature of connectivity. The problem is fundamentally important and its theoretical development is solid. The conclusion is useful for developing new tools for deep network design. \n\nCons:\n\nIn order to show that the mixed dilated convolutional network is expressively efficient w.r.t. the corresponding individual dilated convolutional network, the authors prove it in two steps: Proposition 1 and Proposition 2. However, in the proof of Proposition 2 (see Theorem 1), the authors only focus on a particular case of convolutional arithmetic circuits, i.e., $g(a,b)= a*b$. In the experiments, see line 4 of page 9, the authors instead used ReLU activation $g(a, b)= max{a+b, 0}$. Can authors provide some justifications of such different choices of activation functions? It would be great if authors can discuss how to generate the activation function in Theorem 1 to more general cases.\n\n\n", "To date the theoretical advantage of deep learning has focused on the concept of \"expressive efficiency\" where one network must grow much larger to replicate functions that another \"more efficient\" network can produce. This has focused so far on depth (i.e. shallow networks have to grow much larger than deeper networks to express the same set of networks)\n\nThe authors explore another dimension here, namely that of \"connectivity\". They study dilated convolutional networks and show that intertwining two dilated convolutional networks A and B at various stages (formalized via mixed tensor decompositions) it is more expressively efficient than not intertwining. \n\nThe authors' experiments support their theory showing that their mixed strategy leads to gains over a vanilla dilated convolutional net.\n\nI found the paper very well written despite its level of mathematical depth (the authors provide many helpful pictures) and strongly recommend accepting this paper.\n\n", "(Emergency review—I have no special knowledge of the subfield, and I was told a cursory review was OK, but the paper was fascinating and I ended up reading fairly carefully)\n\nThis paper does many things. It adds to a series of publications that analyze deep network architectures as parameterized decompositions of intractably large tensors (themselves the result of discretizing the entire input-output space of the network), this time focusing on the WaveNet architecture for autoregressive sequence modeling. 
It shows (first theoretically, then empirically) that the WaveNet's structural assumption of a single (perfect) binary tree is holding it back, and that WaveNet-like architectures with more complex mixed tree structures perform better.\nThroughout the subject is treated with a high level of mathematical rigor, while relegating proofs and detailed walkthrough explanations to lengthy appendices which I did not have time to review.\n\nSome things I noticed:\n- The notation used is mostly consistent, except for some variation between dots (e.g., in Eq. 2) and bra-kets (in Fig. 1) for inner product. While I think I'm in the minority here, I'd personally be comfortable with going a little bit further with index notation and avoiding the cohabitation of tensor and vector notation styles by using indices even for dot products; that said, either kind of vector notation (dots or brakets) is certainly acceptable too.\n- There are a couple more nomenclature things that might trip up those of us in the deep learning crowd—we're used to referring to \"axes\" or \"dimensions\" of a tensor, but the tensor-analysis world apparently says \"modes\" (and this is called out once in a parenthetical). As \"dimension\" means something different to tensor folks (what DLers usually call the \"size\" of an axis), perhaps standardizing on the shared term \"axes\" would be worthwhile? Not sure if there's a distinction in the tensor world between the words \"axis\" and \"mode.\"\n- The baseline WaveNet is only somewhat well described as \"convolutional;\" the underlying network unit is not always a \"size-2 convolution\" (except for certain values of g) and the \"size-1 convolutions\" that make it up are simply linear transformations. While the WaveNet derives from convolutional sequence architectures (and the choices of g explored in the original paper derive from the CNN literature) it has at least as much in common with recursive/tree-structured network architectures like TreeLSTMs and RNTNs. In fact, the WaveNet is a special case of a recursive neural network with a particular composition function *and a fixed (perfect) binary tree structure.* As this last condition is relaxed in the present paper, making the space of networks under analysis more similar to the traditional space of recursive NNs, it might be worth mentioning this \"alternative history\" of the WaveNet.\n- The choice of mixture nodes in Fig. 3 is a little unfortunate, because it includes all possible mixture nodes and doesn't make it as clear as the text does that a subset of these nodes can be chosen in the general case.\n- While I couldn't follow some of Section 5, I'm a little confused that Theorem 1 appears at first glance to apply only to a non-generalized decomposition (a specific choice of g).\n- Caffe would not have been my first choice for such a complex, hierarchically structure architecture; I imagine it forced the authors to write a significant amount of custom code.", "We thank reviewer for the support! Thank you also for the useful feedback, which will be taken into account in the final version of the manuscript.\n\nSome comments follow:\n- When treating tensors, we currently employ the notations and terms customary in the tensor analysis community (cf. [1]). Following reviewer's suggestions, we will add to the preliminaries in appendix A a table translating between different terminologies.\n- As reviewer points out, the tensor decomposition framework we use to analyze WaveNet applies more generally to recurrent architectures. 
This is a direction analyzed in a follow-up work, which highlights the connections referenced by reviewer.\n- We are currently migrating our code from Caffe to TensorFlow - the latter indeed admits much simpler implementation.\n\n[1] Wolfgang Hackbusch. Tensor Spaces and Numerical Tensor Calculus. Springer textbook.", "We thank reviewer for the feedback!\n\nAs stated in footnote 9, one may adapt our treatment of Proposition 2 to a different activation (choice of $g(a,b)$) by deriving a result analogous to theorem 1, i.e. by establishing upper and lower bounds on matricization ranks brought forth by a tree decomposition with the respective operator $g(a,b)$. Such bounds were derived in [1] for the choice $g(a,b)=max{a+b,0}$ corresponding to ReLU activation. However, since [1] only treats specific mode trees T and index sets I, its bounds cannot be readily used in place of theorem 1.\n\nIn terms of our experiments, we present results for the setting $g(a,b)=max{a+b,0}$ (ReLU) merely due to its popularity in practice. The exact same trends occur under the choice $g(a.b)=a*b$ (convolutional arithmetic circuits). We have added a footnote to the paper indicating this.\n\n[1] Cohen and Shashua. Convolutional Rectifier Networks as Generalized Tensor Decompositions. ICML 2016.", "We thank reviewer for the support!" ]
[ 7, 9, 8, -1, -1, -1 ]
[ 4, 4, 3, -1, -1, -1 ]
[ "iclr_2018_S1JHhv6TW", "iclr_2018_S1JHhv6TW", "iclr_2018_S1JHhv6TW", "B1VRu6hbz", "ryLYFULlM", "SyEtTPclG" ]
iclr_2018_HkfXMz-Ab
Neural Sketch Learning for Conditional Program Generation
We study the problem of generating source code in a strongly typed, Java-like programming language, given a label (for example a set of API calls or types) carrying a small amount of information about the code that is desired. The generated programs are expected to respect a "realistic" relationship between programs and labels, as exemplified by a corpus of labeled programs available during training. Two challenges in such *conditional program generation* are that the generated programs must satisfy a rich set of syntactic and semantic constraints, and that source code contains many low-level features that impede learning. We address these problems by training a neural generator not on code but on *program sketches*, or models of program syntax that abstract out names and operations that do not generalize across programs. During generation, we infer a posterior distribution over sketches, then concretize samples from this distribution into type-safe programs using combinatorial techniques. We implement our ideas in a system for generating API-heavy Java code, and show that it can often predict the entire body of a method given just a few API calls or data types that appear in the method.
accepted-oral-papers
This paper presents a novel and interesting sketch-based approach to conditional program generation. I will say upfront that it is worthy of acceptance, based on its contribution and the positivity of the reviews. I am annoyed to see that the review process has not called out the authors' lack of references to the decent body of existing work on generating structured outputs, on neural sketch-based program synthesis, and on generating under grammatical constraints. The authors need look no further than the proceedings of the *ACL conferences of the last few years to find papers such as: * Dyer, Chris, et al. "Recurrent Neural Network Grammars." Proceedings of NAACL-HLT (2016). * Kuncoro, Adhiguna, et al. "What Do Recurrent Neural Network Grammars Learn About Syntax?" Proceedings of EACL (2017). * Yin, Pengcheng, and Graham Neubig. "A Syntactic Neural Model for General-Purpose Code Generation." Proceedings of ACL (2017). * Rabinovich, Maxim, Mitchell Stern, and Dan Klein. "Abstract Syntax Networks for Code Generation and Semantic Parsing." Proceedings of ACL (2017). Or other work on neural program synthesis with sketch-based methods: * Gaunt, Alexander L., et al. "TerPreT: A Probabilistic Programming Language for Program Induction." arXiv preprint arXiv:1608.04428 (2016). * Riedel, Sebastian, Matko Bosnjak, and Tim Rocktäschel. "Programming with a Differentiable Forth Interpreter." CoRR, abs/1605.06640 (2016). Likewise, the references to the non-neural program synthesis and induction literature are thin, and the work is poorly situated as a result. It is a disappointing but mild failure of the scientific process underlying peer review for this conference that such comments were not made. The authors are encouraged to take heed of these comments in preparing their final revision, but I will not object to the acceptance of the paper on these grounds, as the methods proposed therein are truly interesting and exciting.
train
[ "Sy9Sau_xf", "rJ69A1Kxf", "SyhuQnyZz", "Bk5zgF6mf", "SJoUWF_bG", "HyUkWY_Wz", "ryIaJKO-f" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "The authors introduce an algorithm in the subfield of conditional program generation that is able to create programs in a rich java like programming language. In this setting, they propose an algorithm based on sketches- abstractions of programs that capture the structure but discard program specific information that is not generalizable such as variable names. Conditioned on information such as type specification or keywords of a method they generate the method's body from the trained sketches. \n \nPositives:\n \n\t•\tNovel algorithm and addition of rich java like language in subfield of 'conditional program generation' proposed\n\t•\tVery good abstract: It explains high level overview of topic and sets it into context plus gives a sketch of the algorithm and presents the positive results.\n\t•\tExcellently structured and presented paper\n \n\t•\tMotivation given in form of relevant applications and mention that it is relatively unstudied\n\t•\tThe hypothesis/ the papers goal is clearly stated. It is introduced with 'We ask' followed by two well formulated lines that make up the hypothesis. It is repeated multiple times throughout the paper. Every mention introduces either a new argument on why this is necessary or sets it in contrast to other learners, clearly stating discrepancies.\n\t•\tExplanations are exceptionally well done: terms that might not be familiar to the reader are explained. This is true for mathematical aspects as well as program generating specific terms. Examples are given where appropriate in a clear and coherent manner\n\t•\tProblem statement well defined mathematically and understandable for a broad audience\n\t•\tMentioning of failures and limitations demonstrates a realistic view on the project\n\t•\tComplexity and time analysis provided\n\t•\tPaper written so that it's easy for a reader to implement the methods\n\t•\tDetailed descriptions of all instantiations even parameters and comparison methods\n\t•\tSystem specified\n\t•\tValidation method specified\n\t•\tData and repository, as well as cleaning process provided\n\t•\tEvery figure and plot is well explained and interpreted\n\t•\tLarge successful evaluation section provided\n\t•\tMany different evaluation measures defined to measure different properties of the project\n\t•\tDifferent observability modes\n\t•\tEvaluation against most compatible methods from other sources \n\t•\t Results are in line with hypothesis\n\t•\tThorough appendix clearing any open questions \n \nIt would have been good to have a summary/conclusion/future work section\n \nSUMMARY: ACCEPT. The authors present a very intriguing novel approach that in a clear and coherent way. The approach is thoroughly explained for a large audience. The task itself is interesting and novel. The large evaluation section that discusses many different properties is a further indication that this approach is not only novel but also very promising. Even though no conclusive section is provided, the paper is not missing any information.\n", "This paper aims to synthesize programs in a Java-like language from a task description (X) that includes some names and types of the components that should be used in the program. The paper argues that it is too difficult to map directly from the description to a full program, so it instead formulates the synthesis in two parts. First, the description is mapped to a \"sketch\" (Y) containing high level program structure but no concrete details about, e.g., variable names. 
Afterwards, the sketch is converted into a full program (Prog) by stochastically filling in the abstract parts of the sketch with concrete instantiations.\n\nThe paper presents an abstraction method for converting a program into a sketch, a stochastic encoder-decoder model for converting descriptions to trees, and rejection sampling-like approach for converting sketches to programs. Experimentally, it is shown that using sketches as an intermediate abstraction outperforms directly mapping to the program AST. The data is derived from an online repository of ~1500 Android apps, and from that were extracted ~150k methods, which makes the data very respectable in terms of realisticness and scale. This is one of the strongest points of the paper.\n\nOne point I found confusing is how exactly the Combinatorial Concretization step works. Am I correct in understanding that this step depends only on Y, and that given Y, Prog is conditionally independent of X? If this is correct, how many Progs are consistent with a typical Y? Some additional discussion of why no learning is required for the P(Prog | Y) step would be appreciated.\n\nI'm also curious whether using a stochastic latent variable (Z) is necessary. Would the approach work as well using a more standard encoder-decoder model with determinstic Z?\n\nSome discussion of Grammar Variational Autoencoder (Kusner et al) would probably be appropriate.\n\nOverall, I really like the fact that this paper is aiming to do program synthesis on programs that are more like those found \"in the wild\". While the general pattern of mapping a specification to abstraction with a neural net and then mapping the abstraction to a full program with a combinatorial technique is not necessarily novel, I think this paper adds an interesting new take on the pattern (it has a very different abstraction than say, DeepCoder), and this paper is one of the more interesting recent papers on program synthesis using machine learning techniques, in my opinion.\n", "This is a very well-written and nicely structured paper that tackles the problem of generating/inferring code given an incomplete description (sketch) of the task to be achieved. This is a novel contribution to existing machine learning approaches to automated programming that is achieved by training on a large corpus of Android apps. The combination of the proposed technique and leveraging of real data are a substantial strength of the work compared to many approaches that have come previously.\n\nThis paper has many strengths:\n1) The writing is clear, and the paper is well-motivated\n2) The proposed algorithm is described in excellent detail, which is essential to reproducibility\n3) As stated previously, the approach is validated with a large number of real Android projects\n4) The fact that the language generated is non-trivial (Java-like) is a substantial plus\n5) Good discussion of limitations\n\nOverall, this paper is a valuable addition to the empirical software engineering community, and a nice break from more traditional approaches of learning abstract syntax trees.", "We have uploaded the final version of the paper making the following changes:\n1. Clarification about learning the distribution P(Prog | Y).\n2. Discussion about the related work on Grammar VAE (Kusner et al).\n3. Addition of a conclusion section to the paper.", "Thank you for your feedback about the paper. We will add a conclusion section to the final version of the paper.", "Thank you for your feedback about the paper. 
We answer your specific questions below.\n\nQuestion: Am I correct in understanding that [Combinatorial Concretization] step depends only on Y, and that given Y, Prog is conditionally independent of X? If this is correct, how many Progs are consistent with a typical Y?\n\nAnswer: Yes, Prog is conditionally independent of X given a sketch Y. In theory, there may be an infinite number of Progs for every Y. A simple example is two Progs that differ only in variable names, thereby corresponding to the same Y; for another example, there can be very many expressions that match the type of an API method argument. However, in practice, we use certain heuristics to limit the space of Progs from a given Y (these heuristics are abstractly captured by the distribution P(Prog | Y). In particular, these heuristics prioritize smaller, simpler programs over complex ones, and name local variables in a canonical way.\n\nWhile we didn't collect this data systematically, our experience with the system suggests that under the heuristics actually implemented in it, a typical Y leads to only ~5-10 distinct Progs in our experiments. We will collect this data more thoroughly and add it to the paper. \n\nQuestion: Some additional discussion of why no learning is required for the P(Prog | Y) step would be appreciated.\n\nAnswer: In principle, this step could be made data-driven; however, the resulting learning problem would be very difficult. This is because a single sketch used for training can correspond to many training programs that only differ in superficial details (for example local variable names). Learning to decide which differences between programs are superficial and which are not, solely by looking at the syntax of programs, is hard. In contrast, our approach of heuristically choosing P(Prog | Y) utilizes our domain knowledge of language semantics (for example, that local variable names do not matter, and that some algebraic expressions are semantically equivalent). This knowledge allows us to limit the set of programs that we end up generating. We will clarify this in more detail in the paper. \n\n\nQuestion: I'm also curious whether using a stochastic latent variable (Z) is necessary. Would the approach work as well using a more standard encoder-decoder model with deterministic Z?\n\nAnswer: The randomness associated with the latent variable Z serves as a way to regularize the learning process (a similar argument is made in the context of VAEs for the stochastic latent variable used during VAE learning). We were concerned that without the stochasticity (i.e., with a deterministic Z), training the model would be more likely to be affected by overfitting. Practically speaking, the stochasticity also serves as a way to ensure that we can generate a wide variety of possible programs from a given X. If Z was not random, a particular set of labels X will always result in exactly the same value of Z.\n\nComment: Some discussion of Grammar Variational Autoencoder (Kusner et al) would probably be appropriate.\n\nAnswer: Kusner et al’s work proposes a VAE for context-free grammars. Being an auto-encoder it is a generative model, but it is not a conditional model such as ours. In their application towards synthesizing molecular structures, given a particular molecular structure, their model can be used to search the latent space for similar valid structures. 
In our setting, however, we are not given a sketch but only labels about the sketch, and our task is to learn a conditional model that can predict a whole sketch given labels.\n\nWe will add the discussion about this work in the final version of the paper.\n", "Thank you for your feedback about the paper." ]
[ 7, 8, 7, -1, -1, -1, -1 ]
[ 2, 4, 3, -1, -1, -1, -1 ]
[ "iclr_2018_HkfXMz-Ab", "iclr_2018_HkfXMz-Ab", "iclr_2018_HkfXMz-Ab", "iclr_2018_HkfXMz-Ab", "Sy9Sau_xf", "rJ69A1Kxf", "SyhuQnyZz" ]
iclr_2018_Hk99zCeAb
Progressive Growing of GANs for Improved Quality, Stability, and Variation
We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.
accepted-oral-papers
The main contribution of the paper is a technique for training GANs which consists in progressively increasing the resolution of generated images by gradually enabling layers in the generator and the discriminator. The method is novel, and outperforms the state of the art in adversarial image generation both quantitatively and qualitatively. The evaluation is carried out on several datasets; it also contains an ablation study showing the effect of contributions (I recommend that the authors follow the suggestions of AnonReviewer2 and further improve it). Finally, the source code is released which should facilitate the reproducibility of the results and further progress in the field. AnonReviewer1 has noted that the authors have revealed their names through GitHub, thus violating the double-blind submission requirement of ICLR; if not for this issue, the reviewer’s rating would have been 8. While these concerns should be taken very seriously, I believe that in this particular case the paper should still be accepted for the following reasons: 1) the double blind rule is new for ICLR this year, and posting the paper on arxiv is allowed; 2) the author list has been revealed through the supplementary material (Github page) rather than the paper itself; 3) all reviewers agree on the high impact of the paper, so having it presented and discussed at the conference would be very useful for the community.
train
[ "BJ8NesygM", "rJ205zPlG", "S15uG36lG", "B1oIUPX4M", "B1N1nJxGf", "r1YixQVZM", "Bk_JafhgM", "rk0s9ajgM", "rJGV53FyM", "SJqOmCHJf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "public", "author", "public" ]
[ "The paper describes a number of modifications of GAN training that enable synthesis of high-resolution images. The modifications also support more automated longer-term training, and increasing variability in the results.\n\nThe key modification is progressive growing. First, a GAN is trained for image synthesis at very low resolution. Then a layer that refines the resolution is progressively faded in. (More accurately, a corresponding pair of layers, one in the generation and one in the discriminator.) This progressive fading in of layers is repeated, one octave at a time, until the desired resolution is reached.\n\nAnother modification reported in the paper is a simple parameter-free minibatch summary statistic feature that is reported to increase variation. Finally, the paper describes simple schemes for initialization and feature normalization that are reported to be more effective than commonly used initializers and batchnorm.\n\nIt's a very nice paper. It does share the \"bag of tricks\" nature of many GAN papers, but as such it is better than most of the lot. I appreciate that some of the tricks actually simplify training, and most are conceptually reasonable. The paper is also very well written.\n\nMy quibbles are minor. First, I would discuss [Huang et al., CVPR 2017] and the following paper more prominently:\n\n[Zhang et al., ICCV 2017] H. Zhang, T. Xu, H. Li, S. Zhang, X. Wang, X. Huang, and D. Metaxas. StackGAN: Text to photo-realistic image synthesis with stacked generative adversarial networks. In ICCV, 2017.\n\nI couldn't find a discussion of [Huang et al., CVPR 2017] at all, although it's in the bibliography. (Perhaps I overlooked the discussion.) And [Zhang et al., ICCV 2017] is quite closely related, since it also tackles high-resolution synthesis via multi-scale refinement. These papers don't diminish the submission, but they should be clearly acknowledged and the contribution of the submission relative to these prior works should be discussed.\n\nAlso, [Rabin et al., 2011] is cited in Section 5 but I couldn't find it in the bibliography.\n", "Before the actual review I must mention that the authors provide links in the paper that immediately disclose their identity (for instance, the github link). This is a violation of double-blindness, and in any established double-blind conference this would be a clear reason for automatic rejection. In case of ICLR, double-blindness is new and not very well described in the call for papers, so I guess it’s up to ACs/PCs to decide. I would vote for rejection. I understand in the age of arxiv and social media double-blindness is often violated in some way, but here the authors do not seem to care at all. \n\n—\n\nThe paper proposes a collections of techniques for improving the performance of Generative Adversarial Networks (GANs). The key contribution is a principled multi-scale approach, where in the process of training both the generator and the discriminator are made progressively deeper and operate on progressively larger images. The proposed version of GANs allows generating images of high resolution (up to 1024x1024) and high visual quality.\n\nPros:\n1) The visual quality of the results is very good, both on faces and on objects from the LSUN dataset. This is a large and clear improvement compared to existing GANs.\n2) The authors perform a thorough quantitative evaluation, demonstrating the value of the proposed approach. 
They also introduce a new metric - Sliced Wasserstein Distance.\n3) The authors perform an ablation study illustrating the value of each of the proposed modifications.\n\nCons:\n1) The paper only shows results on image generation from random noise. The evaluation of this task is notoriously difficult, up to impossible (Theis et al., ICLR 2016). The authors put lots of effort in the evaluation, but still:\n- it is unclear what is the average quality of the samples - a human study might help\n- it is unclear to which extent the images are copied from the training set. The authors show some nearest neighbors from the training set, but very few and in the pixel space, which is known to be pointless (again, Theis et al. 2016). Interpolations in the latent space is a good experiment, but in fact the interpolations do not look that great on LSUN\n- it is unclear if the model covers the full diversity of images (mode collapse)\nIt would be more convincing to demonstrate some practical results, for instance inpainting, superresolution, unsupervised or semi-supervised learning, etc.\n2) The general idea of multi-scale generation is not new, and has been investigated for instance in LapGAN (Denton et al., ICLR 2015) or StackGAN (Zhang et al., ICCV2017, arxiv 2017). The authors should properly discuss this. \n3) The authors mention “unhealthy competition” between the discriminator and the generator several times, but it is not quite clear what exactly they mean - a more specific definition would be useful.\n\n(This conclusion does not take the anonymity violation into account. Because of the violation I believe the paper should be rejected. Of course I am open to discussions with ACs/PCs.) \nTo conclude, the paper demonstrates a breakthrough in the quality and resolution of images generated with a GAN. The experimental evaluation is thorough, to the degree allowed by the poorly defined task of generating images from random noise. Results on some downstream tasks, such as inpainting, image processing or un-/semi-supervised learning would make the paper more convincing. Still, the paper should definitely be accepted for publication. Normally, I would give the paper a rating of 8.", "This paper proposes a number of ideas for improving GANs for image generation, highlighting in particular a curriculum learning strategy to progressively increase the resolution of the generated images, resulting in GAN generators capable of producing samples with unprecedented resolution and visual fidelity.\n\n\nPros:\n\nThe paper is well-written and the results speak for themselves! Qualitatively they’re an impressive and significant improvement over previous results from GANs and other generative models. The latent space interpolations shown in the video (especially on CelebA-HQ) demonstrate that the generator can smoothly transition between modes and convince me that it isn’t simply memorizing the training data. (Though I think this issue could be addressed a bit better -- see below.) Though quantification of GAN performance is difficult and rapidly evolving, there is a lot of quantitative analysis all pointing to significant improvements over previous methods.\n\nA number of new tricks are proposed, with the ablation study (tab 1 + fig 3) and learning curves (fig 4) giving insight into their effects on performance. 
Though the field is moving quickly, I expect that several of these tricks will be broadly adopted in future work at least in the short to medium term.\n\nThe training code and data are released.\n\n\nCons/Suggestions:\n\nIt would be nice to see overfitting addressed and quantified in some way. For example, the proposed SWD metric could be recomputed both for the training and for a held-out validation/test set, with the difference between the two scores measuring the degree of overfitting. Similarly, Danihelka et al. [1] show that an independently trained Wasserstein critic (with one critic trained on G samples vs. train samples, and another trained on G samples vs. val samples) can be used to measure overfitting. Another way to go could be to generate a large number of samples and show the nearest neighbor for a few training set samples and for a few val set samples. Doing this in pixel space may not work well especially at the higher resolutions, but maybe a distance function in the space of some high-level hidden layer of a trained discriminator could show good semantic nearest neighbors.\n\nThe proposed SWD metric is interesting and computationally convenient, but it’s not clear to me that it’s an improvement over previous metrics like the independent Wasserstein critic proposed in [1]. In particular the use of 7x7 patches would seem to limit the metric’s ability to capture the extent to which global structure has been learned, even though the patches are extracted at multiple levels of the Laplacian pyramid.\n\nThe ablation study (tab 1 + fig 3) leaves me somewhat unsure which tricks contribute the most to the final performance improvement over previous work. Visually, the biggest individual improvement is easily when going from (c) to (d), which adds the “Revised training parameters”, with the improvement from (a) to (b) which adds the highlighted progressive training schedule appearing relatively minor in comparison. However, I realize the former large improvement is due to the arbitrary ordering of the additions in the ablation study, with the small minibatch addition in (c) crippling results on its own. Ablation studies with large numbers of tweaks are always difficult and this one is welcome and useful despite the ambiguity.\n\nOn a related note, it would be nice if there were more details on the “revised training hyperparameters” improvement ((d) in the ablation study) -- which training hyperparameters are adjusted, and how?\n\n“LAPGAN” (Denton et al., 2015) should be cited as related work: LAPGAN’s idea of using a separate generator/discriminator at each level of a Laplacian pyramid conditioned on the previous level is quite relevant to the progressive training idea proposed here. Currently the paper is only incorrectly cited as “DCGAN” in a results table -- this should be fixed as well.\n\n\nOverall, this is a well-written paper with striking results and a solid effort to analyze, ablate, and quantify the effect of each of the many new techniques proposed. It’s likely that the paper will have a lot of impact on future GAN work.\n\n\n[1] Danihelka et al., “Comparison of Maximum Likelihood and GAN-based training of Real NVPs” https://arxiv.org/abs/1705.05263", "I totally agree with the last point: it would have been great if the organizers provided a more detailed CFP and recommended best practices.\n\nHowever, I disagree with the other two points:\nFirst, I know arxiv, talks, blogs, etc are permitted. But directly linking the author list from the paper is generally not. 
\nSecond, there are ways to host data anonymously and many ICLR authors (including myself) found some. If in doubt, the right way would be to ask the organizers, not breach the anonymity directly.\n\nIn the end, the decision is on ACs and PCs.", "We have uploaded a new revision of the paper, addressing the concerns brought up in the reviews. The detailed list of changes is as follows:\n\n- Revise the nearest neighbors in Figure 10 by using VGG feature-space distance and showing 5 best matches for each generated image.\n- Report average CIFAR10 inception score over 10 random initializations in Table 3, in addition to the highest achieved score.\n- Add discussion of [Denton et al. 2015], [Huang et al. 2016], and [Zhang et al. 2017] in Section 2.\n- Fix [Anonymous 2017], [Rabin et al. 2011], and [Radford et al. 2015] references.\n- Update \"(h) Converged\" case in Table 1 and Figure 3, as well as LSUN images in Figures 12-17 using networks that were trained longer.\n- Report SWD numbers for CelebA-HQ and LSUN categories in Figures 11-17.\n- Typo fixes and minor clarifications.\n\nWe would like to thank the reviewers again for useful feedback.", "We will fix all the references and add related discussion to the paper. \n\nWe have, in fact, obtained more sensible nearest-neighbor results using a feature-space distance metric for image comparison. We will update Fig. 10 accordingly and include multiple nearest neighbors for each generated image. The conclusion still stands that the generated images have no obvious source images in the training set. The hyperparameter changes related to Table 1 (d) are listed in Appendix A.2. \n\nWe acknowledge R1’s concerns about anonymity and feel a few words are in order.\n\nFirst, the call for papers explicitly states that arXiv and other such public forums are permitted. While we agree that full anonymity is valuable, we feel that one cannot realistically expect to achieve it perfectly, because so many potential reviewers subscribe to the arXiv announce list and articles from that list are inevitably discussed in social media.\n\nSecond, the OpenReview submission site does not allow supplemental videos or code, forcing one to use services like YouTube and GitHub, neither of which allows anonymous submissions. In our opinion, that leaves two possibilities: 1) fake accounts, or 2) breach of anonymity. We thought about this long and hard and chose #2 because #1 seems fraught with many more problems -- and would perhaps also seem like a strange requirement. While we anonymized the paper and the video to the extent possible within these bounds, we regrettably forgot a full author list in the readme at GitHub. We sincerely apologize for this oversight.\n\nIn order to avoid this kind of awkwardness in the future, we feel that explicit guidance in the CFP -- including suggested best practices for submitting videos, code, and data -- would be helpful and greatly facilitate the review process.", "We apologize that the source code is somewhat convoluted in this regard.\n\nOur implementation performs weight initialization in two distinct phases. It first initializes the weights using Lasagne's standard He initializer and then rescales them in accordance to Section 4.1. 
For example, consider the following line in network.py (http://bit.ly/2zykV3P#L471):\n\n471 net = PN(BN(WS(Conv2DLayer(net, name='G1b', num_filters=nf(1), filter_size=3, pad=1, nonlinearity=act, W=iact))))\n\nHere, we create a standard Conv2DLayer and apply equalized learning rate (WS) as well as pixelwise feature vector normalization (PN) on top of it. Note that batch normalization (BN) is disabled in most of our experiments. When the Conv2DLayer is first instantiated, the weights are initialized according to W=iact, which in turn is defined as lasagne.init.HeNormal('relu') on lines 459 and 32. We apply equalized learning rate by latching a custom WScaleLayer (line 278) onto the Conv2DLayer. When the WScaleLayer is instantiated, it estimates the elementwise standard deviation of the weights and normalizes them accordingly:\n\n281 W = incoming.W.get_value()\n282 scale = np.sqrt(np.mean(W ** 2))\n283 incoming.W.set_value(W / scale)\n\nThe value on line 281 corresponds to $\\hat{w}_i$ in the paper, line 282 corresponds to $c$, and line 283 to $w_i$. In other words, this part of the code undoes the effect of He's initializer and brings the weights back to trivial N(0,1) initialization.", "In Section 4.1 you mention that you are initializing the network weights by sampling from the normal distribution. In your code, it appears you are using the stock Lasagne weight initialization, which uses the Xavier Glorot uniform distribution. Or has this changed in some newer version of Lasagne?", "1.\nYes, assuming that the question refers to the exponential running average that we use for visualizing the generated images.\n\nWe have observed that the best results are generally obtained using a relatively high learning rate, which in turn leads to significant variation in terms of network weights between consecutive training iterations. Any instantaneous snapshot of the generator is likely to be slightly off or exaggerated in terms of various image features such as color, brightness, sharpness, shape of the mouth, amount of hair, color of the eyes, etc. The exponential running average reduces this variation, leading to considerably more consistent results.\n\nIntuitively speaking, we can say that the generator and discriminator are constantly exploring a large neighborhood of different solutions around the current average solution, even though the average solution itself evolves relatively slowly. According to our experiments, such exploration seems to be highly beneficial in terms of eventually converging towards a good local optimum.\n\n2.\nWe have tried increasing the minibatch size in our CIFAR-10 runs, but we have not observed an increase in the inception scores.\n\nThe performance degradation associated with small minibatch size is largely limited to configurations that rely heavily on batch normalization (rows a-c in Table 1). Perhaps surprisingly, we have observed that smaller minibatches actually produce slightly better results in configurations where batch normalization is not present (rows d-h).\n\n3.\nWe did explore different network architectures in the early stages of the project. In general, it does not seem to make a big difference whether we start at 2x2, 4x4, 8x8, or 16x16 resolution. We chose 4x4 mainly because it is the most natural fit for our specific network architecture. We have also observed that it is beneficial to have roughly the same structure and capacity in both networks, as well as matching upsampling and downsampling operators.\n", "1. 
Was performance degradation actually observed when smoothing was not used?\n\n2. It seems that the reported Inception score on CIFAR is based on a minibatch size of 16 because you wanted to show that the performance degradation from a small minibatch could be remedied by that. Have you found a significant increase in score when the size is 64 and when you use all the techniques you used to remedy the aforementioned performance degradation? If so, I think the degree of the increase in score would indicate the size of this bottleneck. \n\n3. Did you do any other architecture exploration, such as beginning from a first block with a 2x2 output instead of 4x4?" ]
[ 8, 1, 8, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_Hk99zCeAb", "iclr_2018_Hk99zCeAb", "iclr_2018_Hk99zCeAb", "r1YixQVZM", "iclr_2018_Hk99zCeAb", "iclr_2018_Hk99zCeAb", "rk0s9ajgM", "iclr_2018_Hk99zCeAb", "SJqOmCHJf", "iclr_2018_Hk99zCeAb" ]
iclr_2018_H1tSsb-AW
Variance Reduction for Policy Gradient with Action-Dependent Factorized Baselines
Policy gradient methods have enjoyed great success in deep reinforcement learning but suffer from high variance of gradient estimates. The high variance problem is particularly exacerbated in problems with long horizons or high-dimensional action spaces. To mitigate this issue, we derive a bias-free action-dependent baseline for variance reduction which fully exploits the structural form of the stochastic policy itself and does not make any additional assumptions about the MDP. We demonstrate and quantify the benefit of the action-dependent baseline through both theoretical analysis as well as numerical results, including an analysis of the suboptimality of the optimal state-dependent baseline. The result is a computationally efficient policy gradient algorithm, which scales to high-dimensional control problems, as demonstrated by a synthetic 2000-dimensional target matching task. Our experimental results indicate that action-dependent baselines allow for faster learning on standard reinforcement learning benchmarks and high-dimensional hand manipulation and synthetic tasks. Finally, we show that the general idea of including additional information in baselines for improved variance reduction can be extended to partially observed and multi-agent tasks.
accepted-oral-papers
The reviewers are satisfied that this paper makes a good contribution to policy gradient methods.
train
[ "ryf-_2ugf", "S1VwmoFxz", "rJaGVZ5lz", "rkrQFmjmG", "HkYvtXoQf", "HyKwumiXG", "rJJhvXiQz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "This paper presents methods to reduce the variance of policy gradient using an action dependent baseline. Such action dependent baseline can be used in settings where the action can be decomposed into factors that are conditionally dependent given the state. The paper:\n(1) shows that using separate baselines for actions, each of which can depend on the state and other actions is bias-free\n(2) derive the optimal action-dependent baseline, showing that it does not degenerate into state-only dependent baseline, i.e. there is potentially room for improvement over state-only baselines.\n(3) suggests using marginalized action-value (Q) function as a practical baseline, generalizing the use of value function in state-only baseline case.\n(4) suggests using MC marginalization and also using the \"average\" action to improve computational feasibility\n(5) combines the method with GAE techniques to further improve convergence by trading off bias and variance\n\nThe suggested methods are empirically evaluated on a number of settings. Overall action-dependent baseline outperform state-only versions. Using a single average action marginalization is on par with MC sampling, which the authors attribute to the low quality of the Q estimate. Combining GAE shows that a hint of bias can be traded off with further variance reduction to further improve the performance.\n\nI find the paper interesting and practical to the application of policy gradient in high dimensional action spaces with some level of conditional independence present in the action space. In light of such results, one might change the policy space to enforce such structure.\n\nNotes:\n- Elaborate further on the assumption made in Eqn 9. Does it mean that the actions factors cannot share (too many) parameters in the policy construction, or that shared parameters can only be applied to the state?\n- Eqn 11 should use \\simeq\n- How can the notion of average be extended to handle multi-modal distributions, or categorical or structural actions? Consider expanding on that in section 4.5.\n- The discussion on the DAG graphical model is lacking experimental analysis (where separate baselines models are needed). How would you train such baselines?\n- Figure 4 is impossible to read in print. The fonts are too small for the numbers and the legends.\n", "In this paper, the authors investigate variance reduction techniques for agents with multi-dimensional policy outputs, in particular when they are conditionally independent ('factored'). With the increasing focus on applying RL methods to continuous control problems and RTS type games, this is an important problem and this technique seems like an important addition to the RL toolbox. The paper is well written, the method is easy to implement, and the algorithm seems to have clear positive impact on the presented experiments.\n\n- The derivations in pages 4-6 are somewhat disconnected from the rest of the paper: the optimal baseline derivation is very standard (even if adapted to the slightly different situation situated here), and for reasons highlighted by the authors in this paper, they are not often used; the 'marginalized' baseline is more common, and indeed, the authors adopt this one as well. In light of this (and of the paper being quite a bit over the page limit)- is this material (4.2->4.4) mostly not better suited for the appendix? 
Same for section 4.6 (which I believe is not used in the experiments).\n\n- The experimental section is very strong; regarding the partial observability experiments, assuming actions are here factored as well, I could see four baselines \n(two choices for whether the baseline has access to the goal location or not, and two choices for whether the baseline has access to the vector $a_{-i}$). It's not clear which two baselines are depicted in 5b - is it possible to disentangle the effect of providing $a_{-i}$ and the location of the hole to the baseline?\n\n(side note: it is an interesting idea to include information not available to the agent as input to the baseline though it does feel a bit 'iffy' ; the agent requires information to train, but is not provided the information to act. Out of curiosity, is it intended as an experiment to verify the need for better baselines? Or as a 'fair' training procedure?)\n\n- Minor: in equation 2- is the correct exponent not t'? Also since $\\rho_\\pi$ is define with a scaling $(1-\\gamma)$ (to make it an actual distribution), I believe the definition of $\\eta$ should also be multiplied by $(1-\\gamma)$ (as well as equation 2).", "The paper proposes a variance reduction technique for policy gradient methods. The proposed approach justifies the utilization of action-dependent baselines, and quantifies the gains achieved by it over more general state-dependent or static baselines.\n\n\nThe writing and organization of the paper is very well done. It is easy to follow, and succinct while being comprehensive. The baseline definition is well-motivated, and the benefits offered by it are quantified intuitively. There is only one mostly minor issues with the algorithm development and the experiments need to be more polished. \n\nFor the algorithm development, there is an relatively strong assumption that z_i^T z_j = 0. This assumption is not completely unrealistic (for example, it is satisfied if completely separate parts of a feature vector are used for actions). However, it should be highlighted as an assumption, and it should be explicitly stated as z_i^T z_j = 0 rather than z_i^T z_j approx 0. Further, because it is relatively strong of an assumption, it should be discussed more thoroughly, with some explicit examples of when it is satisfied.\n\nOtherwise, the idea is simple and yet effective, which is exactly what we would like for our algorithms. The paper would be a much stronger contribution, if the experiments could be improved. \n- More details regarding the experiments are desirable - how many runs were done, the initialization of the policy network and action-value function, the deep architecture used etc.\n- The experiment in Figure 3 seems to reinforce the influence of \\lambda as concluded by the Schulman et. al. paper. While that is interesting, it seems unnecessary/non-relevant here, unless performance with action-dependent baselines with each value of \\lambda is contrasted to the state-dependent baseline. What was the goal here?\n- In general, the graphs are difficult to read; fonts should be improved and the graphs polished. \n- The multi-agent task needs to be explained better - specifically how is the information from the other agent incorporated in an agent's baseline?\n- It'd be great if Plot (a) and (b) in Figure 5 are swapped.\n\nOverall I think the idea proposed in the paper is beneficial. Better discussing the strong theoretical assumption should be incorporated. 
Adding the listed suggestions to the experiments section would really help highlight the advantage of the proposed baseline in a more clear manner. Particularly with some clarity on the experiments, I would be willing to increase the score. \n\nMinor comments:\n1. In Equation (28) how is the optimal-state dependent baseline obtained? This should be explicitly shown, at least in the appendix. \n2. The listed site for videos and additional results is not active.\n3. Some typos\n- Section 2 - 1st para - last line: \"These methods are therefore usually more sample efficient, but can be less stable than critic-based methods.\".\n- Section 4.1 - Equation (7) - missing subscript i for b(s_t,a_t^{-i}) \n- Section 4.2 - \\hat{Q} is just Q in many places", "Thank you for the clear and encouraging review! We have addressed your key points below and incorporated the discussion into the revised article.\n\n> The derivations in pages 4-6 are somewhat disconnected from the rest of the paper: the\n> optimal baseline derivation is very standard (even if adapted to the slightly different\n> situation situated here), and for reasons highlighted by the authors in this paper, they \n> are not often used; the 'marginalized' baseline is more common, and indeed, the authors\n> adopt this one as well. In light of this (and of the paper being quite a bit over the page \n> limit)- is this material (4.2->4.4) mostly not better suited for the appendix? Same for \n> section 4.6 (which I believe is not used in the experiments).\n\nThank you for your suggestion. We have moved the derivation and the general actions exposition to Appendices B-D and E, respectively, and have referenced only the important conclusions in the main text. \n\n> The experimental section is very strong; regarding the partial observability experiments, \n> assuming actions are here factored as well, I could see four baselines (two choices for \n> whether the baseline has access to the goal location or not, and two choices for whether\n> the baseline has access to the vector $a_{-i}$). It's not clear which two baselines are \n> depicted in 5b - is it possible to disentangle the effect of providing $a_{-i}$ and the \n> location of the hole to the baseline?\n>\n> (side note: it is an interesting idea to include information not available to the agent as \n> input to the baseline though it does feel a bit 'iffy' ; the agent requires information to\n> train, but is not provided the information to act. Out of curiosity, is it intended as an\n> experiment to verify the need for better baselines? Or as a 'fair' training procedure?)\n\nThank you for this observation. We have updated the experiments to compare baseline1=state+action+goal vs baseline2=state+action, and have generated results for more random seeds (5). Similarly, the multi-agent experiment is comparing whether the baseline has access to the state of other agents or not, in addition to a single agent’s state+action. Our primary goal in these experiments was to see if providing additional information can reduce variance and help train faster. At test time, both policies are required to act based on the same information, and hence this is a ‘fair’ procedure. Similar approaches of using additional information during training time have been employed in recent related works [1,2], which we have referenced in the paper.\n\n[1] Lowe et al. Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments, 2017.\n[2] Levine, et al. 
End-to-end training of deep visuomotor policies, 2016.\n\n> Minor: in equation 2- is the correct exponent not t'? Also since $\\rho_\\pi$ is define with\n> a scaling $(1-\\gamma)$ (to make it an actual distribution), I believe the definition of \n> $\\eta$ should also be multiplied by $(1-\\gamma)$ (as well as equation 2).\n\nThank you for the detailed questions and comments! The correct exponent is $t’-t$ because what is being computed is the cumulative discounted return starting from time t. Thank you also for catching our error with the $(1-\\gamma)$. We have corrected this in the manuscript in Section 3.3.", "Thank you for the thorough review! We have updated the paper based on your suggestions. We have added discussions on categorical distributions to Section 4.4, sub-section 2 and discussions on the generic DAG graphical model to Appendix E, last 3 paragraphs. We have addressed your key point below and incorporated the discussion into the revised article.\n\n> Elaborate further on the assumption made in Eqn 9. Does it mean that the actions \n> factors cannot share (too many) parameters in the policy construction, or that shared\n> parameters can only be applied to the state?\n\nThe assumption made in Eqn 9 is primarily for the theoretical analysis to be clean, and is not required to run the algorithm in practice. In particular, even without this assumption, the proposed baseline is bias-free. When the assumption holds, the optimal action-dependent baseline has a clean form which we can analyze thoroughly. As noted by the reviewer, the assumption is not very unrealistic. Some examples where these assumptions hold include multi-agent settings where the policies are conditionally independent by construction, cases where the policy acts based on independent components [1] of the observation space, and cases where different function approximators are used to control different actions or synergies [2,3] without weight sharing.\n\n[1] Y. Cao et al. Motion Editing With Independent Component Analysis, 2007.\n[2] E. Todorov, Z. Ghahramani, Analysis of the synergies underlying complex hand manipulation, 2004.\n[3] E. Todorov, W. Li, X. Pan, From task parameters to motor synergies: A hierarchical framework for approximately optimal control of redundant manipulators, 2005.", "Thank you for the thoughtful review! These suggestions and questions are reflected in the updated article. We have added the derivation of the optimal-state dependent baseline (based on Greensmith, et al., 2004) in Appendix A, and a video has since been uploaded to the site. We have added experiment details to Appendix G, and have clarified baselines for multi-agent settings in Section 5 (Figure 4b). Thank you also for noting the typos! We have updated the manuscript to reflect these changes and included a clarification of our notation in Section 3.1, paragraph 1. We have addressed your key points below and incorporated the discussion into the new revision of the article.\n\n> For the algorithm development, there is an relatively strong assumption that z_i^T z_j = 0. This\n> assumption is not completely unrealistic (for example, it is satisfied if completely separate parts\n> of a feature vector are used for actions). However, it should be highlighted as an assumption, \n> and it should be explicitly stated as z_i^T z_j = 0 rather than z_i^T z_j approx 0. 
Further, because \n> it is relatively strong of an assumption, it should be discussed more thoroughly, with some \n> explicit examples of when it is satisfied.\n\nThank you for this very important observation. We have revised the manuscript to state this assumption explicitly and have also provided examples where it is satisfied (in Section 4.2, paragraph 1). We note however that this assumption is primarily for the theoretical analysis to be clean, and is not required to run the algorithm in practice. In particular, even without this assumption, the proposed baseline is bias-free. When the assumption holds, the optimal action-dependent baseline has a clean form which we can analyze thoroughly. As noted by the reviewer, the assumption is not very unrealistic. Some examples where these assumptions hold include multi-agent settings where the policies are conditionally independent by construction, cases where the policy acts based on independent components [1] of the observation space, and cases where different function approximators are used to control different actions or synergies [2,3] without weight sharing.\n\n[1] Y. Cao et al. Motion Editing With Independent Component Analysis, 2007.\n[2] E. Todorov, Z. Ghahramani, Analysis of the synergies underlying complex hand manipulation, 2004.\n[3] E. Todorov, W. Li, X. Pan, From task parameters to motor synergies: A hierarchical framework for approximately optimal control of redundant manipulators, 2005.\n\n> The experiment in Figure 3 seems to reinforce the influence of \lambda as concluded by the \n> Schulman et. al. paper. While that is interesting, it seems unnecessary/non-relevant here, \n> unless performance with action-dependent baselines with each value of \lambda is contrasted\n> to the state-dependent baseline. What was the goal here?\n\nThank you for the great question. Our goal was to emphasize that one does not lose out on temporal difference based variance reduction approaches like GAE, which are complementary to reducing the variance caused by the high dimensionality of the action space considered in this work. Considering the page limit and your suggestion, we have moved this discussion to Appendix F.\n", "We thank reviewers for their time and thoughtful feedback.\n\nWe have updated the submission: We have moved the derivations and extensions to the appendix, added a summarizing algorithms section. We have improved notation throughout the paper, improved the consistency of the plots, clarified experiment details, and resolved ambiguities. We answer specific questions raised in the reviews by separately replying to each of them." ]
[ 7, 8, 6, -1, -1, -1, -1 ]
[ 4, 3, 4, -1, -1, -1, -1 ]
[ "iclr_2018_H1tSsb-AW", "iclr_2018_H1tSsb-AW", "iclr_2018_H1tSsb-AW", "S1VwmoFxz", "ryf-_2ugf", "rJaGVZ5lz", "iclr_2018_H1tSsb-AW" ]
iclr_2018_BkisuzWRW
Zero-Shot Visual Imitation
The current dominant paradigm for imitation learning relies on strong supervision of expert actions to learn both 'what' and 'how' to imitate. We pursue an alternative paradigm wherein an agent first explores the world without any expert supervision and then distills its experience into a goal-conditioned skill policy with a novel forward consistency loss. In our framework, the role of the expert is only to communicate the goals (i.e., what to imitate) during inference. The learned policy is then employed to mimic the expert (i.e., how to imitate) after seeing just a sequence of images demonstrating the desired task. Our method is 'zero-shot' in the sense that the agent never has access to expert actions during training or for the task demonstration at inference. We evaluate our zero-shot imitator in two real-world settings: complex rope manipulation with a Baxter robot and navigation in previously unseen office environments with a TurtleBot. Through further experiments in VizDoom simulation, we provide evidence that better mechanisms for exploration lead to learning a more capable policy which in turn improves end task performance. Videos, models, and more details are available at https://pathak22.github.io/zeroshot-imitation/.
accepted-oral-papers
The authors have proposed a method for imitating a given control trajectory even if it is sparsely sampled. The method relies on a parametrized skill function and uses a triplet loss for learning a stopping metric and for a dynamics consistency loss. The method is demonstrated with real robots on a navigation task and a knot-tying task. The reviewers agree that it is a novel and interesting alternative to pure RL which should inspire good discussion at the conference.
train
[ "SylSZ-5gf", "rJ4XrD5eG", "BJdG429gz", "H1MXa4xmG", "Sk3SYJO7f", "HJL-IlS7z", "HJRuTbN7f", "S1vwJrgQM", "ryXU04emf", "Hko0F4l7M", "SkJHO4e7G" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author" ]
[ "The authors propose an approach for zero-shot visual learning. The robot learns inverse and forward models through autonomous exploration. The robot then uses the learned parametric skill functions to reach goal states (images) provided by the demonstrator. The “zero-shot” refers to the fact that all learning is performed before the human defines the task. The proposed method was evaluated on a mobile indoor navigation task and a knot tying task. \n\nThe proposed approach is well founded and the experimental evaluations are promising. The paper is well written and easy to follow. \n\nI was expecting the authors to mention “goal emulation” and “distal teacher learning” in their related work. These topics seem sufficiently related to the proposed approach that the authors should include them in their related work section, and explain the similarities and differences. \n\nLearning both inverse and forward models is very effective. How well does the framework scale to more complex scenarios, e.g., multiple types of manipulation together? Do you have any intuition for what kind of features or information the networks are capturing? For the mobile robot, is the robot learning some form of traversability affordances, e.g., recognizing actions for crossings, corners, and obstacles? The authors should consider a test where the robot remains stationary with a fixed goal, but obstacles are move around it to see how it affects the selected action distributions.\n\nHow much can change between the goal images and the environment before the system fails? In the videos, it seems that the people and chairs are always in the same place. I could imagine a network learning to ignore features of objects that tend to wander over time. The authors should consider exploring and discussing the effects of adding/moving/removing objects on the performance. \n\nI am very happy to see experimental evaluations on real robots, and even in two different application domains. Including videos of failure cases is also appreciated. The evaluation with the sequence of checkpoints was created by using every fifth image. How does the performance change with the number of frames between checkpoints? In the videos, it seems like the robot could get a slightly better view if it took another couple of steps. I assume this is an artifact of the way the goal recognizer is trained. For the videos, it may be useful to indicate when the goal is detected, and then let it run a couple more steps and stop for a second. It is difficult to compare the goal image and the video otherwise. ", "Summary:\nThe authors present a paper about imitation of a task presented just during inference, where the learning is performed in a completely self-supervised manner.\nDuring training, the agent explores by itself related (but different) tasks, learning a) how actions affect the world state, b) which action to perform given the previous action and the world state, and c) when to stop performing actions. This learning is done without any supervision, with a loss that tries to predict actions which result in the state achieved through self exploration (forward consistency loss).\nDuring testing, the robot is presented with a sequence of goals in a related but different task. Experiments show that the system achieves a better performance than different subparts of the system (through an ablation study), state of the art and common open source systems.\n\nPositive aspects:\nThe paper is well written and clear to understand. 
Since this is not my main area of research I cannot judge its originality in a completely fair way, but it is original AFAIK. The idea of learning the basic relations between actions and state through self exploration is definitely interesting.\nThis line of work is specially relevant since it attacks one of the main bottlenecks in learning complex tasks, which is the amount of supervised examples.\nThe experiments show clearly that a) the components of the proposed pipeline are important since they outperform ablated versions of it and b) the system is better than previous work in those tasks\n\nNegative aspects:\nMy main criticism to the paper is that the task learning achieved through self exploration seems relatively shallow. From the navigation task, it seems like the system mainly learns a discover behavior that is better than random motion. It definitely does not seem able to learn higher level concepts like certain scenes being more likely to be close to each other than others (e.g. it is likely to find an oven in the same room as a kitchen sink but not in a toilet). It is not clear whether this is achievable by the current system even with more training data.\nAnother aspect that worries me about the system is how it can be extended to higher dimensional action spaces. Extending control laws through self-exploration under random disturbances has been studied in character control (e.g. \"Domain of Attraction Expansion for Physics-based Character Control\" by Borno et al.), but the dimensionality of the problem makes this exploration very expensive (even for short time frames, and even in simulation). I wonder if the presented ideas won't suffer from the same curse of dimensionality.\nIn terms of experiments, it is shown that the system is more effective than others but not so much *how* it achieves this efficiency. It would be good to show whether part of its efficiency comes from effective image-guided navigation: does a partial image match entail with targetted navigation (e.g. matches in the right side of the image make the robot turn right)?\nA couple more specific comments:\n- I think that dealing with multimodal distributions of actions with the forward consistency loss is effective for achieving the goal, but not necessarily good for modeling multimodality. Isn't it possible that the agent learns only one way of achieving such goal?\n - It is not clear how the authors achieve to avoid the problem of starting from scratch by \"pre-train the forward model and PSF separately by blocking gradient flow\". Isn't it still challenging to update them independently, given that at the beginning both components are probably not very accurate?\n\n\nConclusion:\nI think the paper presents an interesting idea which should be exposed to the community. The paper is easy to read and its experiments show the effectiveness of the method. The relevance of the method to achieve a deeper sense of learning and performing more complex tasks is however unclear to me.", "One of the main problems with imitation learning in general is the expense of expert demonstration. The authors here propose a method for sidestepping this issue by using the random exploration of an agent to learn generalizable skills which can then be applied without any specific pretraining on any new task. 
\n\nThe proposed method has at its core a method for learning a parametric skill function (PSF) that takes as input a description of the initial state, goal state, parameters of the skill and outputs a sequence of actions (could be of varying length) which take the agent from initial state to goal state.\n\nThe skill function uses a RNN as function approximator and minimizes the sum of two losses i.e. the state mismatch loss over the trajectory (using an explicitly learnt forward model) and the action mismatch loss (using a model-free action prediction module) . This is hard to do in practice due to jointly learning both the forward model as well as the state mismatches. So first they are separately learnt and then fine-tuned together. \n\nIn order to decide when to stop, an independent goal detector is trained which was found to be better than adding a 'goal-reached' action to the PSF.\n\nExperiments on two domains are presented. 1. Visual navigation where images of start and goal states are given as input. 2. Robotic knot-tying with a loose rope where visual input of the initial and final rope states are given as input.\n\nComments:\n\n- In the visual navigation task no numbers are presented on the comparison to slam-based techniques used as baselines although it is mentioned that it will be revisited.\n\n- In the rope knot-tying task no slam-based or other classical baselines are mentioned.\n\n- My main concern is that I am really trying to place this paper with respect to doing reinforcement learning first (either in simulation or in the real world itself, on-policy or off-policy) and then just using the learnt policy on test tasks. Or in other words why should we call this zero-shot imitation instead of simply reinforcement learnt policy being learnt and then used. The nice part of doing RL is that it provides ways of actively controlling the exploration. See this pretty relevant paper which attempts the same task and also claims to have the target state generalization ability. \n\nTarget-driven Visual Navigation in Indoor Scenes using Deep Reinforcement Learning by Zhu et al.\n\nI am genuinely curious and would love the authors' comments on this. It should help make it clearer in the paper as well.\n \nUpdate:\n\nAfter evaluating the response from the authors and ensuing discussion as well as the other reviews and their corresponding discussion, I am revising my rating for this paper up. This will be an interesting paper to have at the conference and will spur more ideas and follow-on work.", "We thank the reviewers for their insightful and helpful feedback. We are glad that the reviewers found the general idea original and especially relevant (R2); the proposed approach well-founded and the experimental evaluations promising (R3). R3 says \"I am very happy to see experimental evaluations on real robots, and even in two different application domains.\" Both R2 and R3 recommend clear accepts for the paper. R1 asked for comparison to classical methods and a discussion on differences and similarities from pure reinforcement learning based approaches. In direct replies to individual reviewers, we report the performance of the requested baselines, explain the differences from a pure RL based approach, and address remaining questions.\n\nUpdate: \nWe thank R1 for taking the time to follow up on our comments with insightful discussion, and upgrading the review score for the paper to accept.", "Thanks for the clarifications. I have updated my review grade accordingly. 
", "We thank you for following up on the discussion. The independent goal recognition network does not require any extra work concerning data or supervision. The data used to train the goal recognition network is the same as the data used to train the PSF. The only prior we are assuming is that nearby states to the randomly selected states are positive and far away are negative which is not domain specific. This prior provides supervision for obtaining positive and negative data points for training the goal classifier. Note that, no human supervision or any particular form of data is required in this self-supervised process. \n\nIn contrast, for rewards in RL, one would be required to train a separate classifier for \"each\" goal. Training each classifier will require manually annotated data. Therefore, training multiple classifiers will require substantial human supervision. Furthermore, these classifiers will be noisy, leading to noisy reward which will make the variance of policy gradients even higher. In our self-supervised setup, we do not require any human supervision and noisy learning of goal classifier is okay because we do not use the goal classifier to train the policy. \n\nHope this clarifies potential advantages of the proposed self-supervised learning approach in terms of sample efficiency and reward engineering over a pure RL based approach. We will refine the text to make it clearer in the final revision.", "I agree that pure RL can be pretty sample inefficient especially with raw visual input and multiple goals. I also agree that when using pure RL providing reward is generally difficult especially in such visual input cases ((a) in the response above). My question is that how much more/less work is it in general to train the independent goal recognition network via supervised learning?--\"In this work, we learn a robust goal recognition network by classifying transitions collected during exploration. We sample states at random, and for every sampled state make positives of its temporal neighbors, and make negatives of the remaining states more distant than a certain margin. We optimize our goal classifier by cross-entropy loss.\" The burden of providing reward in pure RL is now replaced with the supervision needed to train the goal classifier. Doesn't this step use expert supervision? \n\n", "\nR2: \"... forward consistency loss is effective for achieving the goal, but not necessarily good for modeling multimodality ... learns only one way of achieving such goal?\"\n=> Yes, you are correct. We do-not directly model the multimodal distribution of the action, but we address the instability issues of gradient based learning due to multimodality. In an attempt to match the multiple ground-truth targets for the same input, the predictions will oscillate, which in turn will make the gradient of the loss function with respect to neural network parameters also oscillate. The purpose of forward consistency loss is to mitigate gradient oscillation, i.e. -- it stabilizes the learning process by ensuring that network is not penalized for outputting a different action than ground truth as long as its predicted action has the same effect as ground truth one.\n\nLearning all possible ways of achieving a goal is slightly different question. In theory, it could be dealt by incorporating a stochastic sampling layer in the neural network in addition to forward consistency and is an interesting direction for future research. \n\nR2: \"... 
how the authors avoid the problem of starting from scratch by pre-train the forward model and PSF separately ... Isn't it still challenging\"\n=> Training the PSF through forward consistency loss is a challenging problem because the learning of inverse model PSF depends on how good the forward model is. This learned forward model will not be very accurate in the beginning of training, thus making the gradients noisy. Therefore, we first pretrained inverse and forward model independently until convergence, and then fine-tune them jointly with consistency loss. Empirically, we found that such a pre-training followed by joint fine-tuning to be more stable than joint fine-tuning from scratch.", "We thank the reviewer for the constructive feedback and are glad that the reviewer found the idea original and general direction as specially relevant and interesting. We address the concerns in detail below.\n\nR2: \"... learns a discover behavior that is better than random motion ... not clear whether this is achievable by the current system even with more training data.\"\n=> In the current setup of navigation, the exploration is random, and the robot learns the skills of avoiding walls, moving in free spaces and turning around to find the target image, etc. Note that we report performance in a setup when the robot is dropped in entirely new environments, showing that the learned skills generalize. In essence, what we have shown is that is possible to distill exploration data into generalizable skills that can be used to reach target images.\n\nThe complexity of the skills learned by our robot inevitably depends on the interaction data it collects via its exploration. We agree that random exploration is insufficient for learning more higher-level concepts. There are many works in the literature, such as pseudo-count (Bellemare et al, 2016) or learning progress (Schmidhuber, 1991) or curiosity (Pathak et al, 2017), that have proposed efficient exploration schemes. In the future, we plan to incorporate these exploration policies to collect data and train PSF (parameterized skill function) with the help of this data. The intuition is that with a good exploration policy, the kitchen sink and the oven will be closer to each other as compared to the toilet in robot’s roll outs. The PSF learned using this roll out data is therefore likely to learn these relationships implicitly. Our goal in this work is not to propose an exploration policy, but a mechanism that can make use of exploration data to learn a skill function to achieve desired goals. Our initial experiments in simulation suggest that performance of goal reaching improves when data is collected using a non-random exploration policy. \n\nR2: \"... how it can be extended to higher dimensional action spaces ... curse of dimensionality.\"\n=> This is a great point. In our opinion there are two mechanisms: (a) discovering a low-dimensional embedding of the action space (say using motor babbling) and controlling in this space; (b) a more general mechanism is to make use of structured exploration mechanisms that has a rich literature. Some works incentivize the agent to visit previously unseen states, e.g., pseudo-count (Bellemare et al, 2016), other incentivize the agent to take actions that lead to high-prediction error (Pathak et al, 2017) or measure learning progress (Schmidhuber, 1991). Ours proposed forward consistent way of learning PSF is agnostic to how exploration data is collected. 
A PSF learned over the data obtained using any of these structured exploration mechanisms has the potential to scale to high dimensional spaces. We acknowledge this issue in the Section-5 of paper and will make it clearer in the final revision.\n\nR2: \"... it is shown that the system is more effective than others but not so much *how* it achieves this efficiency ... partial image match\"\n=> There are two perspectives: (a) the quantitative view that can be used to systematically ablate the model to understand what components of the formulation are most critical and (b) a more qualitative and intuitive perspective, as suggested by you, that analyzes whether our robot learns to make a right/left turns more accurately when trained with forward consistency loss. \n\nFor (a), as an ablation, we trained the inverse model with the forward loss as a regularizer. In this forward regularizer setup, the forward model and inverse model are trained jointly and share the visual features. The difference from our proposed approach is that inverse model is trained with action prediction loss and does not receive gradients through the forward model. In case of knot tying task, the forward regularizer achieves 44% accuracy which is above the baseline (36%) but well below our proposed approach (60%). This result shows that the forward model is not merely acting as a regularizer, but optimizing the inverse model through the forward model is potentially critical to addressing the multi-modality issue. We will add these numbers in the final version of the paper.\n\nFor (b), when images have little overlap we found that classical solutions based on SIFT failed due to lack of keypoint matches. However, the inverse model and our approach are able to make left and right turns depending on whether the right or left part of the current image is visible in the goal image. We will include quantitative evaluation for this analysis in the final version of the paper.", "We are glad that the reviewer found the proposed approach well founded and the evaluations promising. We thank the reviewer for the constructive feedback and address the concerns in detail below.\n\nR3: \"... mention goal emulation and distal teacher learning in their related work\" \n=> We thank the reviewer for bringing this to our notice. We will include these in the related work and highlight the differences.\n\nR3: \"How well does the framework scale ... any intuition for what kind of features or information the networks are capturing?\"\n=> One interesting insight from the training process is that forward consistency loss improves the accuracy of inverse model even if the predicted image quality of forward model is not pixel accurate. One explanation is that the forward model is not required to be pixel-accurate till the time gradients are regularized, which leads to better performance. \n\nRegarding what information is being captured -- intuitively inverse models only represent information that is required to predict the action. We will include nearest neighbor visualization in the final revision of the paper to provide further insights into the nature of the learned representations. \n\nAs far as complex manipulation tasks are concerned, we have shown results for manipulating a rope into ‘S’ shape and tying into a knot. For more complex tasks and scaling to high dimensional spaces, we believe that instead of random exploration, more structured intrinsic motivation driven exploration, e.g. 
pseudo-count (Bellemare et al, 2016) or learning progress (Schmidhuber, 1991) or curiosity (Pathak et al, 2017) will play key role. We discuss this briefly in the Section-5 of the paper and will elaborate in more detail in the next revision.\n\nR3: \"... authors should consider a test where the robot remains stationary with a fixed goal, but obstacles are move around ... discussing the effects ... on the performance\"\n=> Thank you for the suggestion. This is an interesting experiment and we will look into it. In our preliminary experiments, we found that the turtlebot is robust to dynamic obstacles as long as goal image does not change significantly.\n\nR3: \"... the performance change with the number of frames between checkpoints?\"\n=> Generally speaking, as the number of frames between checkpoints increases, the performance deteriorates, but quite gracefully. To quantify this effect, we conducted an experiment where in the navigation setup the robot was only shown the goal image (~approximately 15-25 steps away from the current image on an optimal route). In this scenario, our robot exhibited goal searching behavior until the goal comes in field of view, followed by goal directed navigation. Our method significantly outperformed other baselines as reported in Table-1.\n\nR3: \"... it seems like the robot could get a slightly better view if it took another couple of steps ... useful to indicate when the goal is detected in video\"\n=> Yes, it is indeed the artifact of goal recognizer training because it was trained with some stochasticity around the arbitrarily selected states from collected exploration data. \n\nThanks for the suggestions on making the result videos more comprehensible on how to analyze goal recognizer stop model. We will include qualitative analysis of the goal recognizer stop model in the supplementary materials of our final revision.", "We thank you for the constructive feedback and address the concerns in detail below.\n\nR1: \"In visual navigation ... no numbers for slam-based techniques ... will be revisited.\"\n=> When the imitator is shown a dense demonstration sequence (i.e., every frame), it is possible to use SIFT features to estimate the odometry and guide the robot. However, in the more interesting scenario of the imitator being shown a sparser demonstration (i.e., every 5th frame), SIFT matching fails. The major reason for failure is the wide baseline, which on many occasions leads to little overlap between the current image and the next image in the demonstration. We will add SIFT numbers to the final revision.\n\nWe also tried state-of-the-art open source methods: OpenSFM and ORB SLAM2. These methods were unable to generate the map with every 5th image of a demonstration sequence. The other possibility was to let the robot explore randomly and build a map. However, with random exploration the robot tends to wander off and is not focussed on constructing a map of the part of the environment from where the demonstration sequences were taken. This leads to failure in following the demonstration. \n\nR1: \"In rope knot-tying task, no other classical baselines are mentioned\"\n=> Thanks for bringing this point up. The analog of SLAM in rope manipulation is to perform non-linear alignment of rope between the current and target image and use this alignment to select the action. TPS-RPM is a well-known method to align deformable objects such as ropes and is described in Section-4.2. TPS-RPM based method was compared in Nair et al. 
in which their inverse model outperformed this classical baseline by a significant margin. Under the same setup and data provided by Nair et al., our forward consistency based imitator significantly outperforms the model proposed in Nair et al. This directly implies that our method outperforms this classical baseline. \n\nR1: \"... place this paper with respect to doing reinforcement learning first ... controlling the exploration.\"\n=> This is a very relevant question. Our overarching aim is to enable robotic systems to perform complex new tasks from raw visual inputs. Instead of training our robot to perform only one task, we would like to provide the goal task description (i.e., an image depicting the goal) as input to the robot’s policy.\n\nThe most general way would be to learn a policy (say using reinforcement learning) that takes as input the images showing the current and goal state and outputs a sequence of actions to reach the goal. There are some major concerns with using RL in the real world: (a) measuring rewards is non-trivial. For e.g., in order to train the robot to configure the rope in shape S or knot, one would need to build classifiers that detect these shapes and use the binary output of classifier as the reward. However, these classifiers will inevitably be imperfect, which in turn would lead to noisy reward. More critically, in order for the system to generalize to novel goals, one would have to train the system with many goals. This implies that large amounts of human supervision would be required to train these classifiers. (b) RL typically requires ~10-100 million interactions to learn from visual inputs for all but the simplest of tasks, which is simply infeasible in the real world.\n\nIn our paper, we propose to learn such a policy using supervised learning. The agent explores its environment and generates interaction data in the form of pair of states and sequence of actions the agent executed. This action sequence provides supervision to learn the policy. One major issue in training such a model through supervised learning is that multiple actions can help the agent transition from current to goal state. We resolve this issue by proposing the forward consistency loss. Our method is sample efficient (~60K interactions for rope manipulation, ~200K for navigation) and does not rely on environmental rewards. \n\nThe number of samples required to learn such a policy grows with the number of actions needed to reach goal. To perform complex tasks, we trade off this difficulty by using subgoals in the form of a sequence of images provided by an expert. In other words, we learn a low-level policy (i.e. PSF) which is accurate for predicting actions when current and goal state are not that far apart, and the expert demonstration provides a high-level plan to goto far away goal states. \n\nR1: \"pretty relevant paper which attempts the same task ... by Zhu et al.\"\n=> This paper uses RL using multiple goals. One major issue with RL based methods in real world is their bad sample efficiency. Adding multiple goals to the same policy usually hurts the sample efficiency even more, making it generally impractical to train on real robots. For e.g., the above paper trained using RL for 100M samples in simulation for 10-20 steps away goals. However, this is a relevant citation and we will include it in the final revision." ]
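The rebuttal above repeatedly invokes a forward-consistency loss (penalize the state reached by the predicted action rather than the action itself) together with separate pre-training of the forward and inverse models, with gradient flow blocked, before joint fine-tuning. Below is a minimal, hypothetical PyTorch sketch of that idea, meant only as a reading aid; it is not the authors' code, and the network sizes, the continuous action space with MSE losses, and the equal weighting of the two terms are assumptions made purely for illustration.

```python
# Hypothetical sketch (not the authors' code) of the forward-consistency idea:
# the inverse model is not forced to reproduce the ground-truth action, only an
# action whose predicted outcome matches the observed next state.
import torch
import torch.nn as nn
import torch.nn.functional as F

class InverseModel(nn.Module):
    """Predicts an action that should move the agent from state s to goal g."""
    def __init__(self, state_dim, action_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * state_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, action_dim))
    def forward(self, s, g):
        return self.net(torch.cat([s, g], dim=-1))

class ForwardModel(nn.Module):
    """Predicts the next state reached by taking action a in state s."""
    def __init__(self, state_dim, action_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, state_dim))
    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

def forward_consistency_loss(inv, fwd, s, s_next, a_true, w=1.0):
    a_pred = inv(s, s_next)
    action_loss = F.mse_loss(a_pred, a_true)          # plain inverse-model loss
    state_loss = F.mse_loss(fwd(s, a_pred), s_next)   # consistency through learned dynamics
    return action_loss + w * state_loss
```

Consistent with the schedule described in the discussion, the two models would first be fitted with their individual losses (keeping gradients blocked between them) and only afterwards fine-tuned jointly through forward_consistency_loss.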
[ 8, 8, 7, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 3, 5, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_BkisuzWRW", "iclr_2018_BkisuzWRW", "iclr_2018_BkisuzWRW", "iclr_2018_BkisuzWRW", "HJL-IlS7z", "HJRuTbN7f", "SkJHO4e7G", "ryXU04emf", "rJ4XrD5eG", "SylSZ-5gf", "BJdG429gz" ]
iclr_2018_rkRwGg-0Z
Beyond Word Importance: Contextual Decomposition to Extract Interactions from LSTMs
The driving force behind the recent success of LSTMs has been their ability to learn complex and non-linear relationships. Consequently, our inability to describe these relationships has led to LSTMs being characterized as black boxes. To this end, we introduce contextual decomposition (CD), an interpretation algorithm for analysing individual predictions made by standard LSTMs, without any changes to the underlying model. By decomposing the output of a LSTM, CD captures the contributions of combinations of words or variables to the final prediction of an LSTM. On the task of sentiment analysis with the Yelp and SST data sets, we show that CD is able to reliably identify words and phrases of contrasting sentiment, and how they are combined to yield the LSTM's final prediction. Using the phrase-level labels in SST, we also demonstrate that CD is able to successfully extract positive and negative negations from an LSTM, something which has not previously been done.
accepted-oral-papers
Very solid paper exploring an interpretation of LSTMs. Good reviews.
train
[ "HkG9aTIVf", "r1k_ETYlM", "SJ9ufS8Ef", "rJHhMjFgG", "B1qEe3txM", "rJZhYjI7f", "BJZscx4mz", "HJCfJlszf", "rJNARkjGz", "BykpAJiMG", "H1fcTJsGf" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author" ]
[ "Thanks for engaging in a helpful discussion!", "This article aims at understanding the role played by the different words in a sentence, taking into account their order in the sentence. In sentiment analysis for instance, this capacity is critical to model properly negation.\nAs state-of-the-art approaches rely on LSTM, the authors want to understand which information comes from which gate. After a short remainder regarding LSTM, the authors propose a framework to disambiguate interactions between gates. In order to obtain an analytic formulation of the decomposition, the authors propose to linearize activation functions in the network.\nIn the experiment section, authors compare themselves to a standard logistic regression (based on a bag of words representation). They also check the unigram sentiment scores (without context).\nThe main issue consists in modeling the dynamics inside a sentence (when a negation or a 'used to be' reverses the sentiment). The proposed approach works fine on selected samples.\n\n\nThe related work section is entirely focused on deep learning while the experiment section is dedicated to sentiment analysis. This section should be rebalanced. Even if the authors claim that their approach is general, they also show that it fits well the sentiment analysis task in particular.\n\nOn top of that, a lot of fine-grained sentiment analysis tools has been developed outside deep-learning: the authors should refer to those works.\n\nFinally, authors should provide some quantitative analysis on sentiment classification: a lot of standard benchmarks are widely use in the literature and we need to see how the proposed method performs with respect to the state-of-the-art.\n\n\nGiven the chosen tasks, this work should be compared to the beermind system:\nhttp://deepx.ucsd.edu/#/home/beermind\nand the associated publication\nhttp://arxiv.org/pdf/1511.03683.pdf", "Explanations are convincing, I revise my rating.", "In this paper, the authors propose a new LSTM variant that allows to produce interpretations by capturing the contributions of the words to the final prediction and the way their learned representations are combined in order to yield that prediction. They propose a new approach that they call Contextual Decomposition (CD). Their approach consists of disambiguating interaction between LSTM’s gates where gates are linearized so the products between them is over linear sums of contributions from different factors. The hidden and cell states are also decomposed in terms of contributions to the “phrase” in question and contributions from elements outside of the phrase. The motivation of the proposed decomposition using LSTMs is that the latter are powerful at capturing complex non-linear interactions, so, it would be useful to observe how these interactions are handled and to interpret the LSTM’s predictions. As the authors intention is to build a way of interpreting LSTMs output and not to increase the model’s accuracy, the empirical results illustrate the ability of their decomposition of giving a plausible interpretation to the elements of a sentence. They compare their method with different existing method by illustrating samples from the Yelp Polarity and SST datasets. They also show the ability of separating the distribution of CD scores related to positive and negative phrases on respectively Yelp Polarity and SST.\n\nThe proposed approach is potentially of great benefit as it is simple and elegant and could lead to new methods in the same direction of research. 
The sample illustrations, the scatter plots and the CD score distributions are helpful to asses the benefit of the proposed approach.\n\nThe writing could be improved as it contains parts where it leads to confusion. The details related to the linearization (section 3.2.2), the training (4.1) could be improved. In equation 25, it is not clear what π_{i}^(-1) and x_{π_{i}} represent but the example in equation 26 makes it clearer. The section would be clearer if each index and notation used is explained explicitly.\n\n(CD in known for Contrastive Divergence in the deep learning community. It would be better if Contextual Decomposition is not referred by CD.)\n\nTraining details are given in section 4.1 where the authors mention the integrated gradient baseline without mentioning the reference paper to it (however they do mention the reference paper at each of table 1 and 2). it would be clearer for the reader if the references are also mentioned in section 4.1 where integrated gradient is introduced. Along with the reference, a brief description of that baseline could be given. \n\nThe “Leave one out” baseline is never mentioned in text before section 4.4 (and tables 1 and 2). Neither the reference nor the description of the baseline are given. It would have been clearer to the reader if this had been the case.\n\nOverall, the paper contribution is of great benefit. The quality of the paper could be improved if the above explanations and details are given explicitly in the text.", "The authors address the problem of making LSTMs more interpretable via the contextual decomposition of the state vectors. By linearizing the updates in the recurrent network, the proposed scheme allows one to extract word importance information directly from the gating dynamics and infer word-to-word interactions. \n\nThe problem of making neural networks more understandable is important in general. For NLP, this relates to the ability of capturing phrase features that go beyond single-word importance scores. A nice contribution of the paper is to show that this can highly improve classification performance on the task of sentiment analysis. However, the author could have spent some more time in explaining the technical consequences of the proposed linear approximation. For example, why is the linear approximation always good? And what is the performance loss compared to a fully nonlinear network? \n\nThe experimental results suggest that the algorithm can outperform state-of-the-art methods on various tasks.\n\nSome questions:\n- is any of the existing algorithms used for the comparison supposed to capture interactions between words and phrases? If not, why is the proposed algorithm compared to them on interaction related tasks?\n- why the output of the algorithms is compared with the logistic regression score? May the fact that logistic regression is a linear model be linked to the good performance of the proposed method? Would it be possible to compare the output of the algorithms with human given scores on a small subset of words? \n- the recent success of LSTMs is often associated with their ability to learn complex and non-linear relationships but the proposed method is based on the linearization of the network. How can the algorithm be able to capture non-linear interactions? What is the difference between the proposed model and a simple linear model? 
", "Thanks for your thoughtful response.\n\n\"Regarding the chosen task, my opinion is more lukewarm: results obtained by the other methods are completly awful. They are so far from the ground truth that it is difficult to consider them as baselines (in fact, on the proposed exemples, baselines give results at the opposite from what is expected).\"\n\nWe agree with you that it is surprising how poorly prior methods perform, and also that CD provides significant improvements over multiple state of the art baselines. It is worth noting that work in interpreting predictions made by LSTMs is still very young. To the best of our knowledge, the first paper in this area, was presented one year ago at ICLR 2017 [1]. Moreover, concurrent ICLR 2018 submissions [2][3] have also presented evidence that existing interpretation methods for neural networks can perform quite poorly. This is to say, the problem of interpreting neural networks in general, and LSTMs in particular, is far from solved. Hopefully this makes the shortcomings of existing work slightly less surprising. We believe that we have provided strong evidence that our method has made substantial progress in an area where it is clearly needed. \n\n\"In that sense, I still would like to compare CD interpretation with real method adapted to fine grained sentiment classification.\nAs the author propose a quantitative analysis regarding their approach, we would like to compare those figures with the state of the art.\"\n\nThe problem we are trying to solve is that of producing explanations for individual predictions made by an LSTM. To the best of our knowledge, the methods we compare against are state of the art for solving this problem. If you know of any additional methods for solving this problem, we would appreciate if you could point them out to us. \n\nMoreover, this work serves as the first paper to extract interactions from LSTMs (as presented in 4.5), an important task for which there is no prior work. While we use leave one out as a baseline, it was never claimed to perform this task, and struggles accordingly. Again, we would appreciate pointers to any additional methods.\n\nAs a side note, by \"real method\", we assume you mean a method which is state of the art for predictive accuracy. It is worth noting that, for Stanford Sentiment Treebank, which is still actively published on, the state of the art is dominated by deep learning, largely LSTMs with various modifications. For instance, here's a recent paper from openAI [4] (see figure 2 on page 3), co-authored by Ilya Sutskever.\n\n[1] https://arxiv.org/abs/1702.02540 - WJ Murdoch, A Szlam, Automatic Rule Extraction from Long Short Term Memory Networks\n[2] https://openreview.net/forum?id=H1xJjlbAZ - anon., Interpretation of neural networks is fragile\n[3] https://openreview.net/forum?id=r1Oen--RW - anon., The (un)reliability of salience methods\n[4] https://arxiv.org/pdf/1704.01444.pdf - A Radford, R Jozefowicz, I Sutskever, Learning to Generate Reviews and Discovering Sentiment", "The authors focus on the interpretation of an existing LSTM. They use the sentiment classification task to illustrate the behavior of their approach. The work done on LSTM is interesting, explaining how the latent representation at t has been built from x_t and h_{t-1}. \nRegarding the chosen task, my opinion is more lukewarm: results obtained by the other methods are completly awful. 
They are so far from the ground truth that it is difficult to consider them as baselines (in fact, on the proposed exemples, baselines give results at the opposite from what is expected). In that sense, I still would like to compare CD interpretation with real method adapted to fine grained sentiment classification.\nAs the author propose a quantitative analysis regarding their approach, we would like to compare those figures with the state of the art.\n", "Thanks for the detailed and thoughtful review. We've responded to some of your comments below.\n\n\"In this paper, the authors propose a new LSTM variant that allows to produce interpretations by capturing the contributions of the words to the final prediction and the way their learned representations are combined in order to yield that prediction. \"\nTo clarify, we are not proposing a new neural architecture. Our new method, contextual decomposition, is an interpretation method for a standard LSTM. Given a trained LSTM, it can be applied without altering the underlying model, re-training, or any additional work. We think that this is more impactful than proposing a new architecture, as it doesn't force users to alter their model to get interpretations, nor to sacrifice the LSTM's predictive accuracy.\n\nThis was a common misconception across reviewers, so we updated our abstract (lines 4-6), introduction (paragraph 2, lines 1-3) and conclusion (lines 1-2) to clarify this distinction.\n\n\"The writing could be improved as it contains parts where it leads to confusion. The details related to the linearization (section 3.2.2), the training (4.1) could be improved. In equation 25, it is not clear what π_{i}^(-1) and x_{π_{i}} represent but the example in equation 26 makes it clearer. The section would be clearer if each index and notation used is explained explicitly.\"\nThanks for pointing out these areas for improvement. We've expanded 3.2.2 to make it clearer and better motivate our notation, adding equation 26 and re-writing the subsequent paragraph. The x_{\\pi_{i}} you note was actually a typo - it has been corrected to y_{\\pi_{i}}. \n\nFor 4.1, we split off the baseline portion of 4.1 into 4.1.3, which should make it clearer, while also addressing some of your concerns below around introducing, citing and describing baselines.\n\n\"Training details are given in section 4.1 where the authors mention the integrated gradient baseline without mentioning the reference paper to it (however they do mention the reference paper at each of table 1 and 2). it would be clearer for the reader if the references are also mentioned in section 4.1 where integrated gradient is introduced. Along with the reference, a brief description of that baseline could be given. \"\nIntegrated gradients is now properly referenced. As discussed above, we added 4.1.3, which includes proper references, and refers the reader to the related work section for a description (We felt that reproducing descriptions of baselines in 4.1.3 would basically require replicating the related work paragraph).\n\n\"The “Leave one out” baseline is never mentioned in text before section 4.4 (and tables 1 and 2). Neither the reference nor the description of the baseline are given. It would have been clearer to the reader if this had been the case.\"\nThanks for pointing this out. We added a mention and citation of leave one out in the new baselines section, 4.1.3. 
Although the paper and method were discussed in related work (paragraph 1, lines 5-7), we added a reference to the name “leave one out” for clarity. ", "\"- the recent success of LSTMs is often associated with their ability to learn complex and non-linear relationships but the proposed method is based on the linearization of the network. How can the algorithm be able to capture non-linear interactions? What is the difference between the proposed model and a simple linear model? \"\nWe hope that our earlier comments resolve this question. In particular, our proposed method is not a separate prediction method, but rather an interpretation method for a standard LSTM. In response to this confusion, we have updated our abstract (lines 4-6), introduction (paragraph 2, lines 1-3), conclusion (lines 1-2), and added equation 10.\n", "Thank you for your helpful comments. \n\"A nice contribution of the paper is to show that this can highly improve classification performance on the task of sentiment analysis. \"\nWe actually do something different than what this implies. CD is an algorithm for producing interpretations of LSTM predictions. In particular, given a trained LSTM, it produces interpretations for individual predictions without modifying the LSTM's architecture in any way, leaving the predictions, and accuracy, unchanged. The purpose of our evaluation is not to improve predictive performance but rather to demonstrate that these importance scores accurately reflect the LSTMs dynamics (e.g. negation, compositionality) and, in particular, do so better than prior methods. We have updated our abstract (lines 4-6), introduction (paragraph 2, lines 1-3), conclusion (lines 1-2), and added equation 10 in order to avoid similar misunderstandings by future readers.\n\"However, the author could have spent some more time in explaining the technical consequences of the proposed linear approximation. For example, why is the linear approximation always good? And what is the performance loss compared to a fully nonlinear network? \"\nOur linearization is not an approximation, it is exact. CD produces an exact decomposition of the values fed into the LSTM’s final softmax into a sum of two terms: contributions resulting solely from the specified phrase, and others. Mathematically, this is shown in the newly added equation 10. Moreover, CD is used for interpretation of the original LSTM, not as a separate prediction algorithm, so that there is no performance loss.\n\"- is any of the existing algorithms used for the comparison supposed to capture interactions between words and phrases? If not, why is the proposed algorithm compared to them on interaction related tasks?\"\nThis is a great question, we assume you’re referencing the finding in 4.5 that CD can extract negations. To the best of our knowledge, no prior algorithm has made the claim of being able to extract interactions from LSTMs. Our ability to do so is a significant contribution. Although not previously discussed, the leave one out method can be adapted to produce an interaction value, which we report in figure 1. However, the produced interactions don't seem to contain much information, perhaps explaining why they were not included in the original paper. Nonetheless, leave one out is the only baseline we are aware of, so we thought it important to report them for comparison.\n\"why the output of the algorithms is compared with the logistic regression score? 
May the fact that logistic regression is a linear model be linked to the good performance of the proposed method?\"\nI assume you're referencing 4.2 here. When the underlying model is sufficiently accurate (which it is in our case), logistic regression coefficients are generally viewed to provide qualitatively sensible importance scores. In other words, the ordering provided by the coefficients generally lines up very well with what humans qualitatively view as important. Thus, a sensible check for the behaviour of an interpretation algorithm is whether or not it can recover qualitatively similar coefficients, as measured by correlation.\n\nTo elaborate, if a logistic regression coefficient is very positive, such as for “terrific”, we would expect the word importance score extracted from an LSTM to also be very positive. Similarly, if the logistic regression coefficient is zero, such as for “the”, we would expect the LSTM's word importance to be quite small. We do not expect these relationships to be perfect, but the fact that they are reasonably strong supports our claim that our method produces comparable or superior word importance scores.\n\n“Would it be possible to compare the output of the algorithms with human given scores on a small subset of words?”\nThe problem of running human experiments to validate interpretations is an interesting and active research area. However, running such experiments is a substantive endeavour, which unfortunately puts it outside the scope of this paper. We do agree that it would be an exciting prospect for future work, though. Nonetheless, as discussed above, we do believe that our approach provides valuable information, if not as much as a full human experiment.", "Thank you for your helpful comments. As you'll see below, they lead to a meaningful improvement in the framing of our paper.\n\nThe problem we are solving is not extracting interactions, in a general sense, from text data, nor is it predicting sentiment. The problem we are solving is, for a given, trained, LSTM, explaining individual predictions being made, without modifying the architecture. This is an important distinction, which informs what denotes related work, and what methods we compare against. It is also one that was not entirely clear in the original version, and we so we updated our abstract (lines 4-6), introduction (paragraph 2, lines 1-3), conclusion (lines 1-2) and added equation 10 to better express this. When framed in this way, we believe that our baselines are the correct ones to demonstrate the novelty of our results.\n\n\"In the experiment section, authors compare themselves to a standard logistic regression (based on a bag of words representation). They also check the unigram sentiment scores (without context).\nThe main issue consists in modeling the dynamics inside a sentence (when a negation or a 'used to be' reverses the sentiment). The proposed approach works fine on selected samples.\"\n\nTo clarify, only section 4.2 compares against a logistic regression, and deals with solely unigram sentiment scores. Sections 4.3-4.6 do not involve logistic regression, and deal with general n-grams and interactions.\n\nIt is worth noting that we were very careful to not rely on \"selected samples\", i.e. cherry-picking, as our primary means of validation. Rather, we provide anecdotes to motivate searches across the full dataset for different types of compositionality, with each of 4.3, 4.4 and 4.5 involving different criteria, such as negation. 
For each of these different instances, we ultimately base our conclusions on the distributions of importance scores extracted from our LSTM across all phrases/reviews containing each kind of compositionality. These distributions can be found in figures 1-4 and provide, in our opinion, a more compelling case.\n\n\"The related work section is entirely focused on deep learning while the experiment section is dedicated to sentiment analysis. This section should be rebalanced. Even if the authors claim that their approach is general, they also show that it fits well the sentiment analysis task in particular.\"\n\nThe primary contribution of this paper is an algorithm for interpreting predictions made by LSTMs, not improved prediction performance on sentiment analysis. Consequently, the related work section focuses on prior work in interpreting deep learning algorithms, particularly LSTMs. In our experiment section, we fit a single LSTM per dataset and analyse the behaviour of the LSTM interpretations produced by CD, along with four interpretation baselines, in different settings. The LSTMs are fit using standard procedures, and we make no claims of state of the art predictive performance from our model.\n\n\"Finally, authors should provide some quantitative analysis on sentiment classification: a lot of standard benchmarks are widely use in the literature and we need to see how the proposed method performs with respect to the state-of-the-art.\"\n\nTo be clear, we assume that when you refer to benchmarks, and ask for performance with respect to state-of-the-art, you are referring to predictive accuracy. We do not claim to be state of the art in terms of predictive accuracy. In fact, as we note in 4.1.1 and 4.1.2, our models follow implementations of baselines used for predictive accuracy in prior papers. \n\nRather, what we do claim is state of the art for interpreting predictions made by an LSTM. To justify this claim, we compare against four LSTM interpretation benchmarks across four different evaluation settings. Given the focus of our paper, we thought these were the most relevant comparisons.\n\nAs we mentioned above, we've updated our abstract (lines 4-6), introduction (paragraph 2, lines 1-3), conclusion (lines 1-2) to clarify this distinction.\n\n\"Given the chosen tasks, this work should be compared to the beermind system: http://deepx.ucsd.edu/#/home/beermind and the associated publication http://arxiv.org/pdf/1511.03683.pdf\"\n\nThanks for the pointer, this paper was a very interesting read. It seems that the focus is primarily on generating reviews for a given user/item pair, and secondarily on predicting sentiment. Given that our paper is focused on interpreting LSTMs, not on generating reviews or predictive performance for sentiment analysis, we are unsure what a meaningful, relevant comparison would look like." ]
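Several of the replies above turn on the exact decomposition the authors call "equation 10". As a hedged summary of the idea as described in this thread (generic symbols, not necessarily the paper's notation): contextual decomposition splits every hidden and cell state of the trained LSTM into a part attributable to the chosen phrase and a remainder,

$$ h_t = \beta_t + \gamma_t, \qquad c_t = \beta_t^c + \gamma_t^c, $$

and, because the gate products are linearized so that they distribute over these sums, the split stays exact up to the pre-softmax logits,

$$ W h_T = W \beta_T + W \gamma_T, $$

with the first term read as the phrase's contribution (its CD score) and the second as everything else. How the sigmoid and tanh outputs are apportioned between the two parts is the linearization detail the reviewer raises about section 3.2.2 and is not reproduced here.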
[ -1, 7, -1, 7, 7, -1, -1, -1, -1, -1, -1 ]
[ -1, 3, -1, 4, 2, -1, -1, -1, -1, -1, -1 ]
[ "SJ9ufS8Ef", "iclr_2018_rkRwGg-0Z", "rJZhYjI7f", "iclr_2018_rkRwGg-0Z", "iclr_2018_rkRwGg-0Z", "BJZscx4mz", "H1fcTJsGf", "rJHhMjFgG", "BykpAJiMG", "B1qEe3txM", "r1k_ETYlM" ]
iclr_2018_Hy7fDog0b
AmbientGAN: Generative models from lossy measurements
Generative models provide a way to model structure in complex distributions and have been shown to be useful for many tasks of practical interest. However, current techniques for training generative models require access to fully-observed samples. In many settings, it is expensive or even impossible to obtain fully-observed samples, but economical to obtain partial, noisy observations. We consider the task of learning an implicit generative model given only lossy measurements of samples from the distribution of interest. We show that the true underlying distribution can be provably recovered even in the presence of per-sample information loss for a class of measurement models. Based on this, we propose a new method of training Generative Adversarial Networks (GANs) which we call AmbientGAN. On three benchmark datasets, and for various measurement models, we demonstrate substantial qualitative and quantitative improvements. Generative models trained with our method can obtain 2-4x higher inception scores than the baselines.
accepted-oral-papers
All three reviewers were positive about the paper, finding it to be on an interesting topic and to have broad applicability. The results were compelling and thus the paper is accepted.
train
[ "Bkpju_8VG", "BJAJzV4xz", "B1oKXx9gG", "Hyxt2gCxz", "SkElzP3mf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author" ]
[ "After reading the other reviews and responses, I retain a favorable opinion of the paper. The additional experiments are especially appreciated.", "Quick summary:\nThis paper shows how to train a GAN in the case where the dataset is corrupted by some measurement noise process. They propose to introduce the noise process into the generation pipeline such that the GAN generates a clean image, corrupts its own output and feeds that into the discriminator. The discriminator then needs to decide whether this is a real corrupted measurement or a generated one. The method is demonstrated to the generate better results than the baseline on a variety of datasets and noise processes.\n\nQuality:\nI found this to be a nice paper - it has an important setting to begin with and the proposed method is clean and elegant albeit a bit simple. \n\nOriginality:\nI'm pretty sure this is the first paper to tackle this problem directly in general.\n\nSignificance:\nThis is an important research direction as it is not uncommon to get noisy measurements in the real world under different circumstances. \n\nPros:\n* Important problem\n* elegant and simple solution\n* nice results and decent experiments (but see below)\n\nCons:\n* The assumption that the measurement process *and* parameters are known is quite a strong one. Though it is quite common in the literature to assume this, it would have been interesting to see if there's a way to handle the case where it is unknown (either the process, parameters or both).\n* The baseline experiments are a bit limited - it's clear that such baselines would never produce samples which are any better than the \"fixed\" version which is fed into them. I can't however, think of other baselines other than \"ignore\" so I guess that is acceptable.\n* I wish the authors would show that they get a *useful* model eventually - for example, can this be used to denoise other images from the dataset?\n\nSummary:\nThis is a nice paper which deals with an important problem, has some nice results and while not groundbreaking, certainly merits a publication.", "The paper explores GAN training under a linear measurement model in which one assumes that the underlying state vector $x$ is not directly observed but we do have access to measurements $y$ under a linear measurement model plus noise. The paper explores in detail several practically useful versions of the linear measurement model, such as blurring, linear projection, masking etc. and establishes identifiability conditions/theorems for the underlying models.\nThe AmbientGAN approach advocated in the paper amounts to learning end-to-end differentiable Generator/Discriminator networks that operate in the measurement space. The experimental results in the paper show that this works much better than reasonable baselines, such as trying to invert the measurement model for each individual training sample, followed by standard GAN training.\nThe theoretical analysis is satisfactory. However, it would be great if the theoretical results in the paper were able to associate the difficulty of the inversion process with the difficulty of AmbientGAN training. For example, if the condition number for the linear measurement model is high, one would expect that recovering the target real distribution is more difficult. The condition in Theorem 5.4 is a step in this direction, showing that the required number of samples for correct recovery increases with the probability of missing data. 
It would be great if Theorems 5.2 and 5.3 also came with similar quantitative bounds.", "The paper proposes an approach to train generators within a GAN framework, in the setting where one has access only to degraded / imperfect measurements of real samples, rather than the samples themselves. Broadly, the approach is to have a generator produce the \"full\" real data, pass it through a simulated model of the measurement process, and then train the discriminator to distinguish between these simulated measurements of generated samples, and true measurements of real samples. By this mechanism, the proposed method is able to train GANs to generate high-quality samples from only imperfect measurements.\n\nThe paper is largely well-written and well-motivated, the overall setup is interesting (I find the authors' practical use cases convincing---where one only has access to imperfect data in the first place), and the empirical results are convincing. The theoretical proofs do make strong assumptions (in particular, the fact that the true distribution must be uniquely constrained by its marginal along the measurement). However, in most theoretical analysis of GANs and neural networks in general, I view proofs as a means of gaining intuition rather than being strong guarantees---and to that end, I found the analysis in this paper to be informative.\n\nI would make a suggestions for possible further experimental analysis: it would be nice to see how robust the approach is to systematic mismatches between the true and modeled measurement functions (for instance, slight differences in the blur kernels, noise variance, etc.). Especially in the kind of settings the paper considers, I imagine it may sometimes also be hard to accurately model the measurement function of a device (or it may be necessary to use a computationally cheaper approximation for training). I think a study of how such mismatches affect the training procedure would be instructive (perhaps more so than some of the quantitative evaluation given that they at best only approximately measure sample quality).", "We appreciate the reviewers’ helpful comments.\n\nReviewer1 and Reviewer2 both suggest further experimental analysis to evaluate the robustness of our approach to systematic mismatches between the true and modeled measurement functions. This is a great idea and towards this, we have performed the following experiment:\n\nWe consider the observed measurements in the block pixels model with the probability of blocking pixels (p*) = 0.5. We then attempt to use the AmbientGAN setup to learn a generative model without any knowledge of p*. We try several different values of p for the simulated measurements and plot inception score vs the assumed dropout probability p. Please see the plot in Appendix D of the updated pdf.\n\nWe observe that the inception score peaks at the true value and gradually drops on both sides. This suggests that using p only approximately equal to p* still yields a good generative model, indicating that the AmbientGAN setup is robust to systematic mismatches between the true and modeled measurement functions. It would be interesting to analyze the robustness properties further, both empirically and theoretically. \n\nReviewer2's comment also suggests attempting to estimate the parameters of the measurement function. This seems important in practical settings and we thank the reviewer for pointing this out. Going even further, one can also attempt to estimate the measurement function including its function form. 
We remark that distributional assumptions are necessary for any such procedure and it would be interesting to construct and analyze estimators under various settings. For instance, if we know that zero pixels are rare (e.g., the celebA dataset), then we can easily estimate the dropout probability by counting the number of zero pixels in the measurements. Further, since one cannot expect the estimation to be perfect, robustness, as alluded to above, is necessary. We are keen to explore these ideas further.\n\nTo answer Reviewer2’s question about getting a useful model, we attempted to use the GAN learned using our procedure for compressed sensing. Generative models have been shown to improve sensing over sparsity-based approaches (https://arxiv.org/abs/1703.03208). Through the following experiment, we show that a similar improvement is obtained using the GANs learned through the AmbientGAN approach.\n\nWe train an AmbientGAN with block pixels measurements on MNIST with p = 0.5. Using the learned generator, we follow the rest of the procedure in (https://arxiv.org/abs/1703.03208). Using their code (available at https://github.com/AshishBora/csgm) we can plot the reconstruction error vs the number of measurements, comparing Lasso with AmbientGAN. Please see the plot in Appendix D of the updated pdf; we find that the AmbientGAN model gives significant improvements for a wide range of measurements." ]
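The reviews above all describe the same training loop: the generator outputs a full sample, a known (simulated) measurement function corrupts it, and the discriminator only ever compares corrupted generated samples against the observed lossy measurements. A minimal, hypothetical PyTorch sketch of one such step follows; the pixel-blocking measurement, the loss choice, and all names are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of the AmbientGAN idea summarized in the reviews:
# the discriminator never sees full samples, only measurements.
import torch
import torch.nn.functional as F

def block_pixels(x, p=0.5):
    # One possible measurement model from the discussion: each pixel is
    # independently zeroed with probability p. Gradients still flow through x.
    mask = (torch.rand_like(x) > p).float()
    return x * mask

def ambient_gan_step(G, D, z, y_real, opt_g, opt_d, measure=block_pixels):
    bce = F.binary_cross_entropy_with_logits

    # Discriminator: real measurements vs. measurements of generated samples.
    y_fake = measure(G(z)).detach()
    real_logits, fake_logits = D(y_real), D(y_fake)
    d_loss = (bce(real_logits, torch.ones_like(real_logits)) +
              bce(fake_logits, torch.zeros_like(fake_logits)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: gradients flow back through the simulated measurement, pushing
    # the generator toward clean samples whose measurements match the real ones.
    fake_logits = D(measure(G(z)))
    g_loss = bce(fake_logits, torch.ones_like(fake_logits))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return float(d_loss), float(g_loss)
```

The one requirement the reviews emphasize is that measure be something one can simulate (and back-propagate through), which is also where the robustness question about mismatched measurement parameters enters.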
[ -1, 7, 7, 8, -1 ]
[ -1, 4, 4, 4, -1 ]
[ "Hyxt2gCxz", "iclr_2018_Hy7fDog0b", "iclr_2018_Hy7fDog0b", "iclr_2018_Hy7fDog0b", "iclr_2018_Hy7fDog0b" ]
iclr_2018_rJWechg0Z
Minimal-Entropy Correlation Alignment for Unsupervised Deep Domain Adaptation
In this work, we face the problem of unsupervised domain adaptation with a novel deep learning approach which leverages our finding that entropy minimization is induced by the optimal alignment of second order statistics between source and target domains. We formally demonstrate this hypothesis and, aiming at achieving an optimal alignment in practical cases, we adopt a more principled strategy which, differently from the current Euclidean approaches, deploys alignment along geodesics. Our pipeline can be implemented by adding to the standard classification loss (on the labeled source domain), a source-to-target regularizer that is weighted in an unsupervised and data-driven fashion. We provide extensive experiments to assess the superiority of our framework on standard domain and modality adaptation benchmarks.
accepted-poster-papers
This paper presents a nice approach to domain adaptation that improves empirically upon previous work, while also simplifying tuning and learning.
train
[ "HJGANV2Ez", "BkiyM2dgG", "r15hYW5gM", "SkjcYkCgf", "SkhhtBGXG", "SJJSYrGmG", "rkcg_HfXf", "HkkI8HMQM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "The rebuttal addresses most of my questions. Here are two more cents. \n\nThe theorem still does not favor the correlation alignment over the geodesic alignment. What Figure 2 shows is an empirical observation but the theorem itself does not lead to the result.\n\nI still do not think the cross-modality setup is appropriate for studying domain adaptation. That would result in disparate supports to the distributions of the two domains. In general, it is hard to adapt between two such \"domains\" though the additional pairwise relation between the data points of the two \"domains\" could help. Moreover, there has been a rich literature on multi-modality data. It is not a good idea to term it with a new name and meanwhile ignore the existing works on multi-modalities. \n\n", "This paper improves the correlation alignment approach to domain adaptation from two aspects. One is to replace the Euclidean distance by the geodesic Log-Euclidean distance between two covariance matrices. The other is to automatically select the balancing cost by the entropy on the target domain. Experiments are conducted from SVHN to MNIST and from SYN MNIST to SVHN. Additional experiments on cross-modality recognition are reported from RGB to depth.\n\nStrengths:\n+ It is a sensible idea to improve the Euclidean distance by the geodesic Log-Euclidean distance to better explore the manifold structure of the PSD matrices. \n+ It is also interesting to choose the balancing cost using the entropy on the target. However, this point is worth further exploring (please see below for more detailed comments).\n+ The experiments show that the geodesic correlation alignment outperforms the original alignment method. \n\nWeaknesses: \n- It is certainly interesting to have a scheme to automatically choose the hyper-parameters in unsupervised domain adaptation, and the entropy over the target seems like a reasonable choice. This point is worth further exploring for the following reasons. \n1. The theoretical result is not convincing given it relies on many unrealistic assumptions, such as the null performance degradation under perfect correlation alignment, the Dirac’s delta function as the predictions over the target, etc.\n2. The theorem actually does not favor the correlation alignment over the geodesic alignment. It does not explain that, in Figure 2, the entropy is able to find the best balancing cost \\lamba for geodesic alignment but not for the Euclidean alignment.\n3. The entropy alignment seems an interesting criterion to explore in general. Could it be used to find fairly good hyper-parameters for the other methods? Could it be used to determine the other hyper-parameters (e..g, learning rate, early stopping) for the geodesic alignment? \n4. If one leaves a subset of the target domain out and use its labels for validation, how different would the selected balancing cost \\lambda differ from that by the entropy? \n\n- The cross-modality setup (from RGB to depth) is often not considered as domain adaptation. It would be better to replace it by another benchmark dataset. The Office-31 dataset is still a good benchmark to compare different methods and for the study in Section 5.1, though it is not necessary to reach state-of-the-art results on this dataset because, as the authors noted, it is almost saturated. \n\nQuestion:\n- I am not sure how the gradients were computed after the eigendecomposition in equation (8).\n\n\nI like the idea of automatically choosing free parameters using the entropy over the target domain. 
However, instead of justifying this point by the theorem that relies on many assumptions, it is better to further test it using experiments (e.g., on Office31 and for other adaptation methods). The geodesic correlation alignment is a reasonable improvement over the Euclidean alignment.\n", "Summary:\nThis paper proposes minimal-entropy correlation alignment, an unsupervised domain adaptation algorithm which links together two prior class of methods: entropy minimization and correlation alignment. Interesting new idea. Make a simple change in the distance function and now can perform adaptation which aligns with minimal entropy on target domain and thus can allow for removal of hyperparameter (or automatic validation of correct one).\n\nStrengths\n- The paper is clearly written and effectively makes a simple claim that geodesic distance minimization is better aligned to final performance than euclidean distance minimization between source and target. \n- Figures 1 and 2 (right side) are particularly useful for fast understanding of the concept and main result.\n\n\nQuestions/Concerns:\n- Can entropy minimization on target be used with other methods for DA param tuning? Does it require that the model was trained to minimize the geodesic correlation distance between source and target?\n- It would be helpful to have a longer discussion on the connection with Geodesic flow kernel [1] and other unsupervised manifold based alignment methods [2]. Is this proposed approach an extension of this prior work to the case of non-fixed representations in the same way that Deep CORAL generalized CORAL?\n- Why does performance suffer compared to TRIPLE on the SYN->SVHN task? Is there some benefit to the TRIPLE method which may be combined with the MECA approach?\n\n\t\t\t\t\t\n[1] Boqing Gong, Yuan Shi, Fei Sha, and Kristen Grauman. Geodesic flow kernel for unsupervised domain adaptation. In CVPR, 2012.\n\t\t\t\t\t\n[2] Raghuraman Gopalan and Ruonan Li. Domain adaptation for object recognition: An unsupervised approach. In ICCV, 2011. \n", "The authors propose a novel deep learning approach which leverages on our finding that entropy minimization\nis induced by the optimal alignment of second order statistics between source and target domains. Instead of relying on Euclidean distances when performing the alignment, the authors use geodesic distances which preserve the geometry of the manifolds. Among others, the authors also propose a handy way to cross-validate the model parameters on target data using the entropy criterion. The experimental validation is performed on benchmark datasets for image classification. Comparisons with the state-of-the-art approaches show that the proposed marginally improves the results. The paper is well written and easy to understand.\n\nAs a main difference from DeepCORAL method, this approach relies on the use of geodesic distances when doing the alignment of the distribution statistics, which turns out to be beneficial for improving the network performance on the target tasks. While I don't see this as substantial contribution to the field, I think that using the notion of geodesic distance in this context is novel. The experiments show the benefit over the Euclidean distance when applied to the datasets used in the paper. \n\nA lot of emphasis in the paper is put on the methodology part. The experiments could have been done more extensively, by also providing some visual examples of the aligned distributions and image features. 
This would allow the readers to further understand why the proposed alignment approach performs better than e.g. Deep Coral.", "W 5. - The cross-modality setup (from RGB to depth) is often not considered as domain adaptation. It would be better to replace it by another benchmark dataset. The Office-31 dataset is still a good benchmark to compare different methods and for the study in Section 5.1, though it is not necessary to reach state-of-the-art results on this dataset because, as the authors noted, it is almost saturated. \n\nIn domain adaptation, the equivalence between domain and dataset is not automatic and some works have been operating in the direction of discovering domains as a subpart of a dataset (e.g., Gong et al. Reshaping Visual Datasets for Domain Adaptation - NIPS 2013). In this respect, the NYU dataset can be used to quantify adaptation across different sensor modalities within the same dataset.\nThe NYU experiment we carried out was also considered in the following recent domain adaptation works: Tzeng et al. “Adversarial Discriminative Domain Adaptation ICCV 2017” and Volpi et al. “Adversarial Feature Augmentation for Unsupervised Domain Adaptation” ArXiv 2017. We believe such experiment adds a considerable value to our work and we would like to maintain it.\nIn any case, after the reviewer’s suggestion, we are now running the Office-31 experiments. Preliminary results on the Amazon->Webcam split are in line with those already in the paper and coherent with the ones published in Sun & Saenko, 2016: Baseline (no adapt) 58.1%, Deep-Coral +5.9%, MECA +8.7% (Note that we use a VGG as a baseline architecture, while Sun & Saenko, 2016 use AlexNet).\n---\n\nQ 1. - I am not sure how the gradients were computed after the eigendecomposition in equation (8). \n\nAs a common practice, we let the software library to automatically compute the gradients along the computation graph, given the fact that the additive regularizer that we wrote is nothing but a differentiable composition of elementary functions such as logarithms and square exponentiation. Although it’s possible to explicitly write down gradients with formulas, such explicit formalism is not of particular interest and we decided to remove such calculations from the paper in order to reduce verbosity.\n", "We thank the reviewer for having read our work with great detail and for the valuable suggestions. We will address all quoted weaknesses (W) and questions (Q) separately.\n\n\nW 1. - The theoretical result is not convincing given it relies on many unrealistic assumptions, such as the null performance degradation under perfect correlation alignment, the Dirac’s delta function as the predictions over the target, etc. \n \nIn Theorem 1, by assuming the optimal correlation alignment, we can prove that entropy is minimized (which, ancillary, implies the Dirac’s delta function for the predictions). Under a theoretical standpoint, the strong assumption is balanced by the significant claim we have proved. In practical terms, the reviewers is right in observing that the optimal alignment is not granted for free, and this justifies the choice of a more sound metric for correlation alignment. That’s why we proposed the log-Euclidean distance to make the alignment closer to the optimal one.\n--\n\nW 2. - The theorem actually does not favor the correlation alignment over the geodesic alignment. 
It does not explain that, in Figure 2, the entropy is able to find the best balancing cost \\lamba for geodesic alignment but not for the Euclidean alignment. \n\nAs we showed in Figure 2, in the case of geodesic alignment, entropy minimization always correlate with the optimal performance on the target domain. Since the same does not always happen when an Euclidean metric is used, this is an evidence that Euclidean alignment is not able to achieve an optimal correlation alignment which, in comparison, is better achieved through our geodesic approach. \n--\n\nW 3. - The entropy alignment seems an interesting criterion to explore in general. Could it be used to find fairly good hyper-parameters for the other methods? Could it be used to determine the other hyper-parameters (e.g., learning rate, early stopping) for the geodesic alignment?\n\nIt does make sense to fine tune the \\lambda by using target entropy since, ultimately, a low entropy on the target is a proxy for a confident classifier whose predictions are peaky. In other words, since \\lambda regulates the effect of the correlation alignment, it also balances the capability of a classifier trained on the source to perform well on the target. Since in our pipeline \\lambda is the only parameter related to domain adaptation, we deem our choice quite natural. In fact, other free parameters (learning rate, early stopping) are not related to adaptation, but to the regular training of the deep neural network, which can be actually determined by using source data only - as we did in our experiments.\n--\n\nW 4. - If one leaves a subset of the target domain out and use its labels for validation, how different would the selected balancing cost \\lambda differ from that by the entropy?\n\nThe availability of a few labeled samples from the target domain would cast the problem into semi-supervised domain adaptation. Instead, our work faces the more challenging unsupervised scenario. \nIndeed, we propose an unsupervised method which lead to the same results of using labelled target samples for validation. This is shown in the top-right of Figure 2: the blue curve accounts for the best target performance, which is computed by means of target test labels - thus not accessible during training. Differently, the red curve can be computed at training time since the entropy criterion is fully unsupervised. \nFigure 2 shows that the proposed criterion is effectively able to select the \\lambda which corresponds to the best target performance that one could achieve if one was allowed to use target label. Notice that the same does not happen for Deep CORAL (bottom-right) - and the reported results for that competitor were done by direct validation on the target.\n--", "We are thankful for the provided comments and we will respond (A) to each query (Q) in detail.\n\n\nQ 1 - (a) Can entropy minimization on target be used with other methods for DA param tuning? (b) Does it require that the model was trained to minimize the geodesic correlation distance between source and target? \n\nA 1 - (a) Let us point out that we are not minimizing entropy on the target as a regularizing training loss, as previous works did (Tzeng et al. 2015, Haeusser et al. 2017 or Carlucci et al. 2017). For the latter methods, entropy cannot be used as a criterion for parameter tuning, since it is one of the quantities explicitly optimized in the problem. Differently, we obtain the minimum of the entropy as a consequence of an optimal correlation alignment. 
Such criterion could possibly be used for other methods aiming at source-target distribution alignment. (b) Alignment does not *explicitly* require a geodesic distance. However, since the former must be optimal, it cannot be attained with an Euclidean distance, which is the reason why we propose the log-Euclidean one.\n\n\nQ 2. - It would be helpful to have a longer discussion on the connection with Geodesic flow kernel [1] and other unsupervised manifold based alignment methods [2]. Is this proposed approach an extension of this prior work to the case of non-fixed representations in the same way that Deep CORAL generalized CORAL? \n[1] Boqing Gong, Yuan Shi, Fei Sha, and Kristen Grauman. Geodesic flow kernel for unsupervised domain adaptation. In CVPR, 2012.\n[2] Raghuraman Gopalan,, Ruonan Li and Rama Chellappa. Domain adaptation for object recognition: An unsupervised approach. In ICCV, 2011. \n\nA -2 The works [1,2] are kernelized approaches which, by either using Principal Components Analysis [1] or Partial Least Squares [2], a sequence of intermediate embeddings is generated as a smooth transition from the source to the target domain. In [1], such sequence is implicitly computed by means of a kernel function which is subsequently used for classification. In [2], after the source data are projected on hand-crafted intermediate subspaces, classification is performed. \nIn [1] and [2], the necessity for engineering intermediate embeddings is motivated by the need for adapting the fixed input representation so that the domain shift can be solved. As a way to do it, [1] and [2] follow the geodesics on the data manifold. \nIn a very same way, our proposed approach, MECA, follows the geodesics on the manifold (of second order statistics), but, differently, this step is finalized to better guide the feature learning stage. \nFor all these reasons, MECA and [1,2] can be seen as different manners of exploiting geodesic alignment for the sake of domain adaptation.\n\n\nQ 3. - Why does performance suffer compared to TRIPLE on the SYN->SVHN task? Is there some benefit to the TRIPLE method which may be combined with the MECA approach? \n\nA 3 - As we argued in the paper, the performance on SYN to SVHN task is due to the the visual similarity between source and target domain whose relative data distributions are already quite aligned. Also note that TRIPLE already performs better than direct training on the target domain. This could be interpreted as a cue for TRIPLE to perform implicit data augmentation on the source synthetic data (and, indeed, the same could be done in MECA, trying to boost its performance by means of data augmentation). However, when more realistic datasets are used as source, such procedure becomes more difficult to be accomplished and that’s why, on all the other benchmarks, TRIPLE is inferior to MECA in terms of performance.", "We are thankful for the detailed reading and careful evaluation of our work. \n\n\nBy following the proposed suggestion, we added to the Appendix some t-SNE visualizations in which we compare our baseline network with no adaptation against Deep CORAL and MECA on the SVHN to MNIST benchmark. As the we observed, Deep CORAL and MECA achieve a better separation among classes - confirming the quantitative results of Table 1. 
\n\nMoreover, when looking at the degree of confusion between source and target domain achieved within each digit’s class, we can qualitatively show that MECA is better in “shuffling” source and target data than Deep CORAL, in which the two are close but much more separated. This can be read as an additional, qualitative evidence of the superiority of the proposed geodesic over the Euclidean alignment. \n\nThese considerations and further remarks have been discussed in the revised paper (appendix). " ]
[ -1, 6, 7, 8, -1, -1, -1, -1 ]
[ -1, 5, 5, 4, -1, -1, -1, -1 ]
[ "BkiyM2dgG", "iclr_2018_rJWechg0Z", "iclr_2018_rJWechg0Z", "iclr_2018_rJWechg0Z", "SJJSYrGmG", "BkiyM2dgG", "r15hYW5gM", "SkjcYkCgf" ]
iclr_2018_B1zlp1bRW
Large Scale Optimal Transport and Mapping Estimation
This paper presents a novel two-step approach for the fundamental problem of learning an optimal map from one distribution to another. First, we learn an optimal transport (OT) plan, which can be thought of as a one-to-many map between the two distributions. To that end, we propose a stochastic dual approach of regularized OT, and show empirically that it scales better than a recent related approach when the amount of samples is very large. Second, we estimate a Monge map as a deep neural network learned by approximating the barycentric projection of the previously-obtained OT plan. This parameterization allows generalization of the mapping outside the support of the input measure. We prove two theoretical stability results of regularized OT which show that our estimations converge to the OT and Monge map between the underlying continuous measures. We showcase our proposed approach on two applications: domain adaptation and generative modeling.
accepted-poster-papers
This paper is generally very strong. I do find myself agreeing with the last reviewer though, that tuning hyperparameters on the test set should not be done, even if others have done it in the past. (I say this having worked on similar problems myself.) I would strongly encourage the authors to re-do their experiments with a better tuning regime.
train
[ "rJ81OAtgM", "B1cR-6neM", "H1dxvZWWM", "Skh5eWVWG", "HydEb7aQz", "HJ6TkQpmM", "HkgBvuc7z", "SkAHhVRfG", "Hych0z0MG", "rkVI5fAGz", "H122CDsMG", "By5zFb-bG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "Quality\nThe theoretical results presented in the paper appear to be correct. However, the experimental evaluation is globally limited, hyperparameter tuning on test which is not fair.\n\nClarity\nThe paper is mostly clear, even though some parts deserve more discussion/clarification (algorithm, experimental evaluation).\n\nOriginality\nThe theoretical results are original, and the SGD approach is a priori original as well.\n\nSignificance\nThe relaxed dual formulation and OT/Monge maps convergence results are interesting and can of of interest for researchers in the area, the other aspects of the paper are limited.\n\nPros:\n-Theoretical results on the convergence of OT/Monge maps\n-Regularized formulation compatible with SGD\nCons\n-Experimental evaluation limited\n-The large scale aspect lacks of thorough analysis\n-The paper presents 2 contributions but at then end of the day, the development of each of them appears limited\n\nComments:\n\n-The weak convergence results are interesting. However, the fact that no convergence rate is given makes the result weak. \nIn particular, it is possible that the number of examples needed for achieving a given approximation is at least exponential.\nThis can be coherent with the problem of Domain Adaptation that can be NP-hard even under the co-variate shift assumption (Ben-David&Urner, ALT2012).\nThen, I think that the claim of page 6 saying that Domain Adaptation can be performed \"nearly optimally\" has then to be rephrased.\nI think that results show that the approach is theoretically justified but optimality is not here yet.\n\nTheorem 1 is only valid for entropy-based regularizations, what is the difficulty for having a similar result with L2 regularization?\n\n-The experimental evaluation on the running time is limited to one particular problem. If this subject is important, it would have been interesting to compare the approaches on other large scale problems and possibly with other implementations.\nIt is also surprising that the efficiency the L2-regularized version is not evaluated.\nFor a paper interesting in large scale aspects, the experimental evaluation is rather weak.\n \nThe 2 methods compared in Fig 2 reach the same objective values at convergence, but is there any particular difference in the solutions found?\n\n-Algorithm 1 is presented without any discussion about complexity, rate of convergence. Could the authors discuss this aspect?\nThe presentation of this algo is a bit short and could deserve more space (in the supplementary)\n\n-For the DA application, the considered datasets are classic but not really \"large scale\", anyway this is a minor remark.\nThe setup is not completely clear, since the approach is interesting for out of sample data, so I would expect the map to be computed on a small sample of source data, and then all source instances to be projected on target with the learned map. 
This point is not very clear and we do not know how many source instances are used to compute the mapping - the mapping is incomplete on this point while this is an interesting aspect of the paper: this justifies even more the large scale aspect is the algo need less examples during learning to perform similar or even better classification.\nHyperparameter tuning is another aspect that is not sufficiently precise in the experimental setup: it seems that the parameters are tuned on test (for all methods), which is not fair since target label information will not be available from a practical standpoint.\n\nThe authors claim that they did not want to compete with state of the art DA, but the approach of Perrot et al., 2016 seems to a have a similar objective and could be used as a baseline.\n\nExperiments on generative optimal transport are interesting and probably generate more discussion/perspectives.\n\n--\nAfter rebuttal\n--\nAuthors have answered to many of my comments, I think this is an interesting paper, I increase my score.\n", "The paper proves the weak convergence of the regularised OT problem to Kantorovich / Monge optimal transport problems.\n\nI like the weak convergence results, but this is just weak convergence. It appears to be an overstatement to claim that the approach \"nearly-optimally\" transports one distribution to the other (Cf e.g. Conclusion). There is a penalty to pay for choosing a small epsilon -- it seems to be visible from Figure 2. Also, near-optimality would refer to some parameters being chosen in the best possible way. I do not see that from the paper. However, the weak convergence results are good.\n\nA better result, hinting on how \"optimal\" this can be, would have been to guarantee that the solution to regularised OT is within f(epsilon) from the optimal one, or from within f(epsilon) from the one with a smaller epsilon (more possibilities exist). This is one of the things experimenters would really care about -- the price to pay for regularisation compared to the unknown unregularized optimum. \n\nI also like the choice of the two regularisers and wonder whether the authors have tried to make this more general, considering other regularisations ? After all, the L2 one is just an approximation of the entropic one.\n\nTypoes:\n\n1- Kanthorovich -> Kantorovich (Intro)\n2- Cal C <-> C (eq. 4)", "This paper proposes a new method for estimating optimal transport plans and maps among continuous distributions, or discrete distributions with large support size. First, the paper proposes a dual algorithm to estimate Kantorovich plans, i.e. a coupling between two input distributions minimizing a given cost function, using dual functions parameterized as neural networks. Then an algorithm is given to convert a generic plan into a Monge map, a deterministic function from one domain to the other, following the barycenter of the plan. The algorithms are shown to be consistent, and demonstrated to be more efficient than an existing semi-dual algorithm. Initial applications to domain adaptation and generative modeling are also shown.\n\nThese algorithms seem to be an improvement over the current state of the art for this problem setting, although more of a discussion of the relationship to the technique of Genevay et al. 
would be useful: how does your approach compare to the full-dual, continuous case of that paper if you simply replace their ball of RKHS functions with your class of deep networks?\n\nThe consistency properties are nice, though they don't provide much insight into the rate at which epsilon should be decreased with n or similar properties. The proofs are clear, and seem correct on a superficial readthrough; I have not carefully verified them.\n\nThe proofs are mainly limited in that they don't refer in any way to the class of approximating networks or the optimization algorithm, but rather only to the optimal solution. Although of course proving things about the actual outcomes of optimizing a deep network is extremely difficult, it would be helpful to have some kind of understanding of how the class of networks in use affects the solutions. In this way, your guarantees don't say much more than those of Arjovsky et al., who must assume that their \"critic function\" reaches the global optimum: essentially you add a regularization term, and show that as the regularization decreases it still works, but under seemingly the same kind of assumptions as Arjovsky et al.'s approach which does not add an explicit regularization term at all. Though it makes sense that your regularization might lead to a better estimator, you don't seem to have shown so either in theory or empirically.\n\nThe performance comparison to the algorithm of Genevay et al. is somewhat limited: it is only on one particular problem, with three different hyperparameter settings. Also, since Genevay et al. propose using SAG for their algorithm, it seems strange to use plain SGD; how would the results compare if you used SAG (or SAGA/etc) for both algorithms?\n\nIn discussing the domain adaptation results, you mention that the L2 regularization \"works very well in practice,\" but don't highlight that although it slightly outperforms entropy regularization in two of the problems, it does substantially worse in the other. Do you have any guesses as to why this might be?\n\nFor generative modeling: you do have guarantees that, *if* your optimization and function parameterization can reach the global optimum, you will obtain the best map relative to the cost function. But it seems that the extent of these guarantees are comparable to those of several other generative models, including WGANs, the Sinkhorn-based models of Genevay et al. (2017, https://arxiv.org/abs/1706.00292/), or e.g. with a different loss function the MMD-based models of Li, Swersky, and Zemel (ICML 2015) / Dziugaite, Roy, and Ghahramani (UAI 2015). The different setting than the fundamental GAN-like setup of those models is intriguing, but specifying a cost function between the source and the target domains feels exceedingly unnatural compared to specifying a cost function just within one domain as in these other models.\n\nMinor:\n\nIn (5), what is the purpose of the -1 term in R_e? It seems to just subtract a constant 1 from the regularization term.", "This paper explores a new approach to optimal transport. Contributions include a new dual-based algorithm for the fundamental task of computing an optimal transport coupling, the ability to deal with continuous distributions tractably by using a neural net to parameterize the functions which occur in the dual formulation, learning a Monge map parameterized by a neural net allowing extremely tractable mapping of samples from one distribution to another, and a plethora of supporting theoretical results. 
The paper presents significant, novel work in a straightforward, clear and engaging way. It represents an elegant combination of ideas, and a well-rounded combination of theory and experiments.\n\nI should mention that I'm not sufficiently familiar with the optimal transport literature to verify the detailed claims about where the proposed dual-based algorithm stands in relation to existing algorithms.\n\nMajor comments:\n\nNo major flaws. The introduction is particular well written, as an extremely clear and succinct introduction to optimal transport.\n\nMinor comments:\n\nIn the introduction, for VAEs, it's not the case that f(X) matches the target distribution. There are two levels of sampling: of the latent X and of the observed value given the latent. The second step of sampling is ignored in the description of VAEs in the first paragraph.\n\nIn the comparison to previous work, please explicitly mention the EMD algorithm, since it's used in the experiments.\n\nIt would've been nice to see an experimental comparison to the algorithm proposed by Arjovsky et al. (2017), since this is mentioned favorably in the introduction.\n\nIn (3), R is not defined. Suggest adding a forward reference to (5).\n\nIn section 3.1, it would be helpful to cite a reference to support the form of dual problem.\n\nPerhaps the authors have just done a good job of laying the groundwork, but the dual-based approach proposed in section 3.1 seems quite natural. Is there any reason this sort of approach wasn't used previously, even though this vein of thinking was being explored for example in the semi-dual algorithm? If so, it would interesting to highlight the key obstacles that a naive dual-based approach would encounter and how these are overcome.\n\nIn algorithm 1, it is confusing to use u to mean both the parameters of the neural net and the function represented by the neural net.\n\nThere are many terms in R_e in (5) which appear to have no effect on optimization, such as a(x) and b(y) in the denominator and \"- 1\". It seems like R_e boils down to just the entropy.\n\nThe definition of F_\\epsilon is made unnecessarily confusing by the omission of x and y as arguments.\n\nIt would be great to mention very briefly any helpful intuition as to why F_\\epsilon and H_\\epsilon have the forms they do.\n\nIn the discussion of Table 1, it would be helpful to spell out the differences between the different Bary proj algorithms, since I would've expected EMD, Sinkhorn and Alg. 1 with R_e to all perform similarly.\n\nIn Figure 4 some of the samples are quite non-physical. Is their any helpful intuition about what goes wrong?\n\nWhat cost is used for generative modeling on MNIST?\n\nFor generative modeling on MNIST, \"784d vector\" is less clear than \"784-dimensional vector\". The fact that the variable d is equal to 768 is not explicitly stated.\n\nIt seems a bit strange to say \"The property we gain compared to other generative models is that our generator is a nearly optimal map w.r.t. this cost\" as if this was an advantage of the proposed method, since arguably there isn't a really natural cost in the generative modeling case (unlike in the domain adaptation case); the latent variable seems kind of conceptually distinct from observation space.\n\nAppendix A isn't referred to from the main text as far as I could tell. Just merge it into the main text?\n\n\n\n", "We have added the following remark in the paragraph \"Algorithm\" of page 4 of the updated submission:\n\"Genevay et al. 
(2016) used the same stochastic dual maximization approach to compute the regularized OT objective in the continuous-continuous setting. The difference lies in their pamaterization of the dual variables as kernel expansions, while we decide to use deep neural networks.\"\n\nWe also discussed their cost per iteration, compared to ours, in the paragraph \"\nConvergence rates and computational cost comparison.\" when referring to the continuous-continuous setting.\n\nThank you for pointing this out.\n", "We would like to thank you again for your in-depth review of our submission. Your detailed comments and questions have helped us improve the manuscript and we hope the updated version fulfills your recommendations.\n\nIn this research, we have tackled a complicated and open problem which is the computation of optimal transport and optimal mappings in high dimensions for measures on a large (or even continuous) support. This has been possible through the recent developments of optimal transport theory and machine learning techniques.\n\nWe have carefully read and acknowledge your remarks but are not able to positively reply to all, and may keep if as future work:\n- First, about the Algorithm 1, we agree that a more thorough numerical convergence analysis on more datasets would be of great interest. After careful thinking, we believe this rigorous analysis requires implementations in a similar framework as well as a variety of datasets that encompasses several ground space dimensions and sizes of distributions. We choose to postpone this study to further analysis and works, as it is not in our opinion the main direction of our paper, which rather focus on the learning of optimal maps.\n- Second, we understand convergence rates of discrete regularized OT plans would be a nice addition to our convergence results. As mentioned in the discussion, convergence of discrete OT plan (and not only the OT objective) has not been studied, to our knowledge, in the literature. We believe our convergence results are a first step in that direction.\n\nLet us thank you again for your careful reviewing and interesting discussion.\n\nBest wishes,\nThe authors.", "Thanks for your replies.\n\nI would strongly recommend clarifying the relationship to the full-dual case of Genevay et al. in the paper. To a reader not intimately familiar with the previous work, it reads as if your dual formulation is also novel, whereas really you're mainly proposing replacing the RKHS with a neural network (and so achieving much better results).", "Dear reviewer,\n\nWe thank you for your positive review and have updated the paper accordingly.\n\n“-The weak convergence results are interesting. However, the fact that no convergence rate is given makes the result weak. […] approximation is at least exponential.”\n\nWe thank Reviewer4 for the clarification and reference. Indeed, we expect that the number of samples to achieve a given error on the OT plans grows exponentially with the dimension since it was proven in the case of the OT objective (Boissard (2011), Sriperumbudur et al. (2012), Boissard & Le Gouic (2014)), and we expect the behavior is at least as ‘bad’ for the convergence of OT plans. An interesting line of research, mentioned in conclusion of Weed and Bach (2017) is to investigate whether regularization helps improve these rates. \n\nRegarding the convergence rates of empirical OT plans, we believe this is an interesting but complex topic which deserves a study in its own right. 
To our knowledge, there are works proving convergence rates of the empirical OT objective (see ref. above), but none about convergence rates of OT plans.\n\n“[...]DA can be performed \"nearly optimally\" has then to be rephrased.”\n\nWe agree and have rephrased accordingly.\n\n“Theorem 1[...], what is the difficulty for having a similar result with L2 regularization?”\n\nOur proofs rely partly on asymptotic convergence rates of entropy-reg linear programs established by Cominetti & Saint Martin (1994). To our knowledge, no extension has been obtained for L2-reg linear programs, which prevents us from adapting our proofs. Extending these results to the L2 case would be indeed of great interest.\n\n“-The experimental evaluation on the running time is limited […]. It is also surprising that the efficiency the L2-regularized version is not evaluated. […], the experimental evaluation is rather weak.”\n\nNo algorithm for computing the L2-reg OT in large-scale or continuous settings have been proposed. Hence, we do not know other algorithms to compare with.\n\nWe mention that our experiments are large-scale considering the OT problem. For ex., Genevay et al. (2016) considered measures supported on 20k samples, while measures in our numerical-speed xp have 250k samples. However, we agree that more numerical-speed xps would make our proposed Alg. 1 more convincing and will add experiments.\n \n“The 2 methods compared in Fig 2 [...], is there any particular difference in the solutions found?”\n\nWe performed speed-comparison experiments in the discrete setting, where the dual objective is strictly concave with a unique solution. The semi-dual objective is also strictly concave, and the dual variable solution of the semi-dual is the same as the first dual variable of the dual problem.\n\n“-Algorithm 1 is presented without any discussion about complexity, rate of convergence.”\n\nWe agree and have added a paragraph “Convergence rates and computational cost comparison”.\n\n“The setup is not completely clear, since the approach is interesting for out of sample data, so I would expect the map to be computed on a small sample of source data, and then all source instances to be projected on target with the learned map. […] needs less examples during learning to perform similar or even better classification.”\n\nOne of our contribution is indeed to allow out-of-sample prediction which avoids learning again a full transport map if one dataset is augmented. But learning a Monge map is a very difficult problem and one should use all the available data, which is now possible thanks to our proposed stochastic algorithms. The fact that Perrot et al. (2016) used at most 1000 samples was due to the numerical complexity of the mapping estimation alg.\n\n“Hyperparameter tuning is another aspect that is not sufficiently precise in the experimental setup: it seems that the parameters are tuned on test […].”\n\nThe parameter validation tuned on test is indeed unrealistic because we have indeed not access to target samples labels in practice. Still we believe it is reasonable and fair since it allows all methods to work at their best, without relying on approximate validation that might benefit one method over another. 
Note that unsupervised DA validation is still an open problem: some authors perform as we did; or do validation using labels; others do more realistic but less stable techniques such as circular validation.\n\n“The authors claim that they did not want to compete with state of the art DA, but the approach of Perrot et al., 2016 seems to a have a similar objective and could be used as a baseline.”\n\nWe cannot compare fairly to Perrot et al. (2016)  since they used a very small number of sample to estimate a map. But this would be a good baseline to show the importance of learning with a many samples. The method will be added to the xps very soon.\n\nReference not in the paper:\n-Weed, Jonathan, and Francis Bach. \"Sharp asymptotic and finite-sample rates of convergence of empirical measures in Wasserstein distance.\" arXiv\n", "Dear reviewer,\n\nWe thank you for your positive review and relevant comments.\n\n\"I like the weak convergence results, but this is just weak convergence. It appears to be an overstatement to claim that the approach \"nearly-optimally\" transports one distribution to the other (Cf e.g. Conclusion). There is a penalty to pay for choosing a small epsilon -- it seems to be visible from Figure 2. Also, near-optimality would refer to some parameters being chosen in the best possible way. I do not see that from the paper. However, the weak convergence results are good.\"\n\nTheorem 1. proves weak convergence of regularized discrete plans. This is a natural convergence for random variables (we emphasize that weak convergence is equivalent to the convergence w.r.t., for instance, the Wasserstein distance). Regarding the convergence of Monge maps (in Theorem 2), other types of convergence, such as convergence in probability, would be of great interest indeed. We may consider this problem in some future work. \n\nUsing the term 'nearly-optimality' was indeed vague as we have not defined what ‘nearly’ means. We have removed this expression from the the paper. Otherwise, ‘optimal’ or ‘optimality’ refers to a solution of either the Monge problem (1), the OT problem (2), or the regularized OT problem (3).\n\n“A better result, hinting on how \"optimal\" this can be, would have been to guarantee that the solution to regularised OT is within f(epsilon) from the optimal one, or from within f(epsilon) from the one with a smaller epsilon (more possibilities exist). This is one of the things experimenters would really care about -- the price to pay for regularisation compared to the unknown unregularized optimum.”\n\nWe can indeed consider two cases when to measure how a solution to the regularized OT (ROT) problem is ‘optimal’:\n- How close in the solution of ROT to the solution of OT w.r.t. a given norm: in the discrete case, the paper of Cominetti & Saint Martin (1994) proves asymptotic exponential convergence rate for the entropic regularization case. We are not aware of similar result for the L2 regularization, which would be of great interest and deserves a study in its own right. In the continuous-continuous case, the recent paper from Carlier et al. (2017) only provides convergence results of entropy-regularized plans.\n- How optimal is the OT objective computed with the solution of ROT: in that case various bounds about the ROT objective compared to the OT objective can be used. See for example Blondel et al. 
(2017) which provides bounds for both entropic and L2 regularizations.\n\n“I also like the choice of the two regularisers and wonder whether the authors have tried to make this more general, considering other regularisations ? After all, the L2 one is just an approximation of the entropic one.”\n\nThis is indeed be possible. To extend our approach seamlessly, it would be sufficient that the regularizer R verifies: convexity, which ensures that the dual is well defined and unconstrained, and decomposability, which provides a dual of the form Eq. (6). More details are given in Blondel et al. (2017). We have added a small discussion about it in the main text of the updated paper, in the paragraph \"Regularized OT dual\".\n\n“Typoes:\n1- Kanthorovich -> Kantorovich (Intro)\n2- Cal C <-> C (eq. 4)”\n\nThis has been corrected, thank you.\n\nReferences not in the paper:\nCarlier, Guillaume, et al. \"Convergence of entropic schemes for optimal transport and gradient flows.\" SIAM Journal on Mathematical Analysis 49.2 (2017): 1385-1418.", "Dear reviewer,\n\nWe thank you for your positive review and detailed comments.\n\n\"how does your approach compare to the full-dual, continuous case of that paper [..]”\n\nConceptually there is no difference. The main advantage of using NNs lies in the implementation side: using kernel expansions has a O((iteration index)^2) cost per iterations, while using NNs keeps a constant O(batch size) cost. \n\nWe have added a paragraph “Convergence rates and computational cost comparison”.\n\n\"The consistency properties are nice, though they don't provide much insight into the rate [..].”\n\n-For a fixed number of samples and the reg. decreasing to 0: Cominetti & Saint Martin (1994) proved an exponential rate for the convergence of the entropy-reg. OT plans to a non-regularized OT plan. This asymptotic result does not let infer a regularization value to achieve a given error. Building on top of these results would deserve a study in its own right.\n\n-When reg. is fixed (or 0), and the number of samples grows to inf.: Several works study convergence rates of empirical Wasserstein distances (i.e. the OT objective between empirical measures). Boissard (2011), Sriperumbudur et al. (2012) (thanks to reviewer 4 for this ref.), Boissard & Le Gouic (2014), to name a few. However we are not aware of work addressing the same questions for the empirical OT plans (and not just the OT objective). We believe this problem is more complicated.\n\nSince our results relate to the convergence of OT plans, we believe they are new and of interest. Without them, our discussion in the introduction and experiments would not be theoretically well grounded: we could not justify that the image of a source measure through the learned Monge map approximates well the target measure, at least for some n big and eps small (Corollary 1). We understand that convergence rates are more useful and will investigate this in future work.\n\n\"The proofs are mainly limited in that they don't refer in any way to the class of approximating networks [...] essentially you add a regularization term, and show that as the regularization decreases it still works, but under seemingly the same kind of assumptions as Arjovsky et al.'s approach which does not add an explicit regularization term at all. [...]\"\n\nIn a discrete setting, our Alg. 
1 computes the exact regularized-OT since we are maximizing a concave objective without parameterizing the dual variables.\n\nWhenever the problem involves continuous measure(s), our NN parameterization only gives the exact solution when the latter belongs to this approximating class of NNs (and the global maximum is obtained).\n\nAs you wrote, we do believe that this parameterization provides a “smoother” solution. But as we already have some entropic or L2 regularization in the OT problem, we find it complicated to analyze. Still, we agree that this is an interesting problem to investigate.\n \nArjovsky et al. used indeed the same idea of deep NN parameterization. However, their NN has to Lipschitz, which they enforce by weights clipping. This is unclear whether a NN with bounded weights can approximate any Lipschitz function. In our case, there is no restriction on the type of NNs.\n\n\"The performance comparison to the algorithm of Genevay et al. is somewhat limited [...] how would the results compare if you used SAG (or SAGA/etc) for both algorithms?\"\n\nWe plan to add numerical-speed experiments in the paper soon.\n\nGenevay et al. used SAG in the discrete setting (but used SGD in other settings). We prefer 1) providing a unified alg. regardless of the measures being discrete or continuous, 2) proposing an alg. which fits in automatic-differentiation softwares (Tensorflow, Torch etc.), which often do not support SAG.\n\n\"In discussing the domain adaptation results, you mention that the L2 regularization \"works very well in practice,\" [...].\"\n\nWe have removed this sentence. It is still unclear which regularization works better in practice depending on the problem. Our only claim is that the L2 reg. is numerically more stable than the entropic one.\n\n\"For generative modeling: you do have guarantees that, *if* [...] but specifying a cost function between the source and the target domains feels exceedingly unnatural […].”\n\nIndeed, most generative models focus on fitting a generator to a target distribution, without optimality criteria. Yet we believe that looking for generator which has properties can be useful for some applications. We see this experiment as a proof-of-concept that the learned Monge map can be good generator. We encourage and will consider future work where optimality of the generator (w.r.t. to a cost) is important (such as image-to-image / text-to-text translations).\n\n\"In (5), what is the purpose of the -1 term in R_e? It seems to just subtract a constant 1 from the regularization term.”\n\nYou are right. It provides a simpler formulation in the primal-dual relationship Eq. (7) (this is also used in Genevay et al (2016), Peyré (2016)).", "Dear reviewer,\n\nThank you very much for your positive review and detailed comments. Please find below our replies to your comments.\n\n\"In the introduction, for VAEs, it's not the case that f(X) matches the target distribution. [...] The second step of sampling is ignored in the description of VAEs in the first paragraph.\"\n\nAt training time, there are indeed two neural networks involved in the VAE model, one encoder and one decoder. Here, we refer to X as the latent variable, i.e. the distribution obtained by the image of the input data by the encoder. We hence refer to f as the decoder network. 
With these notations, we believe that f is learned so that f(X) matches the distribution of the input data.\n\n\"In the comparison to previous work, please explicitly mention the EMD algorithm, since it's used in the experiments.\"\n\nWe used a c++ implementation of the network simplex algorithm (http://liris.cnrs.fr/~nbonneel/FastTransport/). We have added this link as a footnote.\n\n\"It would've been nice to see an experimental comparison to the algorithm proposed by Arjovsky et al. (2017), since this is mentioned favorably in the introduction.\"\n\nOur algorithm shares indeed similarities with the one proposed by Arjovsky et al. (2017): they both use NN parameterizations of the OT dual variables. Both our algorithms have the same complexity. However, in our case, we compute regularized OT, while Arjovsky et al. (2017) unregularized OT. Hence, we found it more relevant to compare to Genevay et al. (2016) who computed exactly the same objective as us (in the entropy reg. case).\n\n\"Is there any reason this sort of approach wasn't used previously, even though this vein of thinking was being explored for example in the semi-dual algorithm?\"\n\nLet us emphasize that the simplex algorithm is an efficient OT solver for measures supported up to a few thousands samples, and may be suitable in many applications. \n\nIt seems that the need to compute OT in large-scale settings is largely driven by the machine-learning community, with the recent idea that the OT objective can be a powerful loss function (Rolet et al. (2016), Arjovsky et al. (2017)), as well as the OT plans can be used to perform domain adaptation (Courty et al. (2016).\n\nMoreover, our dual approach is simple thanks to the convex regularization of the primal OT problem, which was also introduced relatively recently (Cuturi (2013)).\n\nFinally, our approach is not flawless: the use of deep NN makes the problem non-convex (in the semi-discrete and continuous-continuous cases). \n\n\"In algorithm 1, it is confusing to use u to mean both the parameters of the neural net and the function represented by the neural net.\"\n\nWe wanted to emphasize that our algorithm is conceptually the same in all settings (discrete-discrete, semi-discrete and continuous-continuous). We are thinking of better notations to make it less confusing. \n\n\"There are many terms in R_e in (5) which appear to have no effect on optimization, such as a(x) and b(y) in the denominator and \"- 1\". It seems like R_e boils down to just the entropy.\"\n\nWe have removed a and b from the text. We can remove the -1 in the entropy regularizer, but 1) it would make the primal-dual relationship less ‘simple’ and 2) this would not be in line with the work of Genevay et al. (2016) or Peyré (2016).\n\n\"I would've expected EMD, Sinkhorn and Alg. 1 with R_e to all perform similarly.\"\n\nWe had a typo in the result of the “Bar. proj. Alg. 1 R_e” case. We have rerun the experiment and it indeed performs similarly as Sinkhorn, as expected. We apologize for that. \nEMD (i.e. non-regularized OT) is not expected to perform as Sinkhorn since the regularization has an effect of the OT plan and hence on the barycentric projection.\n\n\"In Figure 4 some of the samples are quite non-physical. Is their any helpful intuition about what goes wrong?\"\n\nThe barycentric projection performs an averaging (w.r.t. the squared Euclidean metric which was chosen in that xp) between target samples (weighted according to the optimal plan). 
In some case, this averaging might lead to so these non-physical shapes.\n\n\"What cost is used for generative modeling on MNIST?\"\n\nWe used the squared Euclidean distance.\n\n\"It seems a bit strange to say \"The property we gain compared to other generative models is that our generator is a nearly optimal map w.r.t. this cost\" [...] the latent variable seems kind of conceptually distinct from observation space.\"\n\nIndeed, most generative models do not need to have an 'optimal' the generator. Yet we believe that looking for map which has certain (regularity) properties can be useful for some applications. We may consider further work in generative modeling where optimality of the mapping (w.r.t. to a given cost) can be important (such as image-to-image translation).\n\nReferences:\nPeyré, Gabriel. \"Entropic approximation of Wasserstein gradient flows.\" SIAM Journal on Imaging Sciences 8.4 (2015): 2323-2351.", "\"In particular, it is possible that the number of examples needed for achieving a given approximation is at least exponential.\"\n\nThe direct empirical Wasserstein estimator actually has a rate exponential in the input dimension (Sriperumbudur et al. 2012, On the empirical estimation of integral probability metrics, Corollary 3.5 – this is an upper bound and I don't know if there's a known matching lower bound, but I think it's relatively accepted that this is the case), so it's probably likely that this estimator's rate would be as well in the general nonparametric case. " ]
[ 7, 6, 6, 8, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 3, 3, 3, 3, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_B1zlp1bRW", "iclr_2018_B1zlp1bRW", "iclr_2018_B1zlp1bRW", "iclr_2018_B1zlp1bRW", "HkgBvuc7z", "iclr_2018_B1zlp1bRW", "rkVI5fAGz", "rJ81OAtgM", "B1cR-6neM", "H1dxvZWWM", "Skh5eWVWG", "rJ81OAtgM" ]
iclr_2018_ryUlhzWCZ
TRUNCATED HORIZON POLICY SEARCH: COMBINING REINFORCEMENT LEARNING & IMITATION LEARNING
In this paper, we propose to combine imitation and reinforcement learning via the idea of reward shaping using an oracle. We study the effectiveness of the near-optimal cost-to-go oracle on the planning horizon and demonstrate that the cost-to-go oracle shortens the learner’s planning horizon as a function of its accuracy: a globally optimal oracle can shorten the planning horizon to one, leading to a one-step greedy Markov Decision Process which is much easier to optimize, while an oracle that is far away from the optimality requires planning over a longer horizon to achieve near-optimal performance. Hence our new insight bridges the gap and interpolates between imitation learning and reinforcement learning. Motivated by the above mentioned insights, we propose Truncated HORizon Policy Search (THOR), a method that focuses on searching for policies that maximize the total reshaped reward over a finite planning horizon when the oracle is sub-optimal. We experimentally demonstrate that a gradient-based implementation of THOR can achieve superior performance compared to RL baselines and IL baselines even when the oracle is sub-optimal.
accepted-poster-papers
This paper proposes a theoretically-motivated method for combining reinforcement learning and imitation learning. There was some disagreement amongst the reviewers, but the AC was satisfied with the authors' rebuttal.
train
[ "BJBWMqqlf", "H1JzYwcxM", "H16Rrvtlz", "H1rlRAfEG", "ryI0uS6mM", "r13LB1_Qz", "Hya0QJ_XG", "H1yFfkuXf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "This paper proposes a new theoretically-motivated method for combining reinforcement learning and imitation learning for acquiring policies that are as good as or superior to the expert. The method assumes access to an expert value function (which could be trained using expert roll-outs) and uses the value function to shape the reward function and allow for truncated-horizon policy search. The algorithm can gracefully handle suboptimal demonstrations/value functions, since the demonstrations are only used for reward shaping, and the experiments demonstrate faster convergence and better performance compared to RL and AggreVaTeD on a range of simulated control domains. The paper is well-written and easy to understand.\n\nMy main feedback is with regard to the experiments:\nI appreciate that the experiments used 25 random seeds! This provides a convincing evaluation.\nIt would be nice to see experimental results on even higher dimensional domains such as the ant, humanoid, or vision-based tasks, since the experiments seem to suggest that the benefit of the proposed method is diminished in the swimmer and hopper domains compared to the simpler settings.\nSince the method uses demonstrations, it would be nice to see three additional comparisons: (a) training with supervised learning on the expert roll-outs, (b) initializing THOR and AggreVaTeD (k=1) with a policy trained with supervised learning, and (c) initializing TRPO with a policy trained with supervised learning. There doesn't seem to be any reason not to initialize in such a way, when expert demonstrations are available, and such an initialization should likely provide a significant speed boost in training for all methods.\nHow many demonstrations were used for training the value function in each domain? I did not see this information in the paper.\n\nWith regard to the method and discussion:\nThe paper discusses the connection between the proposed method and short-horizon imitation and long-horizon RL, describing the method as a midway point. It would also be interesting to see a discussion of the relation to inverse RL, which considers long-term outcomes from expert demonstrations. For example, MacGlashn & Littman propose a midway point between imitation and inverse RL [1].\nTheoretically, would it make sense to anneal k from small to large? (to learn the most effectively from the smallest amount of experience)\n\n[1] https://www.ijcai.org/Proceedings/15/Papers/519.pdf\n\n\nMinor feedback:\n- The RHS of the first inequality in the proof of Thm 3.3 seems to have an error in the indexing of i and exponent, which differs from the line before and line after\n\n**Edit after rebuttal**: I have read the other reviews and the authors' responses. My score remains the same.", "=== SUMMARY ===\n\nThe paper considers a combination of Reinforcement Learning (RL) and Imitation Learning (IL), in the infinite horizon discounted MDP setting.\nThe IL part is in the form of an oracle that returns a value function V^e, which is an approximation of the optimal value function. The paper defines a new cost (or reward) function based on V^e, through shaping (Eq. 1). It is known that shaping does not change the optimal policy.\n\nA key aspect of this paper is to consider a truncated horizon problem (say horizon k) with the reshaped cost function, instead of an infinite horizon MDP.\nFor this truncated problem, one can write the (dis)advantage function as a k-step sum of reward plus the value returned by the oracle at the k-th step (cf. Eq. 
5).\nTheorem 3.3 shows that the value of the optimal policy of the truncated MDP w.r.t. the original MDP is only O(gamma^k eps) worse than the optimal policy of the original problem (gamma is the discount factor and eps is the error between V^e and V*).\n\nThis suggests two things: \n1) Having an oracle that is accurate (small eps) leads to good performance. If oracle is the same as the optimal value function, we do not need to plan more than a single step ahead.\n2) By planning for k steps ahead, one can decrease the error in the oracle geometrically fast. In the limit of k —> inf, the error in the oracle does not matter.\n\nBased on this insight, the paper suggests an actor-critic-like algorithm called THOR (Truncated HORizon policy search) that minimizes the total cost over a truncated horizon with a modified cost function.\n\nThrough a series of experiments on several benchmark problems (inverted pendulum, swimmer, etc.), the paper shows the effect of planning horizon k.\n\n\n\n=== EVALUATION & COMMENTS ===\n\nI like the main idea of this paper. The paper is also well-written. But one of the main ideas of this paper (truncating the planning horizon and replacing it with approximation of the optimal value function) is not new and has been studied before, but has not been properly cited and discussed.\n\nThere are a few papers that discuss truncated planning. Most closely is the following paper:\n\nFarahmand, Nikovski, Igarashi, and Konaka, “Truncated Approximate Dynamic Programming With Task-Dependent Terminal Value,” AAAI, 2016.\n\nThe motivation of AAAI 2016 paper is different from this work. The goal there is to speedup the computation of finite, but large, horizon problem with a truncated horizon planning. The setting there is not the combination of RL and IL, but multi-task RL. An approximation of optimal value function for each task is learned off-line and then used as the terminal cost. \nThe important point is that the learned function there plays the same role as the value provided by the oracle V^e in this work. They both are used to shorten the planning horizon. That paper theoretically shows the effect of various error terms, including terms related to the approximation in the planning process (this paper does not do that).\n\nNonetheless, the resulting algorithms are quite different. The result of this work is an actor-critic type of algorithm. AAAI 2016 paper is an approximate dynamic programming type of algorithm.\n\nThere are some other papers that have ideas similar to this work in relation to truncating the horizon. For example, the multi-step lookahead policies and the use of approximate value function as the terminal cost in the following paper:\n\nBertsekas, “Dynamic Programming and Suboptimal Control: A Survey from ADP to MPC,” European Journal of Control, 2005.\n\nThe use of learned value function to truncate the rollout trajectory in a classification-based approximate policy iteration method has been studied by\n\nGabillon, Lazaric, Ghavamzadeh, and Scherrer, “Classification-based Policy Iteration with a Critic,” ICML, 2011.\n\nOr in the context of Monte Carlo Tree Search planning, the following paper is relevant:\n\nSilver et al., “Mastering the game of Go with deep neural networks and tree search,” Nature, 2016.\n\nTheir “value network” has a similar role to V^e. 
It provides an estimate of the states at the truncated horizon to shorten the planning depth.\n\nNote that even though these aforementioned papers are not about IL, this paper’s stringent requirement of having access to V^e essentially make it similar to those papers.\n\n\nIn short, a significant part of this work’s novelty has been explored before. Even though not being completely novel is totally acceptable, it is important that the paper better position itself compared to the prior art.\n\n\nAside this main issue, there are some other comments:\n\n\n- Theorem 3.1 is not stated clearly and may suggest more than what is actually shown in the proof. The problem is that it is not clear about the fact the choice of eps is not arbitrary.\nThe proof works only for eps that is larger than 0.5. With the construction of the proof, if eps is smaller than 0.5, there would not be any error, i.e., J(\\hat{pi}^*) = J(pi^*).\n\nThe theorem basically states that if the error is very large (half of the range of value function), the agent does not not perform well. Is this an interesting case?\n\n\n- In addition to the papers I mentioned earlier, there are some results suggesting that shorter horizons might be beneficial and/or sufficient under certain conditions. A related work is a theorem in the PhD dissertation of Ng:\n\nAndrew Ng, Shaping and Policy Search in Reinforcement Learning, PhD Dissertation, 2003.\n(Theorem 5 in Appendix 3.B: Learning with a smaller horizon).\n\nIt is shown that if the error between Phi (equivalent to V^e here) and V* is small, one may choose a discount factor gamma’ that is smaller than gamma of the original MDP, and still have some guarantees. As the discount factor has an interpretation of the effective planning horizon, this result is relevant. The result, however, is not directly comparable to this work as the planning horizon appears implicitly in the form of 1/(1-gamma’) instead of k, but I believe it is worth to mention and possibly compare.\n\n- The IL setting in this work is that an oracle provides V^e, which is the same as (Ross & Bagnell, 2014). I believe this setting is relatively restrictive as in many problems we only have access to (state, action) pairs, or sequence thereof, and not the associated value function. For example, if a human is showing how a robot or a car should move, we do not easily have access to V^e (unless the reward function is known and we estimate the value with rollouts; which requires us having a long trajectory). This is not a deal breaker, and I would not consider this as a weakness of the work, but the paper should be more clear and upfront about this.\n\n\n- The use of differential operator nabla instead of gradient of a function (a vector field) in Equations (10), (14), (15) is non-standard.\n\n- Figures are difficult to read, as the colors corresponding to confidence regions of different curves are all mixed up. Maybe it is better to use standard error instead of standard deviation.\n\n\n===\nAfter Rebuttal: Thank you for your answer. The revised paper has been improved. I increase my score accordingly.\n", "This work proposes to use the value function V^e of some expert policy \\pi^e in order to speed up learning of an RL agent which should eventually do better than the expert. The emphasis is put on using k-steps (with k>1) Bellman updates using bootstrapping from V^e. 
\n\nIt is claimed that the case k=1 does not allow the agent to outperform the expert policy, whereas k>1 does (Section 3.1, paragraph before Lemma 3.2).\n\nI disagree with this claim. Indeed a policy gradient algorithm (similar to (10)) with a 1-step advantage c(s,a) + gamma V^e(s_{t+1}) - V^e(s_t) will converge (say in the tabular case, or in the case you consider of a rich enough policy space \\Pi) to the greedy policy with respect to V^e, which is strictly better than V^e (if V^e is not optimal). So you don’t need to use k>1 to improve the expert policy. Now it’s true that this will not converge to the optimal policy (since you keep bootstrapping with V^e instead of the current value function), but neither the k-step advantage will. \n\nSo I don’t see any fundamental difference between k=1 and k>1. The only difference being that the k-step bootstrapping will implement a k-step Bellman operator which contracts faster (as gamma^k) when k is large. But the best choice of k has to be discussed in light of a bias-variance discussion, which is missing here. So I find that the main motivation for this work is not well supported. \n\nAlgorithmic suggestion:\nInstead of bootstrapping with V^e, why not bootstrap with min(V^e, V), where V is your current approximation of the value function. In that way you would benefit from (1) fast initialization with V^e at the beginning of learning, (2) continual improvement once you’ve reached the performance of the expert. \n\nOther comments:\nRequiring that we know the value function of the expert on the whole state space is a very strong assumption that we do not usually make in Imitation learning. Instead we assume we have trajectories from expert (from which we can compute value function along those trajectories only). Generalization of the value function to other states is a hard problem in RL and is the topic of important research.\n\nThe overall writing lacks rigor and the contribution is poor. Indeed the lower bound (Theorem 3.1) is not novel (btw, the constant hidden in the \\Omega notation is 1/(1-gamma)). Theorems 3.2 and 3.3 are not novel either. Please read [Bertsekas and Tsitsiklis, 96] as an introduction to dynamic programming with approximation.\n\nThe writing could be improved, and there are many typos, such as:\n- J is not defined (Equation (2))\n- Why do you call A a disadvantage function whereas this quantity is usually called an advantage?\n- You are considering a finite (ie, k) horizon setting, so the value function depend on time. For example the value functions defined in (11) depend on time. \n- All derivations in Section 4, before subsection 4.1 are very approximate and lack rigor.\n- Last sentence of Proof of theorem 3.1. I don’t understand H -> 2H epsilon. H is fixed, right? Also your example does not seem to be a discounted problem.\n", "Thank you for the response.\n\nRegarding experiments on higher-dimensional domains, I think the paper would benefit from adding this discussion, aiding the reader in understanding this potential limitation.\n\nInformation about the number of demonstrations should also be added to the paper.", "We thank the reviewers for constructive feedback and we have revised our paper based on the suggestions from the reviewers. Below we summarize the main changes we made to the paper:\n\n1. We added a new paragraph at the end of the introduction section to summarize the related work and our contributions. We hope this will better position our work. \n\n2. 
In Introduction, we clarified that previous imitation learning approach---AggreVaTe, can outperform an imperfect expert as well, but with only one step deviation improvement. We also emphasized by how much our approach can improve over AggreVaTe at the end of Theorem 3.2. \n\n3. We updated Theorem 3.1 and Theorem 3.2 to include the dependency on 1/(1-\\gamma) in the big O notation. \n\n4. We updated the proof in Appendix so that it works for discounted and infinite horizon MDP, though the main strategy is not changed. We also added a small paragraph at the end of Theorem 3.1 to illustrate the high-level strategy of the proof. \n", "We thank the reviewer for constructive feedback, below R stands for Reviewer and A stands for our answer\n\nR: K=1 VS K > 1: \n\nA: We do not agree with the reviewer on this point. First, when k=infinity, we can find the optimal policy (under the assumption that we can fully optimize the MDP, of course) which will be significantly better than an imperfect expert pi^e. While we agree that when K=1, the greedy policy with respect to V^e can outperform the expert, the greedy policy is only a *one-step* deviation improvement of V^e. If pi^e is far away from optimality, the one-step improvement greedy policy will likely be far away as well. This is shown in the lower bound analysis. Our main theorem clearly shows that as k increases, the learned policy is getting closer and closer to optimality. Combine the lower bound on the performance of the one-step greedy policy, and the upper bound on the performance of the learned policy with k>1, and it is clear that the learned policy under k>1 is closer to optimality. In summary, we are emphasizing that with k>1, we can learn a policy that is even better than the greedy policy with respect Q^e (not just pi^e), which is the best one can learn if one uses previous algorithm such as AggreVaTe. \n\nWe have clarified this point in the revised version of the paper. \n\nR: Regarding bias-variance tradeoff\n\nUnlike previous work (e.g., TD learning with k-step look ahead), we are using V^e as the termination value instead of V^pi. The main argument of our work is that by reasoning k steps into the future with V^e as the termination value, we are trading between learning complexity and the optimality. When k = 1, we are greedy with respect to the one-step cost, but the downside is that we can only hope that the learned policy is, at best, a one-step deviation improvement over the expert. When k > 1, but less than infinity, we are essentially solving a MDP that is easier than the original MDP due to the shorter horizon, but with the benefit of learning a policy that is moving closer to the optimality than the greedy policy with respect to Q^e. In our theorem, gamma^k does not merely serve as a contraction factor as it did in, for example, TD learning. gamma^k here serves as a measure on how close the learned policy is to the optimal policy. \n\nR: “Requiring know the value function of the expert on the whole state space is a very strong assumption...”\n\nWe agree with the reviewer. In our experiments, we learned V^e from a set of demonstrations and then used the learned V^e. As we showed, previous work AggreVaTe(D) performs poorly in this setting, as V^e may only be an accurate oracle in the state space of the expert’s demonstrations. 
For challenging problems with very large state action spaces, we need to assume that \\pi^e exists so that V^e can be estimated by rollouts (the same assumption DAgger used (Ross & Bagnell 11, AISTATS)). Luckily, this kind of oracles do exist in practice via access to a simulator, a search algorithm, even in real robotics applications (e.g., see Choudhury et.al, 17, ICRA, Pan et.al, 17), and in natural language processing, where the ground-truth label information can be leveraged to construct experts (Chang et al, 15, ICML, Sun et al, 17 ICML). In these applications, we cannot guarantee the constructed “expert” is globally optimal, hence our results can be directly applied. \n\nR: \"J is not defined..\":\n\nWe thank the reviewer for pointing this out. We have revised the draft accordingly. \n\nR: proof of theorem 3.1\n\nWe thank the reviewer for pointing out the confusion. Although the general proof strategy is not changed, we have revised the proof and also changed it to the setting with discount factor and infinite horizon to make it consistent with the main setting in the paper. \n", "We thank the reviewer for constructive feedback, below R stands for Reviewer and A stands for our answer\n\nR: “Regarding to some previous work using truncated horizon”:\n\nA: we gratefully thank the reviewer for pointing out all of these related previous works. We agree with the reviewer that the idea of using truncated horizon has been explored before. Following the suggestions from the reviewer, we have revised the paper to better position our work. Please see Sec 1.1 in the revision for a discussion on related work and our contributions. \n\nSome of the previous work that uses the idea of truncating horizon mainly focuses on using V^{\\pi} as the termination value for bias-variance tradeoff. We used V^e, instead of V^{\\pi}. We think using V^{e} with truncated horizon allows us to interpolate between pure IL and full RL and trades between sample complexity and the optimality: with k=1, the reshaped MDP is easy to solve as it's one-step greedy and we reveal IL algorithm AggreVaTeD, but at the cost of only learning a policy that has similar performance as the expert. When k > 1, we face a MDP that is between the one-step greedy MDP and the original full horizon MDP. Solving a truncated horizon MDP with k > 1 is harder than the one-step greedy MDP, but at the benefit of outperforming the expert and getting closer to optimality. Another contribution of the paper is an efficient actor-critic like algorithm for continuous MDPs, which is not available in previous work (e.g., Ng's thesis work)\n\nR: \"Theorem 5 in Appendix 3.B in Ng's PhD dissertation\": \n\nA: Again, we gratefully thank the reviewer for pointing out this theorem that we were not aware of! We agree with the reviewer, a smaller discount factor in theory is equivalent to a shorter planning horizon. One of the advantages of explicitly using a truncated horizon is for reducing computational complexity, as mentioned by some other previous work. Though we agree that our main theorem is similar to Theorem 5 in the appendix of Ng's dissertation, we would like to emphasize that one of the contributions of our work is that we interpret expert's value function as a potential function and this could help us explain and generalize previous imitation learning and bridge the gap between IL and RL. \n\nR: the choice of \\eps in the proof of theorem 3.1\n\nA: We believe we can construct similar MDPs to make \\eps smaller by increasing the number of actions. 
We believe we can make \\eps small around 1/|A|, where |A| is the number of actions. The main idea is that we want to construct a MDP where for each state s, the value A^e(s,a) itself is close to the value A*(s,a), but the order on actions induced by A^e is different from the order of actions induced by A*(s,a), forcing the greedy policy with respect to A^e to make mistakes. \n\n", "We thank the reviewer for constructive feedback. Below R stands for Reviewer and A stands for our answers.\n\nR: “Experiment on higher-dimensional domains”:\n\nA: In our experiments, we investigated the option of using a V^e pre-trained from expert demonstrations. In higher-dimensional tasks, training a globally accurate value function purely from a batch of expert demonstrations is difficult due to the large state space and the fact that the expert demonstrations only cover a tiny part of the state space. This is the main reason we think the Swimmer and Hopper experiment did not show the clear advantage of our approach. For higher-dimensional tasks, we believe we need a stronger assumption on the availability of the expert. For example, we may require oracles to exist in the training loop (similar to the assumptions used in previous IL work such as DAgger (Ross et.al, 2011)) so that we can estimate and query V^e(s) on the fly. This kind of oracle can exist in practice: for example access to a simulator, a search algorithm, and in real robotics applications (e.g., see Choudhury et.al, 17, ICRA, Pan et.al, 17, where optimal controllers were used as oracles), but also in natural language processing, where the ground truth label information can be leveraged to construct experts (Chang et al, 15, ICML, Sun et al, 17 ICML). In these applications, we can not guarantee the constructed “expert” is globally optimal, hence we believe our work can be directly applied and result in improved performance. Experiments like these are left to future work.\n\nR: Compare to Supervised Training: \n\nA: We thank the reviewer for this suggestion, we are working on this and will include a comparison to a simple supervised learning approach, although previous work has demonstrated that supervised learning won’t work as well as the interactive learning setting (Ross & Bagnell, 2011, AISTATS) in both theory and in practice. \n\nR: “how many demonstrations:”\n\nA: We used a number of demonstration trajectories ranging from 10 to 100 (almost the same as the number of trajectories we use in each batch during learning), depending on the task. For higher-dimensional tasks, a larger number of state-action pairs collected from demonstrations are needed. This is simply due to the fact that the feature space has a higher dimension. \n\nR: “Regarding to previous work on Imitation learning and inverse RL”\n\nA: We thank the reviewer for pointing out this paper. This paper is related and this paper also shows the advantage of using a truncated horizon (e.g., less computational complexity), although the context is different: interpolating between behaviour cloning and inverse RL (or “intention learning,” in the paper’s words) by learning a reward functions and then planning with the known dynamics. Our work interpolates between a different imitation learning approach---interactive IL and RL. We think this work and our work together show the advantage of using a truncated horizon: less computational complexity (what this paper showed), and better performance than an imperfect expert (what we showed in our work). \n" ]
[ 7, 6, 3, -1, -1, -1, -1, -1 ]
[ 3, 4, 5, -1, -1, -1, -1, -1 ]
[ "iclr_2018_ryUlhzWCZ", "iclr_2018_ryUlhzWCZ", "iclr_2018_ryUlhzWCZ", "H1yFfkuXf", "iclr_2018_ryUlhzWCZ", "H16Rrvtlz", "H1JzYwcxM", "BJBWMqqlf" ]
iclr_2018_SJJinbWRZ
Model-Ensemble Trust-Region Policy Optimization
Model-free reinforcement learning (RL) methods are succeeding in a growing number of tasks, aided by recent advances in deep learning. However, they tend to suffer from high sample complexity, which hinders their use in real-world domains. Alternatively, model-based reinforcement learning promises to reduce sample complexity, but tends to require careful tuning and to date have succeeded mainly in restrictive domains where simple models are sufficient for learning. In this paper, we analyze the behavior of vanilla model-based reinforcement learning methods when deep neural networks are used to learn both the model and the policy, and show that the learned policy tends to exploit regions where insufficient data is available for the model to be learned, causing instability in training. To overcome this issue, we propose to use an ensemble of models to maintain the model uncertainty and regularize the learning process. We further show that the use of likelihood ratio derivatives yields much more stable learning than backpropagation through time. Altogether, our approach Model-Ensemble Trust-Region Policy Optimization (ME-TRPO) significantly reduces the sample complexity compared to model-free deep RL methods on challenging continuous control benchmark tasks.
accepted-poster-papers
The reviewers agree that the paper presents nice results on model-based RL with an ensemble of models. The limited novelty of the method is questioned at length by one reviewer and briefly by the others, but they all agree that this paper's results justify its acceptance.
train
[ "rkoFpFOlz", "SJ3tICFlz", "Hkg9Vrqlz", "S1d6-e6XG", "ByM_IJ0Wz", "rJ5NoSnbG", "S1ljgW0gf", "HJdkWGt1f", "S1vW09QlG", "B1RjDqMlf", "ryRzHczeM", "rJBiAuGxG", "BJLAKoxeG", "HkWtjGlxG", "rkiav21gf", "SJK0YrJlz", "S16EX33yG", "SJY0Oo5yM", "SytosZKJM", "rkm6BWFkf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "public", "public", "public", "author", "author", "public", "author", "public", "public", "author", "public", "author", "public", "author", "public" ]
[ "Summary:\nThe paper proposes to use ensembles of models to overcome a typical problem when training on a learned model: That the policy learns to take advantage of errors of the model.\nThe models use the same training data but are differentiated by a differente parameter initialization and by training on differently drawn minibatches.\nTo train the policy, at each step the next state is taken from an uniformly randomly drawn model.\nFor validation the policy is evaluated on all models and training is stopped early if it doesn't improve on enough of them. \n\nWhile the idea to use an ensemble of deep neural networks to estimate their uncertainty is not new, I haven't seen it yet in this context. They successfully show in their experiments that typical levels of performance can be achieved using much less samples from the real environment.\n\nThe reduction in required samples is over an order of magnitude for simple environments (Mujoco Swimmer). However, (as expected for model based algorithms) both the performance as well as the reduction in sample complexity gets worse with increasing complexity of the environment. It can still successfully tackle the Humanoid Mujoco task but my guess is that that is close to the upper limit of this algorithm?\n\nOverall the paper is a solid and useful contribution to the field.\n\n*Quality:*\nThe paper is clearly shows the advantage of the proposed method in the experimental section where it compares to several baselines (and not only one, thank you for that!). \n\nThings which in my opinion aren't absolutely required in the paper but I would find interesting and useful (e.g. in the appendix) are:\n1. How does the runtime (e.g. number of total samples drawn from both the models and the real environment, including for validation purpuses) compare?\nFrom the experiments I would guess that MB-TRPO is about two to three orders of magnitude slower, but having this information would be useful.\n2. For more complex environments it seems that training is becoming less stable and performance degradates, especially for the Humanoid environment. A plot like in figure 4 (different number of models) for the humanoid environment could be interesting? Additionally maybe a short discussion where the major problem for further scaling lies? For example: Expressiveness of the models? Required number of models / computation feasibility? Etc... This is not necessarily required for the paper but would be interesting.\n\n*Originality & Significance:*\nAs far as I can tell, none of the fundamental ideas are new. However, they are combined in an interesting, novel way that shows significant performance improvements.\nThe problem the authors tackle, namely learning a deep neural network model for model based RL, is important and relevant. As such, the paper contributes to the field and should be accepted.\n\n*Smaller questions and notes:*\n- Longer training times for MB-TRPO, in particular for Ant and Humanoid would have been intersting if computationionally feasibly.\n- Could this in principle be used with Q-learning as well (instead of TRPO) if the action space is discrete? Or is there an obvious reason why not that I am missing?", "The authors combine an ensemble of DNNs as model for the dynamics with TRPO. The ensemble is used in two steps:\nFirst to collect imaginary roll-outs for TRPO and secondly to estimate convergence of the algorithm. 
The experiments indicate superior performance over the baselines.\n\nThe paper is well-written and the experiments indicate good results. However, idea of using ensembles in the context of \n(model-based) RL is not novel, and it comes at the cost of time complexity. Therefore, the method should utilize \nthe advantage an ensemble provides to its full extent. \nThe main strength of an ensemble is to provide lower test error, but also some from of uncertainty estimate given by the spread of the predictions. The authors mainly utilize the first, but to a lesser extent the second advantage (the imaginary roll-outs will utilize the spread to generate possible outcomes). Ideally the exploration should also be guided by the uncertainty (such as VIME).\n\nRelated, what where the arguments in favor of an ensemble compared to Bayesian neural networks (possibly even as simple as using MH-dropout)? BNNs provide a stronger theoretical justification that the predictive uncertainty is meaningful.\n\nCan the authors comment on the time-complexity of the proposed methods compared to the baselines? In Fig. 2 the x-axis is the time step of the real data. But I assume it took a different amount of time for each method to reach step t. The same argument can be made for Fig. 4. It seems here that in snake the larger ensembles reach convergence the quickest, but I expect this effect to be reversed when considering actual training time.\n\nIn total I think this paper can provide a useful addition to the literature. However, the proposed approach does not have strong novelty and I am not fully convinced if the additional burden on time complexity outweighs the improved performance.\n\nMinor: In Sec. 2: \"Both of these approaches assume a fixed dataset of samples which are collected\nbefore the algorithm starts operating.\" This is incorrect, while these methods consider the domain of fixed datasets, the algorithms themselves are not limited to this context.\n", "This paper presents a simple model-based RL approach, and shows that with a few small tweaks to more \"typical\" model-based procedures, the methods can substantially outperform model-free methods on continuous control tasks. In particular, the authors show that by 1) using an ensemble of models instead of a single models, 2) using TRPO to optimize the policy based upon these models (rather that analytical gradients), and 3) using the model ensemble to validate when to stop policy optimization, then a simple model-based approach actually can outperform model-free methods.\n\nOverall, I think this is a nice paper, and worth accepting. There is very little actually new here, of course: the actual model-based method is entirely standard except with the additions above (which are also all fairly standard approaches in isolation). But at a higher level, the fact that such simple model-based approaches work better than somewhat complex model free approaches actually is the point of the paper to me. 
While the general theme of model-based RL outperforming model-free RL is not new (Atkeson and Santamaria (1997) comes to a similar conclusion) its good to see this same pattern demonstrated \"officially\" on modern RL benchmarks, especially since the _completely_ naive strategy of using a single model and more standard policy optimization doesn't perform as well.\n\nNaturally, there is some question as to whether the work here is novel enough to warrant publication, but I think the overall message of the paper is strong enough to overcome fairly minimal contribution from an algorithmic perspective. I did also have a few general concerns that I think could be discussed with a bit more detail in the paper:\n1) The choice of this particular model ensemble to represent uncertainty seems rather ad-how. Why is it sufficient to simply learn N models with different initial weights? It seems that the likely cause for this is that the random initial weights may lead to very different behavior in the unobserved parts of the space (i.e., portions of the state space where we have no samples), and thus. But it seems like there are much more principled ways of overcoming this same problem, e.g. by using an actual Bayesian neural net, directly modeling uncertainty in the forward model, or using generative model approaches. There's some discussion of this point in the introduction, but I think a bit more explanation about why the model ensemble is expected to work well for this purpose.\n2) Likewise, the fact the TRPO outperforms more standard gradient methods is somewhat surprising. How is the model ensemble being treated during BPTT? In the described TRPO method, the authors use a different model at each time step, sampling uniformly. But it seems like a single model is used for each rollout in the proposed BPTT method? If so, it's not surprising that this approach performs worse. But it seems like one could backprop through the different per-timestep models just as easily, and it would remove one additional source of difference between the two settings.", "We would like to thank all the reviewers for your comments. We really appreciate your feedback and we address your concerns below.\n \n1. Our approach can be used with Bayesian neural networks or dropouts. In our experiments, we decided to use an ensemble of neural networks to model the dynamics because of its simplicity. We noticed that using different initializations and different sequences of training batches is enough to approximate the posterior distribution. This finding has also been shown in the work by Osband et. al., in which the best performance is attained by simply relying on different initializations for the ensemble of Q-functions, not requiring the sampling-with-replacement process that is prescribed by bootstrapping.\n\n2. We provide the comparisons between BPTT and TRPO when a single model is employed. Thus, the only difference is the choice of policy optimizer. We made this point more explicit in the revised version. \n\n3. We agree that the use of model ensembles can be further utilized for exploration. In this paper, our contribution makes the first step in this direction - showing that MB RL with model ensembles is competitive with model-free methods on state-of-the-art RL benchmarks, and can be used with expressive neural network models.\n\n4. 
Even though it is true that, in simulation environments, the real-time complexity could be longer than model-free methods, the ultimate goal of model-based RL is to be able to use reinforcement learning in real-world robots, where the data collection is the bottleneck. Model ensembles can also be trained in a parallelizable way, which makes the training speed comparable to that of a single model.\n\n5. To answer the question of further scaling, a more elaborate benchmark is required to fully understand the benefits and challenges of MB vs. MF, and this is something we are considering for future work.\n\n6. Our method can be used with Q-learning for problems where a value function approach is desired. We have not tried this yet.\n\n7. In the revised version, we have included longer training times for Ant and Humanoid, and fixed the reference to Depeweg et al. and Mishra et al. We also include the real-time complexity of our algorithms in the appendix.\n \n \nReferences:\nOsband, I., Blundell, C., Pritzel, A., and Van Roy, B., 2016. Deep exploration via bootstrapped DQN. In Advances in Neural Information Processing Systems (pp. 4026-4034).\n \nDepeweg, S., Hernández-Lobato, J.M., Doshi-Velez, F. and Udluft, S., 2016. Learning and policy search in stochastic dynamical systems with Bayesian neural networks. arXiv preprint arXiv:1605.07127.\n \nMishra, N., Abbeel, P. and Mordatch, I., 2017. Prediction and Control with Temporal Segment Models. arXiv preprint arXiv:1703.04070.\n\n", "Thank you for your comment. We decided to work with horizon 100 mainly because of two reasons. First, we think that it is sufficient to demonstrate meaningful behaviors. Second, they are easier to compare the results in both BPTT and TRPO cases. Regarding the soft-constraint videos, we see similar behaviors in the videos of our methods and those of TRPO and PPO on the same setting (at the end it tries to dive to maximize the reward and curl up to make the center of mass even forward). ", "Thanks for the interesting paper! However I have some questions about the choice of horizons. From your video, I notice for the hopper, it quickly falls over. I suspect it is because of your choice of short horizon (100 time step)? While I understand long time horizon would be difficult to optimize using BPTT, I assume it wouldn't be a problem for model free method like TRPO? And the changes to the original environment (using soft constraints instead of early termination) make it hard to compare against previous results (in terms of the video of performance, not pure score) like TRPO and PPO.", "I am wondering how long your method needs to run when using 10 dynamics in some experiments such as Swimmer-v1 and Humanoid-v1? \nCould you please tell us and also provide a description of your experiment facilities. \n\n", "Thank you for your explanation. ", "Thank you for your response.\n\nWith the goal of keeping the paper at an 8-page length, we only provided a limited literature survey, comparing to what we thought to be the most related recent work on model based RL. We would be happy to include a comprehensive section on related work in the final version, including a comparison to DAgger and other SysId methods. \n\nWe are also starting to investigate a real robot implementation of our ideas. Naturally, there are many challenges in getting model based RL to work on real hardware, and we intend this to be part of a different publication. 
Because of the hardware difficulty, a lot of recent work have also tested their algorithms on challenging OpenAI benchmark and open source their code. We found this to be useful to quickly and fairly compare our algorithm to other state-of-the-art methods.\n\nA difficulty in model learning in model-based setting is the coupling between the model and the policy, i.e., in order to learn a good model we need to learn a good policy and vice versa. In the paper we show that using a single model doesn't allow us to learn a good policy due to overfitting. As a result, the model cannot provide accurate enough predictions for optimizing the policy as shown - see the learning curve in figure 4. We attach the videos showing the model prediction vs the real environment in the case of a single model below. https://drive.google.com/open?id=1FzHQgosQNfbHsKXVewYrhuLrOq8jVEqH\n\nAs commented in a different thread, we intend to open source our code, with the hope that other researcher can try our method on various other problem domains.", "We feed both s and a into the input. They don't need to have the same dimension.", "Thanks, hmmm, I know it is a MLP but how do you deal with s and a since they have different dimensions. ", "D contains all the data collected so far. The model is a feed-forward neural network with two hidden layers.", "Thanks, I am wondering dataset D consist of only on-policy data or all previous collected data, could you clarify this? Another question is what is the structure of f(s, a) is, you only mentioned it has hidden sizes 1024-1024 and ReLU nonlinearities. ", "Thanks for the response. Your point about model expressiveness is well taken. However, the justifications still seem inadequate primarily because the chosen tasks and results do not adequately represent your premise.\n\nOn the question of physics based models vs DNN models -- this is still wide open. While DNN models are more expressive they might require orders of magnitude more samples to train. In addition, ability to generalize to states that have not been sufficiently visited is also hard to reason -- it is precisely for this reason that DAGGER and related approaches aggregate the data sets and slowly update the policy. Physics based models suffer less from this issue due to having the right priors. Thus, while it might be possible that in the large sample case expressive models might win owing to their capacity, these are not the regimes typical robotics problems operate in. Of course, if actual hardware results were presented, your proposed motivations and premise would have been well justified, but this is not the case.\n\nOn the difficulty on model learning -- it is not clear if there have been negative results in recent literature. The proposed way to learn the model is not very different from the vanilla approach of DAGGER for System ID -- are there any particular differences from simply aggregating the data sets and minimizing L2 loss? My guess is that not many people actually tried or implemented this correctly. An analysis of how accurate the learned model is would also be very revealing. For example, predict next state using learned model and visualize this in the simulator -- do we see smooth transitions or are the state transitions physically implausible? I could imagine that the learned models are not very accurate for prediction, and hence was not pursued rigorously. 
However, the models might be sufficient for policy improvement -- if this is the case, it might be an interesting insight.\n\nI should emphasize that my intention is not to be overly critical of the work. Model based RL is indeed a promising approach and your paper is one of the first to actually show good positive results with it in recent times. However, my concern is that many related works have been ignored, and the method is somewhat oversold. This is of course not entirely negative -- if a simple method works well but has been ignored by the community, it is worth pointing this out.", "Thank you for your questions. The standard deviation of parameter noise is proportional to the difference between the current parameters and the previous ones. We use 3.0 for the proportional ratio.\nThe dataset D consists of both training and validation sets. After new data are collected, we split them and put them into each set. We plan to release the codebase in the future.", "I am trying to replicate your results, it is unclear to me what exactly the value of standard derivation for perturbing policy parameters is(see A.1.1DATA COLLECTION), there only states that it is proportional to the absolute difference. Another question in A.1.1 is that 'we split the collected data using a 2-to-1 ratio for training and validation datasets' while in algorithm.2 it seems that you collect all previous data in D and use it to train, do you split current collected data or dataset D. Could you please help clarify this? BTW, would you like to open source codebase in the future?", "Thank you for your question. Here is the explanation.\n\nThe use of model uncertainty to deal with the discrepancy between the model and real world dynamics is a mature idea that dates back to much earlier than the EPOpt paper. Robust MDPs [1,2], ensemble methods [3], Bayesian approaches such as PILCO [4], among others, all share the same idea - use data to estimate some form of model uncertainty, and find a policy that works well against all models in the uncertainty set, with the hope that this policy will also work well in the real world model. \n\nThe differences between these works is in the models that they learn - discrete models in [1,2], Gaussian processes in [4], and physical models with a small set of parameters in [3] and EPOpt. \n\nTo date, learning dynamics models with deep neural networks for non-trivial tasks has been notoriously hard. For example, in [5], Nagabandi et al. proposed to use model-free fine tuning after model-based training, and in [6] Gal et al. showed primary results on cartpole.\n\nThe promise in using neural network models is their expressiveness -- which can scale up to complex dynamics for which Gaussian processes are not applicable, while writing down an analytical physics model is too challenging. This would be the case, for example, in a real-world robot that needs to handle deformable objects. For such domains, writing down a parametric physical model, as was done in EPOpt, can be problematic. \n\nSo, while the main difference with prior work on model uncertainty is in our DNN model, our contribution is in showing that, for the first time, such expressive models can be used to solve challenging control tasks. 
This should not be waived off as a minor difference, as getting DNNs to learn useful dynamics models has been the focus of many recent studies [5,6,7].\n\nIntuitively, the challenge in model based RL with DNNs is that as the models become more expressive, the harder it becomes to control generalization errors, which the policy optimization tends to exploit (thus leading to a failure in the real-world execution). To put things in numbers, our DNN models have thousands of tunable parameters. The models in the EPOpt paper had at most 4.\n\n[1] Bagnell, J.A., Ng, A.Y. and Schneider, J.G., 2001. Solving uncertain Markov decision processes.\n[2] Nilim, A. and El Ghaoui, L., 2005. Robust control of Markov decision processes with uncertain transition matrices. Operations Research, 53(5), pp.780-798.\n[3] Mordatch, I., Lowrey, K. and Todorov, E., 2015, September. Ensemble-CIO: Full-body dynamic motion planning that transfers to physical humanoids. In Intelligent Robots and Systems (IROS), 2015 IEEE/RSJ International Conference on(pp. 5307-5314). IEEE.\n[4] Deisenroth, M. and Rasmussen, C.E., 2011. PILCO: A model-based and data-efficient approach to policy search. In Proceedings of the 28th International Conference on machine learning (ICML-11) (pp. 465-472).\n[5] Nagabandi, A., Kahn, G., Fearing, R.S. and Levine, S., 2017. Neural network dynamics for model-based deep reinforcement learning with model-free fine-tuning. arXiv preprint arXiv:1708.02596.\n[6] Gal, Y., McAllister, R.T. and Rasmussen, C.E., 2016, April. Improving PILCO with bayesian neural network dynamics models. In Data-Efficient Machine Learning workshop (Vol. 951, p. 2016).\n[7] Heess, N., Wayne, G., Silver, D., Lillicrap, T., Erez, T. and Tassa, Y., 2015. Learning continuous control policies by stochastic value gradients. In Advances in Neural Information Processing Systems (pp. 2944-2952).\n", "This paper from last year's ICLR also considers an ensemble of models and uses TRPO to find a robust policy with reduced sample complexity. https://arxiv.org/abs/1610.01283\n\nCan you comment in the connections? It seems very relevant, and this line of work should be cited and discussed in the paper. Domain randomization based approaches also fall under this bucket. The only difference I see is that EPOpt uses a physics based model representation whereas DNN models are used here. However, this difference is extremely minor, since the way the model is updated is identical in both -- gradient descent or MAP (in Bayesian case). Is the method proposed here simply EPOpt with DNN function approximator for the model?", "Thank you for your comment. Since we do not care about the fictitious sample complexity, we do not find that PPO consistently improves the real sample complexity. We also noticed that the hyperparameters in PPO are more tricky to tune at least in the model-based setting, whereas TRPO can work well out-of-the-box.", "I was wondering what is the difference in results between using TRPO and PPO for policy learning. PPO seems to be more stable and sample efficient than TRPO. " ]
[ 7, 6, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 3, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_SJJinbWRZ", "iclr_2018_SJJinbWRZ", "iclr_2018_SJJinbWRZ", "iclr_2018_SJJinbWRZ", "rJ5NoSnbG", "iclr_2018_SJJinbWRZ", "rkiav21gf", "SytosZKJM", "HkWtjGlxG", "ryRzHczeM", "rJBiAuGxG", "BJLAKoxeG", "rkiav21gf", "S16EX33yG", "SJK0YrJlz", "SytosZKJM", "SJY0Oo5yM", "iclr_2018_SJJinbWRZ", "rkm6BWFkf", "iclr_2018_SJJinbWRZ" ]
iclr_2018_Hy6GHpkCW
A Neural Representation of Sketch Drawings
We present sketch-rnn, a recurrent neural network able to construct stroke-based drawings of common objects. The model is trained on a dataset of human-drawn images representing many different classes. We outline a framework for conditional and unconditional sketch generation, and describe new robust training methods for generating coherent sketch drawings in a vector format.
accepted-poster-papers
This work presents an RNN tailored to generating sketch drawings. The model has novel elements and advances specific to the considered task, and allows for free generation as well as generation with (partial) input. The results are very satisfactory. Importantly, a large dataset of sketch drawings is released as part of this work. The only negative aspect is the insufficient evaluation: R1 points out the need for baselines and evaluation metrics, and while these concerns were acknowledged by the authors, they were not really addressed in the revision. Still, this is a very interesting contribution.
test
[ "rJLTQtKgG", "SyW65dqgz", "B1wqtjoxz", "B1C-ibcXf", "SytFB9hGG", "S1TvBchzz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "This paper introduces a neural network architecture for generating sketch drawings. The authors propose that this is particularly interesting over generating pixel data as it emphasises more human concepts. I agree. The contribution of this paper of this paper is two-fold. Firstly, the paper introduces a large sketch dataset that future papers can rely on. Secondly, the paper introduces the model for generating sketch drawings.\n\nThe model is inspired by the variational autoencoder. However, the proposed method departs from the theory that justifies the variational autoencoder. I believe the following things would be interesting points to discuss / follow up:\n- The paper preliminarily investigates the influence of the KL regularisation term on a validation data likelihood. It seems to have a negative impact for the range of values that are discussed. However, I would expect there to be an optimum. Does the KL term help prevent overfitting at some stage? Answering this question may help understand what influence variational inference has on this model.\n- The decoder model has randomness injected in it at every stage of the RNN. Because of this, the latent state actually encodes a distribution over drawings, rather than a single drawing. It seems plausible that this is one of the reasons that the model cannot obtain a high likelihood with a high KL regularisation term. Would it help to rephrase the model to make the mapping from latent representation to drawing more deterministic? This definitely would bring it closer to the way the VAE was originally introduced.\n- The unconditional generative model *only* relies on the \"injected randomness\" for generating drawings, as the initial state is initialised to 0. This also is not in the spirit of the original VAE, where unconditional generation involves sampling from the prior over the latent space.\n\nI believe the design choices made by the authors to be valid in order to get things to work. But it would be interesting to see why a more straightforward application of theory perhaps *doesn't* work as well (or whether it works better). This would help interesting applications inform what is wrong with current theoretical views.\n\nOverall, I would argue that this paper is a clear accept.", "The paper aims tackles the problem of generate vectorized sketch drawings by using a RNN-variational autoencoder. Each node is represented with (dx, dy) along with one-hot representation of three different drawing status. A bi-directional LSTM is used to encode latent space in the training stage. Auto-regressive VAE is used for decoding. \n\nSimilar to standard VAEs, log-likelihood has bee used as the data-term and the KL divergence between latent space and Gaussian prior is the regularisation term. \n\nPros:\n- Good solution to an interesting problem. \n- Very interesting dataset to be released.\n- Intensive experiments to validate the performance. \n\nCons:\n- I am wondering whether the dataset contains biases regarding (dx, dy). In the data collection stage, how were the points lists generated from pen strokes? Did each points are sampled from same travelling distance or according to the same time interval? Are there any other potential biases brought because the data collection tools?\n- Is log-likelihood a good loss here? Think about the case where the sketch is exactly the same but just more points are densely sampled along the pen stroke. 
How do you deal with this case?\n- Does the dataset contain more meta-info that could be used for other tasks beyond generation, e.g. segmentation, classification, identification, etc.? ", "The paper presents both a novel large dataset of sketches and a new rnn architecture to generate new sketches.\n\n+ new and large dataset\n+ novel algorithm\n+ well written\n- no evaluation of dataset\n- virtually no evaluation of algorithm\n- no baselines or comparison\n\nThe paper is well written, and easy to follow. The presented algorithm sketch-rnn seems novel and significantly different from prior work.\nIn addition, the authors collected the largest sketch dataset, I know of. This is exciting as it could significantly push the state of the art in sketch understanding and generation. \n\nUnfortunately the evaluation falls short. If the authors were to push for their novel algorithm, I'd have expected them to compare to prior state of the art on standard metrics, ablate their algorithm to show that each component is needed, and show where their algorithm shines and where it falls short.\nFor ablation, the bare minimum includes: removing the forward and/or reverse encoder and seeing performance drop. Remove the variational component, and phrasing it simply as an auto-encoder. Table 1 is good, but not sufficient. Training loss alone likely does not capture the quality of a sketch.\nA comparison the Graves 2013 is absolutely required, more comparisons are desired.\nFinally, it would be nice to see where the algorithm falls short, and where there is room for improvement.\n\nIf the authors wish to push their dataset, it would help to first evaluate the quality of the dataset. For example, how well do humans classify these sketches? How diverse are the sketches? Are there any obvious modes? Does the discretization into strokes matter?\nAdditionally, the authors should present a few standard evaluation metrics they would like to compare algorithms on? Are there any good automated metrics, and how well do they correspond to human judgement?\n\nIn summary, I'm both excited about the dataset and new architecture, but at the same time the authors missed a huge opportunity by not establishing proper baselines, evaluating their algorithm, and pushing for a standardized evaluation protocol for their dataset. I recommend the authors to decide if they want to present a new algorithm, or a new dataset and focus on a proper evaluation.", "We agree with the reviewer that we try to do many things in this paper - introduce a method for generating vector images and introducing a large dataset of vector drawings, and entered a less-explored area with few established evaluation metrics.\n\nAs the dataset, and area of vector image modelling is new, our architecture was designed with simplicity in mind to become a baseline for future work. You mentioned Graves 2013, a work that we actually based our method on. Specifically: once we take away the encoder, and generate images unconditionally using the decoder model, it is identical to the autoregressive modelling approach taken in Graves 2013. The only minor difference is we needed to model the \"end of drawing\" probability and have appended the model to output that as well. 
You are right to mention that we should compare our encoder to a forward-only RNN, to see the metrics drop, although in practice we would argue that most practitioners would choose the bi-directional method from the onset especially when the length of the data becomes longer, and the architecture and task makes it possible to use a non-causal model. For example, while Graves 2013 uses a unidirectional LSTM for decoder-only handwriting generation, Graves 2007 [1] uses a bidirectional LSTM for handwriting classification without comparing to a unidirectional one.\n\nYou make some great points about the metrics relating to the dataset. We are particularly interested in how the diversity and multi-modality of these drawings relate to issues like novelty and interpretability. In our view, these human perception and generation issues are complex and important enough to warrant their own future paper(s). To make it more convenient for future work on this dataset, we have also standardized the format and limited each class to have exactly 70K samples, 2.5K validation and test samples, rather than using the full extent of the dataset. We hope this standardisation will encourage future experiments in not just generation, but also classification, and also examination into cultural biases, diversity, modes, and human performance.\n\n[1] A. Graves, S. Fernández, M. Liwicki, H. Bunke and J. Schmidhuber. Unconstrained online handwriting recognition with recurrent neural networks. NIPS 2007, Vancouver, Canada. \n\nhttps://papers.nips.cc/paper/3213-unconstrained-on-line-handwriting-recognition-with-recurrent-neural-networks\n", "It is true that our design choices were made to get things to work, and despite this, the current model still has many issues that can be improved upon in the future. For example, the model does not perform well for long sequence length. We needed to use the Ramer–Douglas–Peucker (RDP) algorithm to simplify the strokes, which also made the data more consistent for the RNN. We have included these details and tried to put information about model limitations in the A1 Dataset Details section.\n\nWith your feedback, and also along with the feedback from AnonReviewer3, we have added a short section in A6 that examines the tradeoff between likelihood and KL. We examine what happens qualitatively to the sketches as we vary the weighting on the KL term. Hopefully this will be a good starting point for future work.\n\nIn future work we will explore in depth the regularisation methodology - perhaps KL is not the best one to use and we wish to explore alternative approaches, for example alternatives outlined in [1].\n\n[1] InfoVAE: Information Maximizing Variational Autoencoders (https://arxiv.org/abs/1706.02262).", "Regarding the data collection, we have used the Ramer–Douglas–Peucker (RDP) algorithm as a pre-processing step to simplify the strokes in the dataset. Using RDP, line strokes drawn very slowly (with many points) and drawn very swiftly with look similar after the simplification process. For example, if the user holds his or her finger on the screen in one location for many seconds while sketching something, many points will be generated at a single location, but the simplification method will collapse those points as a single point. We put details of the data collection and stroke simplification in A1. 
Dataset Details.\n\nThe dataset will contain meta-info, such as country information, timestamp, and class, so we hope it can be used for classification experiments, and even for exploring cultural biases in the way we draw.\n\nYou raise an interesting point about whether log-likelihood is a good loss, especially in the case \"where the sketch is exactly the same but just more points are densely sampled along the pen stroke\". Based on your feedback, and also the feedback of AnonReviewer2, we have added a section in A6 \"Which Loss Controls Image Coherency?\", where we look at whether the KL loss term helps in such cases.\n\nWe explore the tradeoff between varying weights of the KL loss term and see that increasing the KL weighting produces qualitatively better reconstructions, despite having a lower log-likelihood loss number. We will investigate alternative loss formulations in future work, perhaps looking at adversarial methods, but we hope this will be a good start in that direction." ]
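As a rough illustration of the stroke simplification discussed in the responses above, here is a minimal numpy sketch of the Ramer–Douglas–Peucker procedure; it assumes 2-D stroke points, the tolerance epsilon is a hypothetical value, and this is not the preprocessing code used to build the dataset.

import numpy as np

def rdp(points, epsilon):
    # Ramer-Douglas-Peucker simplification: keep the two endpoints and
    # recurse on the point farthest from the chord joining them whenever
    # that distance exceeds the tolerance epsilon.
    points = np.asarray(points, dtype=float)
    if len(points) < 3:
        return points
    start, end = points[0], points[-1]
    chord = end - start
    norm = np.linalg.norm(chord)
    diff = points - start
    if norm == 0.0:
        dists = np.linalg.norm(diff, axis=1)
    else:
        # Perpendicular distance of every point to the start-end line.
        dists = np.abs(chord[0] * diff[:, 1] - chord[1] * diff[:, 0]) / norm
    idx = int(np.argmax(dists))
    if dists[idx] > epsilon:
        left = rdp(points[: idx + 1], epsilon)
        right = rdp(points[idx:], epsilon)
        return np.vstack([left[:-1], right])
    return np.vstack([start, end])

Under this simplification, a stroke traced slowly (many nearly collinear points) and the same stroke traced quickly reduce to nearly the same point sequence, which is the effect described in the response above.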
[ 8, 8, 5, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1 ]
[ "iclr_2018_Hy6GHpkCW", "iclr_2018_Hy6GHpkCW", "iclr_2018_Hy6GHpkCW", "B1wqtjoxz", "rJLTQtKgG", "SyW65dqgz" ]
iclr_2018_SJaP_-xAb
Deep Learning with Logged Bandit Feedback
We propose a new output layer for deep neural networks that permits the use of logged contextual bandit feedback for training. Such contextual bandit feedback can be available in huge quantities (e.g., logs of search engines, recommender systems) at little cost, opening up a path for training deep networks on orders of magnitude more data. To this effect, we propose a Counterfactual Risk Minimization (CRM) approach for training deep networks using an equivariant empirical risk estimator with variance regularization, BanditNet, and show how the resulting objective can be decomposed in a way that allows Stochastic Gradient Descent (SGD) training. We empirically demonstrate the effectiveness of the method by showing how deep networks -- ResNets in particular -- can be trained for object recognition without conventionally labeled images.
accepted-poster-papers
In this paper the authors show how to enable deep neural network training on logged contextual bandit feedback. The newly introduced framework comprises a new kind of output layer and an associated training procedure. This is a solid piece of work and a significant contribution to the literature, opening up the way for applications of deep neural networks when losses based on manually labeled full-information feedback are not available.
test
[ "HkGPbhPgG", "rk5ybxtxf", "SJVVwfoeM", "rJYagDTXM", "B1fB6sofG", "rk8o2soGz", "HkJw2jjMM", "ryR-3ijMf", "rJWkU5g-z" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public" ]
[ "Learning better policies from logged bandit feedback is a very important problem, with wide applications in internet, e-commerce and anywhere it is possible to incorporate controlled exploration. The authors study the problem of learning the best policy from logged bandit data. While this is not a brand new problem, the important and relevant contribution that the authors make is to do this using policies that can be learnt via neural networks. The authors are motivated by two main applications: (i) multi-class classification problems with bandit feedback (ii) ad placements problem in the contextual bandit setting. \n\nThe main contributions of the authors is to design an output layer that allows training on logged bandit feedback data. Traditionally in the full feedback setting (setting where one gets to see the actual label and not just if our prediction is correct or incorrect) one uses cross-entropy loss function to optimize the parameters of a deep neural network. This does not work in a bandit setting, and previous work has developed various methods such as inverse-propensity scoring, self-normalized inverse propensity scoring, doubly robust estimators to handle the bandit setting. The authors in this paper work with self-normalized inverse propensity scoring as the technique to deal with bandit feedback data. the self normalized inverse propensity estimator (SNIPS) that the authors use is not a new estimator and has been previously studied in the work of Adith Swaminathan and co-authors. However, this estimator being a ratio is not an easy optimization problem to work with. The authors use a fairly standard reduction of converting ratio problems to a series of constrained optimization problems. This conversion of ratio problems to a series of constrained optimization problems is a standard textbook problem, and therefore not new. But, i like the authors handling of the constrained optimization problems via the use of Lagrangian constraints. It would have been great if the authors connected this to the REINFORCE algorithm of Williams. Unfortunately, the authors do not do a great job in establishing this connection, and I hope they do this in the full version of the paper. The experimental results are fairly convincing and i really do not have any major comments. Here are my \"minor\" comments.\n\n1. It would be great if the authors can establish connections to the REINFORCE algorithm in a more elaborate manner. It would be really instructive to the reader.\n\n2. On page 6, the authors talk about lowering the learning rate and the learning rate schedule. I am guessing this is because of the intrinsic high variance of the problem. It would be great if the authors can explain in more detail why they did so.", "In this paper, the authors propose a new output layer for deep networks allowing training on logged contextual bandit feedback. They propose a counterfactual risk minimization objective which makes the training procedure different from the one that uses conventional cross-entropy in supervised learning. The authors claim that this is the first attempt where Batch Learning from Bandit Feedback (BLBF) is performed using deep learning. \n\nThe authors demonstrate the derivation steps of their theoretical model and present 2 empirical results. The first result is on visual object classification using CIFAR-10. 
To simulate logged bandit feedback for CIFAR-10, the authors perform the standard supervised to bandit conversion using a hand-coded logging policy that achieves 49 % error rate on training data. Using the logged bandit feedback data for the proposed bandit model, the authors are able to achieve substantial improvement (13% error rate) and given more bandit feedback, the model is able to compete with the same architecture trained on the full-information using cross entropy (achieving 8.2% error rate).\n\nThe second result is a real-world verification of the proposed approach (as the logged feedback are real and not synthesized using a conversion approach) which is an advertisement placing task from Criteo’s display advertising system (Lefortier et al., 2016). The task consists of choosing the product to display in the ad in order to maximize the number of clicks. The proposed deep learning approach improve substantially on state-of-the-art. The authors show empirically that the proposed approach is able to have substantial gain compared to other methods. The analysis is done by performing ablation studies on context features which are not effective on linear models.\n\nThe paper is well written. The authors make sure to give the general idea of their approach and its motivation, detail the related work and position their proposed approach with respect to it. The authors also propose a new interpretation of the baseline in “REINFORCE-like” methods where it makes the counterfactual objective equivariant (besides the variance reduction role). The authors explain their choice of using importance sampling to estimate the counterfactual risk. They also detail the arguments for using the SNIPS estimator, the mathematical derivation for the training algorithm and finally present the empirical results.\n\nBesides the fact that empirical results are impressive, the presented approach allows to train use deep nets when manually labelling full-information feedback is not viable.\n\nIn section 4.2, the neural network that has been used is a 2 layer network with tanh activation. It is clear that the intention of the authors is to show that even with a simple neural architecture, the gain is substantial compared to the baseline method, which is indeed the right approach to go with. Still, it would have been of a great benefit if they had used a deeper architecture using ReLU-based activations.\n\n", "This paper proposes a new output layer in neural networks, which allows them to use logged contextual bandit feedback for training. The paper is well written and well structured. \n\n\nGeneral feedback:\n\nI would say the problem addressed concerns stochastic learning in general, not just SGD for training neural nets. And it's not a \"new output layer\", but just a softmax output layer (Eq. 1) with an IPS+baseline training objective (Eq. 16).\n\n\nOthers:\n\n- The baseline in REINFORCE (Williams'92), which is equivalent to introduced Lagrange multiplier, is well known and well defined as control variate in Monte Carlo simulation, certainly not an \"ad-hoc heuristic\" as claimed in the paper [see Greensmith et al. (2004). Variance Reduction for Gradient Estimates in Reinforcement Learning, JMLR 5.]\n\n- Bandit to supervised conversion: please add a supervised baseline system trained just on instances with top feedbacks -- this should be a much more interesting and relevant strong baseline. 
There are multiple indications that this bandit-to-supervised baseline is hard to outperform in a number of important applications.\n\n- The final objective IPS^lambda is identical to IPS with a translated loss and thus re-introduces problems of IPS in exactly the same form that the article claims to address, namely:\n * the estimate is not bounded by the range of delta\n * the importance sampling ratios can be large; samples with high such ratios lead to larger gradients thus dominating the updates. The control variate of the SNIPS objective can be seen as defining a probability distribution over the log, thus ensuring that for each sample that sample’s delta is multiplied by a value in [0,1] and not by a large importance sampling ratio.\n * IPS^lambda introduces a grid search which takes more time and the best value for lambda might not even be tested. How do you deal with it?\n\n- As author note, IPS^lambda is very similar to an RL-baseline, so results of using IPS with it should be reported as well:\n In more detail, Note:\n 1. IPS for losses<0 and risk minimization: raise the probability of every sample in the log irrespective of the loss itself\n 2. IPS for losses>0 and risk minimization: lower the same probability\n 3. IPS^lambda: by the translation of the loss, it divides the log into 2 groups: a group whose probabilities will be lowered and a group whose probabilities will be raised (and a third group for delta=lambda but the objective will be agnostic to these)\n 4. IPS with a baseline would do something similar but changes over time, which means the above groups are not fixed and might work better. Furthermore, there is no hyperparameter/grid search required for the simple RL-baseline\n -> results of using IPS with the RL-baseline should be reported for the BanditNet rows in Table 1 and in CIFAR-10 experiments.\n\n- What is the feedback in the CIFAR-10 experiments? Assuming it's from [0..1], and given the tested range of lambdas, you should run into the same problems with IPS and its degenerate solutions for lambdas >=1.0. In general, how are your methods behaving for lambda* (corresponding to S*) such that makes all difference (delta_i - lambda*) positive or negative?\n\n- The claim of Theorem 2 in appendix B does not follow from its proof: what is proven is that the value of S(w) lies in an interval [1-e..1+e] with a certain probability for all w. It says nothing about a solution of an optimization problem of the form f(w)/S(w) or its constrained version. Actually, the proof never makes any connection to optimization.\n\n- What the appendix C basically claims is that it's not possible to get an unbiased estimate of a gradient for a certain class of non-convex ratios with a finite-sum structure. This would contradict some previously established convergence results for this type of problems: Reddi et al. (2016) Stochastic Variance Reduction for Nonconvex Optimization, ICML and Wang et al. 2013. Variance Reduction for Stochastic Gradient Optimization, NIPS. On the other hand, there seem to be no need to prove such a claim in the first claim, since the difficulty of performing self-normalized IPS on GPU should be evident, if one remembers that the normalization should run over the whole logged dataset (while only the current mini-batch is accessible to the GPU).", "We thank everybody again for their useful suggestions and we uploaded a revision of the paper. 
The main changes in the revision are as follows:\n\n- We clarified the relation to the REINFORCE baseline as part of the related work in Section 2, a more detailed paragraph at the end of Section 3.3, and a discussion of using the expected loss as a heuristic as part of the empirical results in Section 4.\n- We fixed the statement of Theorem 2 in Appendix B and added some explanation about the intuition behind it.\n- The equivariance problem as it applies to loss translations is now more formally defined in Section 3.2. We also added an intuitive explanation of what equivariance means in this context already in the introduction of the paper.\n- Since the co-author who conducted the Criteo experiments became unresponsive since submission of the paper and was removed from the author list, the remaining authors felt uncomfortable including the results and vouching for their correctness. We therefore removed them from the paper, since the main points about training deep networks with logged bandit feedback are already made by the ResNet experiments. ", "Thank you for the comments. We will follow your suggestion and elaborate on the connection to REINFORCE in the on-policy setting as outlined in the other response. The reason why we reduced the learning rate is two-fold. First, we did observe convergence issues at the setting used for cross-entropy training, and agree hat this is probably due to increased variance. In addition, note that cross-entropy and our ERM objective are simply quite different and produce gradients at different scale. More generally, there is probably more room for improving convergence speed, like the use of alternative minibatch sizes, but this is besides the main point of this paper.", "Thank you for the comments and the suggestion. We are planning to work with Criteo on further improving the results, and we agree that other architectures may perform substantially better. However, the key point of this paper is exploring the properties of our approach, not necessarily squeezing the last bit of performance out of any particular dataset. Note that the CIFAR10 results using the ResNet architecture already demonstrate that training deep and complex models using our approach is possible.", "Thank you for the detailed comments that will help us further improve the paper. We agree that we should clarify the connection and similarities to the REINFORCE baseline, and we will point out in more detail how the baseline is different in the off-policy setting we consider. First, we cannot sample new roll-outs from the current policy under consideration, which means we cannot use the standard variance-optimal baseline estimator used in REINFORCE. Second, we tried using the (estimated) expected loss of the learnt policy as the baseline as is commonly done in REINFORCE. As Figure 1 shows, it is between 0.130 and 0.083 (i.e. 0/1 loss on the test set) for the best policies we found. Figure 2 (left) shows that these baseline values are well outside of the optimum range of about [0.75-1.0]. 
Finally, the right way to modify adaptive baseline estimation procedures from REINFORCE to the off-policy setting remains an open question, and it is unclear whether gradient variance (as opposed to variance of the ERM objective) is really the key issue in batch learning from bandit feedback.\n\nTo clarify your comment \"final objective IPS^lambda is identical to IPS with a translated loss and thus re-introduces problems of IPS\": Note that none of the individual IPS^lambda is used as an estimate, and the actual estimate we are optimizing in Equation (11) is bounded. While the SNIPS estimate can substantially reduce variance, large weights can certainly still be an issue that cannot be overcome without additional side information. And while the grid search increases training time, the empirical results (especially Figure 2) show that the prediction performance is not particularly sensitive to the exact value of lambda.\n\nRegarding your comment on \"degenerate solutions for lambdas >= 1\": The solutions are not necessarily degenerate, but they are suboptimal. This is shown in Figure 2.\n\nRegarding your comment on Theorem 2: You are absolutely right, and thank you for spotting this. We should be referring to the minimizer of the true risk in the statement of the theorem, not the minimizer of the empirical risk. What the theorem is supposed to say is: limiting the search to that range is unlikely to exclude the minimizer of the true risk. We will fix this in the final version.\n\nRegarding your comment on a \"baseline system trained just on instances with top feedbacks\": We are not sure what you mean by this baseline. One possible interpretation is: collect all unique contexts, and pair each context with action that has the highest observed reward in the logged dataset. Train on this manufactured dataset using supervised learning approaches. However, we generally don't assume that we repeatedly see that same context multiple times, so it is not clear that this is really a practical baseline.\n\nRegarding your comment on Appendix C: Our result does not contradict recent results on mini-batch optimization of finite sums of non-convex functions \\sum_i f_i. While the SNIPS objective can be written as a finite-sum of non-convex f_i, mini-batch {f_i} in these problems do not correspond to our mini-batches {delta_i, impwt_i}. Since GPU can only hold a mini-batch and the normalizer requires the entire dataset, we may be tempted to explore ways of estimating the normalizer sufficiently well using mini-batches. Appendix C shows that any such approach is always going to give a biased gradient, justifying the effort in developing the Lagrangian approach instead.", "Thank you for the comments and the suggestions regarding the presentation of the results. This is very helpful and we will work them in.\n\nRegarding your observation that the performance of BanditNet slightly dips below the supervised method, we were intrigued by this ourselves and have followed up on this. While the dip in Figure 1 is not significant, other experiments have shown that optimizing an ERM objective instead of cross-entropy does seem to produce a small but consistent advantage in prediction error. We are planning to explore this further, but this is outside the scope of this paper.\n\nThe ceiling on the Criteo data is not known. But note that the logging policy is an actual production policy that Criteo is using. 
However, one has to keep in mind that Criteo may not be optimizing clicks alone, but that they may also consider other business metrics.\n", "This is a good paper. What it shows fundamentally is how to learn a better policy given batch data acquired using an old policy. There are many applications for this in industry, but what people do usually is hand-craft a new policy that might be better, then do a real-world AB test. That is a slow and expensive process.\n \nThe big practical obstacle to using the method of this paper, or of related papers, is that current production systems usually haven’t recorded probabilities for actions taken by the old policy. More fundamentally, existing production policies are usually partly deterministic, so these probabilities don’t exist in a meaningful way.\n \nSections 1 to 3.1 are a good introduction and should be read even if one doesn’t have time for the later details. One quibble is that “equivariant” should be defined and discussed earlier than the first paragraph of 3.2.\n\nBeing more explicit about intuitions would be useful. Roughly, when an added constant shifts losses to be positive numbers, policies that put as little probability mass as possible on the observed actions have low risk estimates. If the constant shifts losses to the negative range, the opposite is the case. For either choice, the new policy eventually selected by the learning algorithm can be dominated by where the historical policy happens to sample data, not by which actions have low loss.\n \nA useful note in the paper (Eqn 1) is that for a neural net to define a probabilistic policy, all that is needed is probabilities at the output layer, such as with a softmax. More fancy Bayesian methods are not needed. This is a valuable simplifier in practice and in theory.\n \n3.3 should get to the point more quickly. The actual algorithm is simply to try Eqn 16 for several different values of lambda in a range below and above 1.0. How to then choose the best lambda could be explained more clearly. Also, how does the range change when the scale of delta changes?\n \nCIFAR-10 has 60,000 labeled images with ten classes, so one could say there are 600K binary labels available. Figure 1 shows that using 220K of these is enough. It is not clear whether the better accuracy with 250K is a genuine phenomenon; if so, it needs explanation and to continue the experiment above 250K.\n \nCriteo results are strong. Question: What is the ceiling on this dataset? How close does the new algorithm get to the best possible?" ]
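For readers following the IPS/SNIPS discussion above, a minimal numpy sketch of the estimators being compared; it assumes delta holds the logged losses, p_log the logging propensities, and p_new the new policy's probabilities for the logged actions, and it is an illustration of the estimators rather than the paper's BanditNet implementation.

import numpy as np

def ips(delta, p_new, p_log):
    # Plain inverse propensity scoring estimate of the new policy's risk.
    # Adding a constant c to every loss changes this estimate by
    # c * mean(w), which depends on the policy, so translating the loss
    # can change which policy looks best (the equivariance issue).
    w = p_new / p_log
    return np.mean(delta * w)

def snips(delta, p_new, p_log):
    # Self-normalised IPS: the summed importance weights in the denominator
    # act as a control variate; translating the loss now shifts every
    # policy's estimate by the same constant, leaving the minimiser intact.
    w = p_new / p_log
    return np.sum(delta * w) / np.sum(w)

def ips_lambda(delta, p_new, p_log, lam):
    # One member of the family used in the Lagrangian decomposition:
    # plain IPS applied to the translated losses (delta - lam); an outer
    # grid search over lam recovers the SNIPS solution.
    w = p_new / p_log
    return np.mean((delta - lam) * w)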
[ 7, 8, 6, -1, -1, -1, -1, -1, -1 ]
[ 4, 3, 3, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_SJaP_-xAb", "iclr_2018_SJaP_-xAb", "iclr_2018_SJaP_-xAb", "iclr_2018_SJaP_-xAb", "HkGPbhPgG", "rk5ybxtxf", "SJVVwfoeM", "rJWkU5g-z", "iclr_2018_SJaP_-xAb" ]
iclr_2018_Byt3oJ-0W
Learning Latent Permutations with Gumbel-Sinkhorn Networks
Permutations and matchings are core building blocks in a variety of latent variable models, as they allow us to align, canonicalize, and sort data. Learning in such models is difficult, however, because exact marginalization over these combinatorial objects is intractable. In response, this paper introduces a collection of new methods for end-to-end learning in such models that approximate discrete maximum-weight matching using the continuous Sinkhorn operator. Sinkhorn iteration is attractive because it functions as a simple, easy-to-implement analog of the softmax operator. With this, we can define the Gumbel-Sinkhorn method, an extension of the Gumbel-Softmax method (Jang et al., 2016; Maddison et al., 2016) to distributions over latent matchings. We demonstrate the effectiveness of our method by outperforming competitive baselines on a range of qualitatively different tasks: sorting numbers, solving jigsaw puzzles, and identifying neural signals in worms.
accepted-poster-papers
This paper with the self-explanatory title was well received by the reviewers and, additionally, comes with available code. The paper builds on prior work (the Sinkhorn operator) but contains a significant amount of additional work to enable its application and inference in neural networks. There were no major criticisms by the reviewers, other than obvious directions for improvement that should already have been incorporated in the paper, some issues with clarity, and requests for a little more experimentation. To some extent, the authors addressed these issues in the revised version.
train
[ "HJRB8Nugf", "BkMR49YxM", "S1XwiJagG", "SyXM0-iXz", "Hka-6bjQG", "rJ3-L0tXM", "r1BqSAtmf", "BysZ6GgMG", "ryjE_x-bM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public" ]
[ "Quality: The paper is built on solid theoretical grounds and supplemented by experimental demonstrations. Specifically, the justification for using the Sinkhorn operator is given by theorem 1 with proof given in the appendix. Because the theoretical limit is unachievable, the authors propose to truncate the Sinkhorn operator at level $L$. The effect of approximation for the truncation level $L$ as well as the effect of temperature $\\tau$ are demonstrated nicely through figures 1 and 2(a). The paper also presents a nice probabilistic approach to permutation learning, where the doubly stochastic matrix arises from Gumbel matching distribution. \n\nClarity: The paper has a good flow, starting out with the theoretical foundation, description of how to construct the network, followed by the probabilistic formulation. However, I found some of the notation used to be a bit confusing.\n\n1. The notation $l$ appears in Section 2 to denote the number of iterations of Sinkhorn operator. In Section 3, the notation $l$ appears as $g_l$, where in this case, it refers to the layers in the neural network. This led me to believe that there is one Sinkhorn operator for each layer of neural network. But after reading the paper a few times, it seemed to me that the Sinkhorn operator is used only at the end, just before the final output step (the part where it says the truncation level was set to $L=20$ for all of the experiments confirmed this). If I'm correct in my understanding, perhaps different notation need to be used for the layers in the NN and the Sinkhorn operator. Additionally, it would have been nice to see a figure of the entire network architecture, at least for one of the applications considered in the paper. \n\n2. The distinction between $g$ and $g_l$ was also a bit unclear. Because the input to $M$ (and $S$) is a square matrix, the function $g$ seems to be carrying out the task of preparing the final output of the neural network into the input formate accepted by the Sinkhorn operator. However, $g$ is stated as \"the output of the computations involving $g_l$\". I found this statement to be a bit unclear and did not really describe what $g$ does; of course my understanding may be incorrect so a clarification on this statement would be helpful.\n\nOriginality: I think there is enough novelty to warrant publication. The paper does build on a set of previous works, in particular Sinkhorn operator, which achieves continuous relaxation for permutation valued variables. However, the paper proposes how this operator can be used with standard neural network architectures for learning permutation valued latent variable. The probabilistic approach also seems novel. The applications are interesting, in particular, it is always nice to see a machine learning method applied to a unique application; in this case from computational neuroscience.\n\nOther comments:\n\n1. What are the differences between this paper and the paper by Adams and Zemel (2011)? Adams and Zemel also seems to propose Sinkhorn operator for neural network. Although they focus only on the document ranking problem, it would be good to hear the authors' view on what differentiates their work from Adams and Zemel.\n\n2. As pointed out in the paper, there is a concurrent work: DeepPermNet. Few comments regarding the difference between their work and this work would also be helpful as well.\n\nSignificance: The Sinkhorn network proposed in the paper is useful as demonstrated in the experiments. 
The methodology appears to be straight forward to implement using the existing software libraries, which should help increase its usability. \n\nThe significance of the paper can greatly improve if the methodology is applied to other popular machine learning applications such as document ranking, image matching, DNA sequence alignment, and etc. I wonder how difficult it is to extend this methodology to bipartite matching problem with uneven number of objects in each partition, which is the case for document ranking. And for problems such as image matching (e.g., matching landmark points), where each point is associated with a feature (e.g., SIFT), how would one formulate such problem in this setting? \n", "Learning latent permutations or matchings is inherently difficult because the marginalization and partition function computation problems at its core are intractable. The authors propose a new method that approximates the discrete max-weight matching by a continuous Sinkhorn operator, which looks like an analog of softmax operator on matrices. They extend the Gumbel softmax method (Jang et al., Maddison et al. 2016) to define a Gumbel-Sinkhorn method for distributions over latent matchings. Their empirical study shows that this method outperforms competitive baselines for tasks such as sorting numbers, solving jigsaw puzzles etc.\n\nIn Theorem 1, the authors show that Sinkhorn operator solves a certain entropy-regularized problem over the Birkhoff polytope (doubly stochastic matrices). As the regularization parameter or temperature \\tau tends to zero, the continuous solution approaches the desired best matching or permutation. An immediate question is, can one show a convergence bound to determine a reasonable choice of \\tau?\n\nThe authors use the Gumbel trick that recasts a difficult sampling problem as an easier optimization problem. To get around non-differentiable re-parametrization under the Gumbel trick, they extend the Gumbel softmax distribution idea (Jang et al., Maddison et al. 2016) and consider Gumbel-Sinkhorn distributions. They illustrate that at low temperature \\tau, Gumbel-matching and Gumbel-Sinkhorn distributions are indistinguishable. This is still not sufficient as Gumbel-matching and Gumbel-Sinkhorn distributions have intractable densities. The authors address this with variational inference (Blei et al., 2017) as discussed in detail in Section 5.4.\n\nThe empirical results do well against competitive baselines. They significantly outperform Vinyals et al. 2015 by sorting up to N = 120 uniform random numbers in [0, 1] with great accuracy < 0.01, as opposed to Vinyals et al. who used a more complex recurrent neural network even for N = 15 and accuracy 0.9. \n\nThe empirical study on jigsaw puzzles over MNIST, Celeba, Imagenet gives good results on Kendall tau, l1 and l2 losses, is slightly better than Cruz et al. (arxiv 2017) for Kendall tau on Imagenet 3x3 but does not have a significant literature to compare against. I hope the other reviewers point out references that could make this comparison more complete and meaningful.\n\nThe third empirical study on the C. elegans neural inference problem shows significant improvement over Linderman et al. (arxiv 2017).\n\nOverall, I feel the main idea and the experiments (especially, the sorting and C. elegance neural inference) merit acceptance. 
I am not an expert in this line of research, so I hope other reviewers can more thoroughly examine the heuristics discussed by the authors in Section 5.4 and Appendix C.3 to get around the intractable sub-problems in their approach. ", "The idea on which the paper is based - that the limit of the entropic regularisation over Birkhoff polytope is on the vertices = permutation matrices -, and the link with optimal transport, is very interesting. The core of the paper, Section 3, is interesting and represents a valuable contribution.\n\nI am wondering whether the paper's approach and its Theorem 1 can be extended to other regularised versions of the optimal transport cost, such as this family (Tsallis) that generalises the entropic one:\n\nhttps://aaai.org/ocs/index.php/AAAI/AAAI17/paper/view/14584/14420\n\nAlso, it would be good to keep in mind the actual proportion of errors that would make a random choice of a permutation matrix for your Jigsaws. When you look at your numbers, the expected proportion of parts wrong for a random assignment could be competitive with your results on the smallest puzzles (typically, 2x2). Perhaps you can put the *difference* between your result and the expected result of a random permutation; this will give a better understanding of what you gain from the non-informative baseline.\n(also, it would be good to define \"Prop. wrong\" and \"Prop. any wrong\". I think I got it but it is better to be written down)\n\nThere should also be better metrics for bigger jigsaws -- for example, I would accept bigger errors if pieces that are close in the solution tend also to be put close in the err'ed solution.\n\nTypos:\n\n* Rewrite definition 2 in appendix. Some notations do not really make sense.", "Dear community, we have uploaded a revised version of our manuscript. Our changes are summarized as follows:\n\n1) The reviewers provided very useful criticism, suggesting directions for improvement and detecting current minor weaknesses/inconsistencies. We have implemented their suggestions. Our responses to individual reviewers contain those changes. As a result, now we can present a stronger paper.\n2) We corrected other typos and/or notational inconsistencies.\n3) To improve the flow of the narrative, we changed the current exposition of the approximate posterior inference method, by moving it from the appendix B.2 and section 5.4 back to section 4. \n4) We released code for the Gumbel-Sinkhorn estimator, applied to the number sorting problem. This actual link will be included in the final version, to preserve anonymity.\n5) We expanded our Related Work section to discuss our work in the light of recent literature; notably a concurrent ICLR submission”Improving GANs Using Optimal Transport\n https://openreview.net/forum?id=rkQkBnJAb and Learning Generative Models with Sinkhorn Divergences https://arxiv.org/abs/1706.00292 \n6)We included a new Figure (new Figure 1), depicting the neural network architecture. Previous Figure 1 was moved to the appendix, as Figure 3\n", "We thank the reviewer for their thorough evaluation and thoughtful comments..\n\n1) Regarding Tsallis entropy, we recognize it is a quite interesting direction, as it allows to better understand our work in terms of information geometry (e.g. [1]). In a revised version we will include a commentary on how Theorem 1 may be interpreted as a way of performing marginal in exponential families (e.g. see [2]). With this, it will become clear using Tsallis entropy would yield yet another approximation. 
However, it is not clear that increases in computational complexity would justify the use of this type of entropy. See also response to AnonReviewer 3 for a complementary discussion.\n\n2) By “proportion wrong” we mean the proportion of wrongly identified pieces. By “proportion any wrong” we consider the proportion of cases (entire puzzles) where there was at least one mistake. The latter was used as a performance measure in [3], and it is much more stringent. \tWe will clarify this in the main text.\n \n3)Regarding the proportion wrong: the expected value of the proportion of errors in random guessing a permutation of n items can be shown to be (n-1)/(n). In 2x2 puzzles, this means we expect 75% of wrong pieces at random, but in practice no errors occur with our method (that is an easy case, though). We will comment on this baseline in a revised version\n\n4) We agree there might be better ways to measure error. For example, if we shift one row by one and put the last element at first, then the error on that row will be 100%. It will be also high according to the l1 and l2 norms. However, the solution still makes sense from the point of view of preserving locality constraints and e.g. still look good on vision tasks. Because of this we believe we should move towards using metrics that take into consideration the structure of local coherence between pieces. Unfortunately, that goes beyond the scope of this work.\n\n 5) We will correct the typo and minor inconsistencies in the main text and appendix.\n\nWe hope the reviewer will take the reviews with higher degree of confidence into consideration.\n\n[1]S.I. Amari. Information geometry and its applications http://www.springer.com/gp/book/9784431559771\n[2] Graphical Models, Exponential Families, and Variational Inference Martin J. Wainwright. and Michael I. Jordan. https://people.eecs.berkeley.edu/~wainwrig/Papers/WaiJor08_FTML.pdf\n[3] Order Matters: Sequence to sequence for sets. Oriol Vinyals, Samy Bengio, Manjunath Kudlur]https://arxiv.org/abs/1511.06391\n", "We thank the reviewer for the good evaluation of our paper, and the useful commentary.\n\n1) We unintentionally overloaded the $l$ index, and we will fix this final paper. The Sinkhorn operator is only applied to the output of the last layer of the neural network. We will include a new figure (in the appendix) depicting our architecture.\n\n2) We also agree the notation with $g$ and $g_l$ currently is odd. We will make an attempt to improve exposition in the main text, by defining g as the composition of g_l. The new figure that depicts the architecture should also help.\n\n3) Our work drew some inspiration from Adams and Zemel, but it has clear differences: i) we consider different tasks, beyond ranking; ii) we use a shared architecture to save parameters (permutation equivariance); and iii) in Adams and Zemel the Sinkhorn operator is used to heuristically approximate an expectation over permutations. Theorem 1 allows us to justify that approximation: by appealing to the framework of Variational Inference in exponential families [1] we can understand the Sinkhorn operator as an approximate marginal inference routine. We will briefly comment on that in the related work section and as a new last appendix (see also response to AnonReviewer2, point 1). 
Also, we will improve this section in order to make these distinctions more clear.\n\n4) DeepPermNet obtains results that are comparable to ours; however, our architecture is much simpler, as argued in the results section and appendix B.2.\n\n5) We agree that there are many ways in which this work could be extended, and we are actively investigating some of them. In cases of matchings between groups of different sizes, there is a simple extension: pad the cost matrix with zeros so that its row and column dimensions coincide. Also, regarding image matchings, they may be achieved by changing the architecture slightly: this is indeed explored in a simultaneous ICLR submission [2], dealing with generative models from an optimal transportation perspective. There, a network is trained to match samples from two datasets (the actual data and samples from the generative model) so that minimizes a total cost functional that is the distance on some learned embedding space.\n\n\n[1] Graphical Models, Exponential Families, and Variational Inference Martin J. Wainwright. and Michael I. Jordan. https://people.eecs.berkeley.edu/~wainwrig/Papers/WaiJor08_FTML.pdf\n[2]Improving GANs Using Optimal Transport. https://openreview.net/forum?id=rkQkBnJAb\n\n", "We thank the reviewer for their good evaluation of our paper, and the useful commentary.\n\n1) The question of convergence bounds is a quite relevant one, and we stress there is a double limit, involving tau and L. A rigorous analysis of such convergence goes beyond the scope of our work, but we point to recent convergence bounds results [1] in the more general entropy regularized OT problem, that in our case expresses in terms of optimization over the Birkhoff polytope. We believe research as in [1] is highly relevant to our work, as it suggests ways to obtain computational improvements by suitably tweaking the plain Sinkhorn iteration scheme. We plan to explore this research avenue in the future. For now, choice of tau is treated as a hyperparameter, we select it so that performance is optimal. This is discussed in the main text (section 3) and the appendix C.1\n\n2) We did not include more results related to jigsaw solving with neural networks as this problem is very recent in the context of neural networks. Nonetheless, we include a reference to another paper [2] that deals with jigsaw puzzles using neural networks, although comparisons are impossible since they work on a i) different dataset (Pascal VOC) and ii)their method does not scale with permutations, as it does not appeal to Sinkhorn operator but indexes each permutation as a separate entity, and limits the number of used permutations to at most 1000.\n\n\n[1]Near-linear time approximation algorithms for optimal transport via Sinkhorn iteration Jason Altschuler∗ , Jonathan Weed∗ , and Philippe Rigollet. https://arxiv.org/pdf/1705.09634.pdf\n The concrete distribution. https://arxiv.org/abs/1611.00712\n[2]M. Noroozi and P. Favaro. Unsupervised learning of visual representations by solving jigsaw puzzles.https://arxiv.org/abs/1603.09246 ", "Dear Patrick,\n\nThanks for the kind words about our work. We are thankful that you noticed our mistake. It has been amended in the revised version we are soon to submit. We also extrapolated your remark and checked for consistency in the remaining formulae at appendix B2.\nThe authors", "Thank you for this excellent work!\n\nIf I am not mistaken, I believe there is a small typo in Appendix B.2. 
The number of parameters of the simple network architecture should be \"n_u + N x n_u\", not \"N + N x n_u\". Since the hidden layer has n_u units, you need n_u parameters to map each number to an n_u-dimensional space. Then, it takes N x n_u parameters to produce each N-dimensional row of g(X; theta)? \n\n " ]
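A minimal numpy sketch of the truncated Sinkhorn operator and Gumbel-Sinkhorn sampling discussed throughout the reviews above; the truncation level and temperature are illustrative defaults, and this is not the authors' implementation.

import numpy as np
from scipy.special import logsumexp

def sinkhorn(log_alpha, n_iters=20):
    # Truncated Sinkhorn operator: alternately normalise rows and columns
    # in the log domain; the output approaches a doubly stochastic matrix.
    for _ in range(n_iters):
        log_alpha = log_alpha - logsumexp(log_alpha, axis=1, keepdims=True)
        log_alpha = log_alpha - logsumexp(log_alpha, axis=0, keepdims=True)
    return np.exp(log_alpha)

def gumbel_sinkhorn(scores, tau=1.0, n_iters=20, rng=np.random):
    # Perturb the square score matrix with Gumbel noise and divide by the
    # temperature tau before applying the Sinkhorn map, mirroring
    # Gumbel-softmax for categorical variables.
    u = rng.uniform(low=1e-12, high=1.0, size=scores.shape)
    gumbel = -np.log(-np.log(u))
    return sinkhorn((scores + gumbel) / tau, n_iters)

As the temperature tau is lowered, the output concentrates near a permutation matrix, which is the limit invoked in Theorem 1. The random-guess baseline mentioned in the author responses also follows from a short calculation: a uniform random permutation places each piece correctly with probability 1/n, so the expected number of correct pieces is 1 and the expected proportion wrong is (n-1)/n.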
[ 8, 7, 6, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 2, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_Byt3oJ-0W", "iclr_2018_Byt3oJ-0W", "iclr_2018_Byt3oJ-0W", "iclr_2018_Byt3oJ-0W", "S1XwiJagG", "HJRB8Nugf", "BkMR49YxM", "ryjE_x-bM", "iclr_2018_Byt3oJ-0W" ]
iclr_2018_rk07ZXZRb
Learning an Embedding Space for Transferable Robot Skills
We present a method for reinforcement learning of closely related skills that are parameterized via a skill embedding space. We learn such skills by taking advantage of latent variables and exploiting a connection between reinforcement learning and variational inference. The main contribution of our work is an entropy-regularized policy gradient formulation for hierarchical policies, and an associated, data-efficient and robust off-policy gradient algorithm based on stochastic value gradients. We demonstrate the effectiveness of our method on several simulated robotic manipulation tasks. We find that our method allows for discovery of multiple solutions and is capable of learning the minimum number of distinct skills that are necessary to solve a given set of tasks. In addition, our results indicate that the hereby proposed technique can interpolate and/or sequence previously learned skills in order to accomplish more complex tasks, even in the presence of sparse rewards.
accepted-poster-papers
This paper introduces a hierarchical RL method that incorporates the learning of a latent space, which enables the sharing of learned skills. The reviewers unanimously rate this as a good paper. They suggest that it can be further improved by demonstrating its effectiveness through more experiments, especially since this is a rather generic framework. To some extent, the authors have addressed this concern in the rebuttal.
test
[ "Hke9_IpVf", "H1kdvIp4M", "r1aiqauxf", "Hk2Ttk_NM", "Hk9a7-qlG", "HkEQMXAxz", "S1b5e76mM", "rJFSl767z", "Hy0-eQp7G" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "We would you like to notify the reviewer that the pdf has been updated with the requested changes including the new experiment with the embedding pre-trained on all 6 tasks.", "Dear reviewers, \nWe would like to let you know that we have updated the manuscript with the changes requested in your reviews. Thank you again for your feedback. ", "In this paper, (previous states, action) pairs and task ids are embedded into the same latent space with the goal of generalizing and sharing across skill variations. Once the embedding space is learned, policies can be modified by passing in sampled or learned embeddings.\n\nNovelty and Significance: To my knowledge, using a variational approach to embedding robot skills is novel. Significantly, the embedding is learned from off-policy trajectories, indicating feasibility on a real-world setting. The manipulation experiments show nice results on non-trivial tasks. However, no comparisons are shown against prior work in multitask or transfer learning. Additionally, the tasks used to train the embedding space were tailored exactly to the target task, making it unclear that this method will work generally.\n\nQuestions:\n- I am not sure how to interpret Figure 3. Do you use Bernoulli in the experiments?\n- How many task IDs are used for each experiment? 2?\n- Are the manipulation experiments learned with the off-policy variant?\n- Figure 4b needs the colors to be labeled. Video clips of the samples would be a plus.\n- (Major) For the experiments, only exactly the useful set of tasks is used to train the embedding. What happens if a single latent space is learned from all the tasks, and Spring-wall, L-wall, and Rail-push are each learned from the same embedding. \n\nI find the method to be theoretically interesting and valuable to the learning community. However, the experiments are not entirely convincing.\n", "I would really like to see an experiment where an embedding space is trained on a wider variety of tasks rather than just what is needed to generalize to the target task. However, I find that this paper is a valuable contribution to ICLR, and I think that it should be accepted.\n\nAs ICLR allows the authors to upload a new pdf, I do not understand why the author response only said they would make changes in the final version (especially for things like labeling a figure). ", "The submission tackles an important problem of learning and transferring multiple motor skills. The approach relies on using an embedding space defined by latent variables and entropy-regularized policy gradient / variational inference formulation that encourages diversity and identifiability in latent space.\n\nThe exposition is clear and the method is well-motivated. I see no issues with the mathematical correctness of the claims made in the paper. The experimental results are both instructive of how the algorithm operates (in the particle example), and contain impressive robotic results. I appreciated the experiments that investigated cases where true number of tasks and the parameter T differ, showing that the approach is robust to choice of T.\n\nThe submission focuses particularly on discrete tasks and learning to sequence discrete tasks (as training requires a one-hot task ID input). 
I would like a bit of discussion on whether parameterized skills (that have continuous space of target location, or environment parameters, for example) can be supported in the current formulation, and what would be necessary if not.\n\nOverall, I believe this is in interesting piece of work at a fruitful intersection of reinforcement learning and variational inference, and I believe would be of interest to ICLR community.", "The paper presents a new approach for hierarchical reinforcement learning which aims at learning a versatile set of skills. The paper uses a variational bound for entropy regularized RL to learn a versatile latent space which represents the skill to execute. The variational bound is used to diversify the learned skills as well as to make the skills identifyable from their state trajectories. The algorithm is tested on a simple point mass task and on simulated robot manipulation tasks.\n\nThis is a very intersting paper which is also very well written. I like the presented approach of learning the skill embeddings using the variational lower bound. It represents one of the most principled approches for hierarchical RL. \n\nPros: \n- Interesting new approach for hiearchical reinforcement learning that focuses on skill versatility\n- The variational lower bound is one of the most principled formulations for hierarchical RL that I have seen so far\n- The results are convincing\n\nCons:\n- More comparisons against other DRL algorithms such as TRPO and PPO would be useful\n\nSummary: This is an interesting deep reinforcement learning paper that introduces a new principled framework for learning versatile skills. This is a good paper.\n\nMore comments:\n- There are several papers that focus on learning versatile skills in the context of movement primitive libraries, see [1],[2],[3]. These papers should be discussed.\n\n[1] Daniel, C.; Neumann, G.; Kroemer, O.; Peters, J. (2016). Hierarchical Relative Entropy Policy Search, Journal of Machine Learning Research (JMLR),\n[2] End, F.; Akrour, R.; Peters, J.; Neumann, G. (2017). Layered Direct Policy Search for Learning Hierarchical Skills, Proceedings of the International Conference on Robotics and Automation (ICRA).\n[3] Gabriel, A.; Akrour, R.; Peters, J.; Neumann, G. (2017). Empowered Skills, Proceedings of the International Conference on Robotics and Automation (ICRA).\n", "We are grateful for the insightful comments and suggestions.\n\nPlease find the answers to the inline questions below, we will clarify all of these points in the final version of the paper.\n- I am not sure how to interpret Figure 3. Do you use Bernoulli in the experiments? \n- A Bernoulli distribution is only used for for Figure 3 to demonstrate that our method can work with other distributions.\n\n- How many task IDs are used for each experiment? 2?\n- Yes, T was set to 2 for the manipulation experiments.\n\n- Are the manipulation experiments learned with the off-policy variant?\n- That is correct. All experiments were performed in an off-policy setting. This decision was made due to the higher sample-efficiency of the off-policy methods.\n\n- Figure 4b needs the colors to be labeled. Video clips of the samples would be a plus.\n- We will add the labels and address this problem in the final version of the paper\n\nRegarding the last question on training the embedding space on all of the tasks; we are currently working on this experiment and are planning to include it in the final version of the paper. 
It is worth noting that the multi-task RL training can be challenging (especially with poorly scaled rewards) and it remains an open problem that is beyond the scope of this work. Our method presents a solution to the problem of finding an embedding space that enables re-using, interpolating and sequencing previously learned skills, with the assumption that the RL agent was able to learn them in the first place. However, we strongly believe that the off-policy setup presented in this work has much more flexibility than its on-policy equivalents as to how to address the multi-task RL problem.\n", "We thank the reviewer for their comments and suggestions.\n\nOur method does indeed support parameterized skills as suggested by the reviewer. For instance, the low-level policy could receive an embedding conditioned on a continuous target location instead of the task ID (given a suitable embedding space). It is also not limited to the multi-task setting, i.e., the number of tasks T used for training can be set to 1 (as explored in the point-mass experiments). We will add this to the discussion in the paper.", "We very much appreciate the reviewer’s comments and suggestions. \n\nRegarding the comparison to other on-policy methods such as TRPO or PPO, we would like to emphasize that the presented approach is mostly independent of the underlying RL learning algorithm. In fact, it will be easier to implement our approach in the on-policy setup. The off-policy setup with experience replay that we are considering requires additional care due to the embedding variable which we also maintain in the replay buffer. In Section 5, we present all the modifications necessary to run our method in the more data-efficient off-policy setup, which we believe is crucial to running it on the real robots in the future.\n\nWe would also like to thank the reviewer for pointing out the additional references - we will be very happy to include them. While some of the high-level ideas are related, there are differences both in the formulation and the algorithmic framework. An important aspect of our work is that we show how to apply entropy-regularized RL with latent variables when working with neural networks and in an off-policy setting, avoiding both the burden of using a limited number of hand-crafted features and allowing for data-efficient learning.\n" ]
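As a minimal sketch of the conditioning mechanism discussed in these exchanges (names and shapes are illustrative, not the authors' architecture): the task id only selects a distribution over the skill embedding z, and the low-level policy observes the state concatenated with z, which is what allows sampling, interpolating, or re-using skills.

import numpy as np

def sample_skill_embedding(task_id, mu, log_sigma, rng=np.random):
    # One Gaussian per task id over the latent skill space; z is drawn
    # with the reparameterisation z = mu + sigma * eps.
    eps = rng.normal(size=mu[task_id].shape)
    return mu[task_id] + np.exp(log_sigma[task_id]) * eps

def policy_observation(state, z):
    # The low-level policy is conditioned on the embedding rather than on
    # the one-hot task id, so z could equally come from interpolation or
    # from an encoder of a continuous goal such as a target location.
    return np.concatenate([state, z])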
[ -1, -1, 7, -1, 7, 7, -1, -1, -1 ]
[ -1, -1, 4, -1, 4, 5, -1, -1, -1 ]
[ "Hk2Ttk_NM", "iclr_2018_rk07ZXZRb", "iclr_2018_rk07ZXZRb", "r1aiqauxf", "iclr_2018_rk07ZXZRb", "iclr_2018_rk07ZXZRb", "r1aiqauxf", "Hk9a7-qlG", "HkEQMXAxz" ]
iclr_2018_S1DWPP1A-
Unsupervised Learning of Goal Spaces for Intrinsically Motivated Goal Exploration
Intrinsically motivated goal exploration algorithms enable machines to discover repertoires of policies that produce a diversity of effects in complex environments. These exploration algorithms have been shown to allow real world robots to acquire skills such as tool use in high-dimensional continuous state and action spaces. However, they have so far assumed that self-generated goals are sampled in a specifically engineered feature space, limiting their autonomy. In this work, we propose an approach using deep representation learning algorithms to learn an adequate goal space. This is a developmental 2-stage approach: first, in a perceptual learning stage, deep learning algorithms use passive raw sensor observations of world changes to learn a corresponding latent space; then goal exploration happens in a second stage by sampling goals in this latent space. We present experiments with a simulated robot arm interacting with an object, and we show that exploration algorithms using such learned representations can closely match, and even sometimes improve, the performance obtained using engineered representations.
accepted-poster-papers
This paper aims to improve on the intrinsically motivated goal exploration framework by additionally incorporating representation learning for the space of goals. The paper is well motivated and follows a significant direction of research, as agreed by all reviewers. In particular, it provides a means for learning in complex environments, where manually designed goal spaces would not be available in practice. There had been significant concerns over the presentation of the paper, but the authors put great effort in improving the manuscript according to the reviewers’ suggestions, raising the average rating by 2 points after the rebuttal.
train
[ "HJcQvaVef", "ByvGgjhez", "Bk9oIe5gG", "ByeHdGpXM", "Sk-lOfTmz", "rywYwMaXz", "SkrQDzp7M", "B1JbLzTQz", "S1h1Bz6Qf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author" ]
[ "This paper introduces a representation learning step in the Intrinsically Motivated Exploration Process (IMGEP) framework.\n\nThough this work is far from my expertise fields I find it quite easy to read and a good introduction to IMGEP.\nNevertheless I have some major concerns that prevent me from giving an acceptance decision.\n\n1) The method uses mechanisms than can project back and forth a signal to the \"outcome\" space. Nevertheless only the encoder/projection part seems to be used in the algorithm presented p6. For example the encoder part of an AE/VAE is used as a preprocesing stage of the phenomenon dynamic D. It should be obviously noticed that the decoder part could also be used for helping the inverse model I but apparently that is not the case in the proposed method.\n\n2) The representation stage R seems to be learned at the beginning of the algorithm and then fixed. When using DNN as R (when using AE/VAE) why don't you propagate a gradient through R when optimizing D and I ? In this way, learning R at the beginning is only an old good pre-training of DNN with AE.\n\n3) Eventually, Why not directly considering R as lower layers of D and using up to date techniques to train it ? (drop-out, weight clipping, batch normalization ...).\nWhy not using architecture adapted to images such as CNN ?\n\n", "[Edit: After revisions, the authors have made a good-faith effort to improve the clarity and presentation of their paper: figures have been revised, key descriptions have been added, and (perhaps most critically) a couple of small sections outlining the contributions and significance of this work have been written. In light of these changes, I've updated my score.]\n\nSummary:\n\nThe authors aim to overcome one of the central limitations of intrinsically motivated goal exploration algorithms by learning a representation without relying on a \"designer\" to manually specify the space of possible goals. This work is significant as it would allow one to learn a policy in complex environments even in the absence of a such a designer or even a clear notion of what would constitute a \"good\" distribution of goal states.\n\nHowever, even after multiple reads, much of the remainder of the paper remains unclear. Many important details, including the metrics by which the authors evaluate performance of their work, can only be found in the appendix; this makes the paper very difficult to follow.\n\nThere are too many metrics and too few conclusions for this paper. The authors introduce a handful of metrics for evaluating the performance of their approach; I am unfamiliar with a couple of these metrics and there is not much exposition justifying their significance and inclusion in the paper. Furthermore, there are myriad plots showing the performance of the different algorithms, but very little explanation of the importance of the results. For instance, in the middle of page 9, it is noted that some of the techniques \"yield almost as low performance as\" the randomized baseline, yet no attempt is made to explain why this might be the case or what implications it has for the authors' approach. 
This problem pervades the paper: many metrics are introduced for how we might want to evaluate these techniques, yet there is no provided reason to prefer one over another (or even why we might want to prefer them over the classical techniques).\n\nOther comments:\n- There remain open questions about the quality of the MSE numbers; there are a number of instances in which the authors cite that the \"Meta-Policy MSE is not a simple to interpret\" (The remainder of this sentence is incomplete in the paper), yet little is done to further justify why it was used here, or why many of the deep representation techniques do not perform very well.\n- The authors do not list how many observations they are given before the deep representations are learned. Why is this? Additionally, is it possible that not enough data was provided?\n- The authors assert that 10 dimensions was chosen arbitrarily for the size of the latent space, but this seems like a hugely important choice of parameter. What would happen if a dimension of 2 were chosen? Would the performance of the deep representation models improve? Would their performance rival that of RGE-FI?\n- The authors should motivate the algorithm on page 6 in words before simply inserting it into the body of the text. It would improve the clarity of the paper.\n- The authors need to be clearer about their notation in a number of places. For instance, they use \\gamma to represent the distribution of goals, yet it does not appear on page 7, in the experimental setup.\n- It is never explicitly mentioned exactly how the deep representation learning methods will be used. It is pretty clear to those who are familiar with the techniques that the latent space is what will be used, but a few equations would be instructive (and would make the paper more self-contained).\n\nIn short, the paper has some interesting ideas, yet lacks a clear takeaway message. Instead, it contains a large number of metrics and computes them for a host of different possible variations of the proposed techniques, and does not include significant explanation for the results. Even given my lack of expertise in this subject, the paper has some clear flaws that need addressing.\n\nPros:\n- A clear, well-written abstract and introduction\n- While I am not experienced enough in the field to really comment on the originality, it does seem that the approach the authors have taken is original, and applies deep learning techniques to avoid having to custom-design a \"feature space\" for their particular family of problems.\n\nCons:\n- The figure captions are all very \"matter-of-fact\" and, while they explain what each figure shows, provide no explanation of the results. The figure captions should be as self-contained as possible (I should be able to understand the figures and the implications of the results from the captions alone).\n- There is not much significance in the current form of the paper, owing to the lack of clear message. While the overarching problem is potentially interesting, the authors seem to make very little effort to draw conclusions from their results. I.e. it is difficult for me to easily visualize all of the \"moving parts\" of this work: a figure showing the relationship bet\n- Too many individual ideas are presented in the paper, hurting clarity. As a result, the paper feels scattered. 
The authors do not have a clear message that neatly ties the results together.", "The paper investigates different representation learning methods to create a latent space for intrinsic goal generation in guided exploration algorithms. The research is in principle very important and interesting.\n\nThe introduction discusses a great deal about intrinsic motivations and about goal generating algorithms. This is really great, just that the paper only focuses on a very small aspect of learning a state representation in an agent that has no intrinsic motivation other than trying to achieve random goals.\nI think the paper (not only the Intro) could be a bit condensed to more concentrate on the actual contribution. \n\nThe contribution is that the quality of the representation and the sampling of goals is important for the exploration performance and that classical methods like ISOMap are better than Autoencoder-type methods. \n\nAlso, it is written in the Conclusions (and in other places): \"[..] we propose a new intrinsically Motivated goal exploration strategy....\". This is not really true. There is nothing new with the intrinsically motivated selection of goals here, just that they are in another space. Also, there is no intrinsic motivation. I also think the title is misleading.\n\nThe paper is in principle interesting. However, I doubt that the experimental evaluations are substantial enough for profound conclusion. \n\nSeveral points of critic: \n- the input space was very simple in all experiments, not suitable for distinguishing between the algorithms, for instance, ISOMap typically suffers from noise and higher dimensional manifolds, etc.\n- only the ball/arrow was in the input image, not the robotic arm. I understand this because in phase 1 the robot would not move, but this connects to the next point:\n- The representation learning is only a preprocessing step requiring a magic first phase.\n -> Representation is not updated during exploration\n- The performance of any algorithm (except FI) in the Arm-Arrow task is really bad but without comment. \n- I am skeptical about the VAE and RFVAE results. The difference between Gaussian sampling and the KDE is a bit alarming, as the KL in the VAE training is supposed to match the p(z) with N(0,1). Given the power of the encoder/decoder it should be possible to properly represent the simple embedded 2D/3D manifold and not just a very small part of it as suggested by Fig 10. \nI have a hard time believing these results. I urge you to check for any potential errors made. If there are not mistakes then this is indeed alarming.\n\nQuestions:\n- Is it true that the robot always starts from same initial condition?! Context=Emptyset. \n- For ISOMap etc, you also used a 10dim embedding?\n\nSuggestion:\n- The main problem seems to be that some algorithms are not representing the whole input space.\n- an additional measure that quantifies the difference between true input distribution and reproduced input distribution could tier the algorithms apart and would measure more what seems to be relevant here. One could for instance measure the KL-divergence between the true input and the sampled (reconstructed) input (using samples and KDE or the like). 
\n- This could be evaluated on many different inputs (also those with a bit more complicated structure) without actually performing the goal finding.\n- BTW: I think Fig 10 is rather illustrative and should be somehow in the main part of the paper\n \nOn the positive side, the paper provides lots of details in the Appendix.\nAlso, it uses many different Representation Learning algorithms and uses measures from manifold learning to access their quality.\n\nIn the related literature, in particular concerning the intrinsic motivation, I think the following papers are relevant:\nJ. Schmidhuber, PowerPlay: training an increasingly general problem solver by continually searching for the simplest still unsolvable problem. Front. Psychol., 2013.\n\nand\n\nG. Martius, R. Der, and N. Ay. Information driven self-organization of complex robotic behaviors. PLoS ONE, 8(5):e63400, 2013.\n\n\nTypos and small details:\np3 par2: for PCA you cited Bishop. Not critical, but either cite one the original papers or maybe remove the cite altogether\np4 par-2: has multiple interests...: interests -> purposes?\np4 par-1: Outcome Space to the agent is is ...\nSec 2.2 par1: are rapidly mentioned... -> briefly\nSec 2.3 ...Outcome Space O, we can rewrite the architecture as:\n and then comes the algorithm. This is a bit weird\nSec 3: par1: experimental campaign -> experiments?\np7: Context Space: the object was reset to a random position or always to the same position?\nFootnote 14: superior to -> larger than\np8 par2: Exploration Ratio Ratio_expl... probably also want to add (ER) as it is later used\nSec 4: slightly underneath -> slightly below\np9 par1: unfinished sentence: It is worth noting that the....\none sentence later: RP architecture? RPE?\nFig 3: the error of the methods (except FI) are really bad. An MSE of 1 means hardly any performance!\np11 par2: for e.g. with the SAGG..... grammar?\n\nPlots in general: use bigger font sizes.\n\n", "These comments suggest that the reviewer thinks that in the particular experiment we made, and thus the particular implementation of IMGEPs we used, we are training a single large neural network for learning forward and inversed models. We could have done this indeed, and in that case the reviewer' suggestion would recommend very relevantly to use the lower-layers and/or decoding projection of the (variational) auto-encoders. However, we are not using neural networks for learning forward and inverse models, but rather non-parametric methods based on memorizing examplars associating the parameters of DMPs and their outcomes in the embedding space (which itself comes from auto-encoders),\nin combination with local online regression models and optimization on these local models. This approach comes from the field of robotics, where is has shown extremely efficient for fast incremental learning of forward and inverse models. Comparing this approach with a full neural network approach (which might generalize better but have difficulties for fast incremental learning) would be a great topic for another paper. In the new version of the article, we have tried to improve the clarity of the description of the particular implementation of IMGEPs we have used. 
", "> R1 \"The performance of any algorithm (except FI) in the Arm-Arrow task is really bad but without comment.\"\nSee general answer and new graphs in the paper: most algorithms actually perform very well from the main perspective of interest in the paper (exploration efficiency).\n\n> R1 \"- I am skeptical about the VAE and RFVAE results. If there are not mistakes then this is indeed alarming.\"\n> R1 \"- The main problem seems to be that some algorithms are not representing the whole input space.\n\nFollowing your remark, we double checked the code and made an in depth verification of results. A small bug indeed existed, which made the projection of points in latent space wider than it should be. This was fixed in those new experiments, and we validated that the whole input space was represented in the latent representation. Despite this, it didn't changed the conclusion drawn in the original paper. Indeed, our new results show the same type of behavior as in the first version, in particular:\n\t+ The exploration performances for VAE with KDE goal sampling distribution are still above Gaussian goal Sampling. Our experiments showed that convergence on the KL term of the loss can be more or less quick depending on the initialization. Since we used an number of iterations as stopping criterion for our trainings (based on early experiments), we found that sometimes, at stop, despite achieving a low reconstruction error, the divergence was still pretty high. In those cases the representation was not perfectly matching an isotropic gaussian, which lead to biased sampling.\n + The performances of the RFVAE are still worse than any other algorithms. Our experiments showed that they introduce a lot of discontinuities in the representation, which along with physics boundaries of achievable states, can generate \"pockets\" in the representation from which a Random Goal Exploration can't escape. This would likely be different for a more advanced exploration strategy such as Active Goal exploration. \n \n> R1 - Is it true that the robot always starts from same initial condition?! Context=Emptyset. \n\nyes. In (Forestier et al., ICDL-Epirob 2016), a similar setup is used except that the starting conditions are randomized at each new episode (and that goal representation are engineered): they show that the dynamics of exploration scales wells. Here we chose to start from the same initial condition to be able to display clearly in 2D the full space of discovered outcomes (if one would include the starting ball position, this would be a 4D space). \n\n> R1 - For ISOMap etc, you also used a 10dim embedding?\n\nyes.\n\n>In the related literature, in particular concerning the intrinsic motivation, I think the following papers are relevant:\n>J. Schmidhuber, PowerPlay: training an increasingly general problem solver by continually searching for the simplest still unsolvable problem. Front. Psychol., 2013.\n>and\n>G. Martius, R. Der, and N. Ay. Information driven self-organization of complex robotic behaviors. PLoS ONE, 8(5):e63400, 2013.\n\nyes, these are relevant papers indeed, which are cited in reviews we cite, but we added them for more coverage.", "> R1 \"The representation learning is only a preprocessing step requiring a magic first phase.\n> -> Representation is not updated during exploration\"\n> \"- only the ball/arrow was in the input image, not the robotic arm. 
I understand this because in phase 1 the robot would not move, but this connects to the next point:\"\n\nIndeed, representation is not updated during exploration, and as mentioned in the conclusion we think doing this is a very important direction for future work. However, we have two strong justification for this decomposition, that we added in the paper.\n\nFirst, we do not believe the preliminary pre-processing step is \"magical\". Indeed, if one studies the work from the developmental learning perspective outlined in the introduction, where one takes inspiration from the processes of learning in infants, then this decomposition corresponds to a well-known developmental progression: in their first few weeks, motor exploration in infants is very limited (due to multiple factors), while they spend a considerable amount of time observing what is happening in the outside world with their eyes (e.g. observing images of others producing varieties of effects on objects). During this phase, a lot of perceptual learning happens, and this is reused later on for motor learning (infant perceptual development often happens ahead of motor development in several important ways). In the article, the concept of \"social guidance\" presented in the introduction, and the availability of a database of observations of visual effects that can happen in the world, can be seen as a model of this first phase of infant learning by passively observing what is happening around them.\n\nA second justification for this decomposition is more methodological. It is mainly an experimental tool for better understanding what is happening. Indeed, the underlying algorithmic mechanisms are already quite complex, and analyzing what is happening when one decomposes learning in these two phases (representation learning, then exploration) is an important scientific step. Presenting in the same article another study where representations would be updated continuously would result in too much material to be clearly presented in a conference paper.\n\n> R1 \"the input space was very simple in all experiments, not suitable for distinguishing between the algorithms, for instance, ISOMap typically suffers from noise and higher dimensional manifolds\"\n\nThe use of the term \"simple\" depends on the perspective. From the perspective of a classical goal exploration process that would use the 4900 raw pixels as input, not knowing they are pixels and considering them similarly as when engineered representations are provided, then this is a complicated space and exploration is very difficult. At the same time, from the point of view of representation learning algorithms, this is indeed a moderately complex input space (yet, we on purpose did not consider convolutionnal auto-encoders so that the task is not too simplified and results could apply to other modalities such as sound or proprioception). Third, if one considers the dimensionality of the real sensorimotor manifold in which action is happening (2 for arm-ball, 3 for arm-arrow), this does not seem to us to be too unrealistic as many of real world sensorimotor tasks are actually happening in low-dimensional task spaces (e.g. rigid object manipulation happens in a 6D task space). So, overall we have chosen these experimental setups as we belive they are a good compromise between simplicity (enabling us to understand well what is happening) and complexity (if one considers the learner does not already knows that the stimuli are pixels of an image). 
", "> R1 \"an agent that has no intrinsic motivation other than trying to achieve random goals.\"\n\"There is nothing new with the intrinsically motivated selection of goals here, just that they are in another space. Also, there is no intrinsic motivation. I also think the title is misleading.\"\n\nThe concept of \"intrinsically motivated learning and exploration\" is not yet completely well-defined across (even computionational) communities, and we agree that the use of the term \"intrinsically motivated exploration\" in this article may seem unusual for some readers. However, we strongly think it makes sense to keep it for the following reasons.\n\nThere are several conceptual approaches to the idea of \"intrinsically motivated learning and exploration\", and we believe our use of the term intrinsic-motivation is compatible with all of them:\n\n- Focus on task-independance and self-generated goals: one approach of intrinsic motivation, rooted in its conceptual origins in psychology, is that it designates the set of mechanisms and behaviours of organized exploration which are not directed towards a single extrinsically imposed goal/problem (or towards fullfilling physiological motivations like food search), but rather are self-organized towards intrinsically defined objectives and goals (independant of physiological motivations like food search). From this perspective, mechanisms that self-generate goals, even randomly, are maybe the simplest and most prototypical form of intrinsically motivated exploration. \n \n- Focus on information-gain or competence-gain driven exploration: Other approaches consider that intrinsically motivated exploration specifically refers to mechanisms where choices of actions or goals are based on explicit measures of expected information-gain about a predictive model, or novelty or surprise of visited states, or competence gain for self-generated goals. In the IMGEP framework, this corresponds specifically to IMGEP implementations where the goal sampling procedure is not random, but rather based on explicit estimations of expected competence gain, like in the SAGG-RIAC architecture or in modular IMGEPs of (Forestier et al., 2017). In the experiments presented in this article, the choice of goals is made randomly as the focus is not on the efficiency of the goal sampling policy. However, it would be straightforward to use a selection of goals based on expected competence gain, and thus from this perspective the proposed algorithm adresses the general problem of how to learn goal representations in IMGEPs.\n\n- Focus on noverly/diversity search mechanisms: Yet another approach to intrinsically motivated learning and exploration is one that refers to mechanisms that organize the learner's exploration so that exploration of novel or diverse behaviours is fostered. A difference with the previous approach is that here one does not necessarily use internally a measure of novelty or diversity, but rather one uses it to characterize the dynamics of the behaviour. And an interesting property of random goal exploration implementations of IMGEPs is that while it does not measure explicitly novelty or diversity, it does in fact maximize it through the following mechanism: from the beginning and up to the point where the a large proportion of the space has been discovered, generating random goals will very often produce goals that are outside the convex hull of already discovered goals. 
This in turn mechanically leads to exploration of stochastic variants of motor programs that produce outcomes on the convex hull, which statistically pushes the convex hull further, and thus fosters exploration of motor programs that have a high probability to produce novel outcomes outside the already known convex hull. \n\n\n", "> R3 \"does not include significant explanation for the results\", \"The figure captions are all very \"matter-of-fact\" and, while they explain what each figure shows, provide no explanation of the results.\"\nWe agree. We have added several more detailed explanations of the results.\n\n> R3 \"why many of the deep representation techniques do not perform very well.\"\nWe think this comment is due to our unclear explanation of our main target combined with the use of a misleading measure (MSE). We hope the new explanation we provide, as well as the focus on exploration measures based on the KL divergence will enable to make it more clear that on the contrary several deep learning approaches are performing very well, some systematically outperforming the use of handcrafted goal space features (see the common answer to all reviewers).\n\n> R3 \"The authors assert that 10 dimensions was chosen arbitrarily for the size of the latent space, but this seems like a hugely important choice of parameter. What would happen if a dimension of 2 were chosen? Would the performance of the deep representation models improve? Would their performance rival that of RGE-FI?\"\n\nWe agree that this is a very important point. We have in the new version included results when one gives algorithms the right number of dimensions (2 for arm-ball, 3 for arm-arrow), and showing that providing more dimensions to IMGEP-UGL algorithms than the \"true\" dimensionality of the phenomenon can actually be beneficial (and we provide an explanation why this is the case). \n\n> \"The authors do not list how many observations they are given before the deep representations are learned. Why is this? Additionally, is it possible that not enough data was provided?\"\n\nFor each environments, we trained the networks with a dataset of 10.000 elements uniformly sampled in the underlying state-space. This corresponds to 100 samples per dimension for the 'armball' environment, and around 20 per dimension for the 'armarrow' environment. This is not far from the number of samples considered in the dsprite dataset, in which around 30 samples per dimensions are considered. Moreover, our early experiments showed that for those two particular problems, adding more data did not change the exploration results.\n\n> \"- The authors should motivate the algorithm on page 6 in words before simply inserting it into the body of the text. It would improve the clarity of the paper.\"\n\nWe have tried to better explain in words the general principles of this algorithm. \n\n> \"The authors need to be clearer about their notation in a number of places. For instance, they use gamma to represent the distribution of goals, yet it does not appear on page 7, in the experimental setup.\"\n\nWe have tried to correct these problems in notations.\n\n> \"It is never explicitly mentioned exactly how the deep representation learning methods will be used. It is pretty clear to those who are familiar with the techniques that the latent space is what will be used, but a few equations would be instructive (and would make the paper more self-contained).\"\n\nyes indeed. 
We have added some new explanations.\n", "We thank all reviewers for their detailed comments, which have helped us a lot to improve our paper. On one hand, we appreciate that all reviewers found the overall approach interesting and important.\nOn the other hand, we agree with reviewers that there were shortcomings in paper, and we thank them for pointing ways in which it could be improved, which we have attempted to do in the new version of the article, that includes both new explanations and new experimental results. \n\nThe main point of the reviewers was that our text did not identify concisely and clearly the main contributions and conclusions of this article, and in particular did not enable the reader to rank the importance and focus of these contributions (from our point of view). The comment of reviewer R1, summarizing our contributions, actually shows that we have not explained clearly enough what was our main target contribution (see below).\nWe have added an explicit paragraph at the end of the introduction to outline and rank our contributions, as well as a paragraph at the beginning of the experimental section to pin point the specific questions to which the experiments provide an answer. We hope the messages are now much clearer.\n\nAnother point was that our initial text contained too many metrics, and lacked justification of their choices and relative importance. We have rewritten the results sections by focusing in more depth on the most important metrics (related to our target contributions), updating some of them with more standard metrics, and removing some more side metrics. The central property we are interested in in this article is the dynamics and quality of exploration of the outcome space, characterizing the (evolution of the) distribution of discovered outcomes, i.e. the diversity of effects that the learner discovers how to produce. In the initial version of the article, we used an ad hoc measure called \"exploration ratio\" to characterize the evolution of the global quality of exploration of an algorithm. We have now replaced this ad hoc measure with a more principled and more precise measure: the KL divergence between the discovered distribution of outcomes and the distribution produced by an oracle (= uniform distribution of points over the reachable part of the outcome space). This new measure is more precise as it much better takes into account the set of roll-outs which do not make the ball/arrow move at all. In the new version of the article, we can now see that this more precise measure enables to show that several algorithms actually approximate extremely well the dynamics of exploration IMGEPs using a goal space with engineered features, and that even some IMGEP-UGL algorithms (RGE-VAE) systematically outperform this baseline algorithm. Furthermore, we have now included plots of the evolution of the distribution of discovered outcomes in individual runs to enable the reader to grasp more clearly the progressive exploration dynamics for each algorithms.\n\nAnother point was that the MSE measure used in the first version of the article was very misleading. Indeed, it did not evaluate the exploration dynamics, but rather it evaluated a peculiar way to reuse in combination both the discovered data points and the learned representation in a particular kind of test (raw target images were given to the learner). 
This was misleading because 1) we did not explain well that it was evaluating this as opposed to the main target of this article (distribution of outcomes); 2) this test evaluates a rather exotic way to reuse the discovered data points (previous papers reused the discovered data in other ways). This led R1 to infer that the algorithms were not working well in comparison with the “Full Information” (FI) baseline (now called EFR, for \"Engineered Feature Representation\"): on the contrary, several IMGEP-UGL algorithms actually perform better from the perspective we are interested in here. As the goal of this paper is not to study how the discovered outcomes can be reused for other tasks, we have removed the MSE measures." ]
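The KL-based exploration measure described in the authors' responses above can be estimated directly from samples by binning the outcome space. The sketch below is one possible Python implementation, not the authors' code: the bin count, the smoothing constant, and the uniform-box oracle are illustrative assumptions.

import numpy as np

def exploration_kl(outcomes, low, high, bins=30, eps=1e-8):
    # Empirical distribution of discovered outcomes over a binned 2D outcome space.
    hist, _, _ = np.histogram2d(outcomes[:, 0], outcomes[:, 1], bins=bins,
                                range=[[low[0], high[0]], [low[1], high[1]]])
    p = hist.flatten() + eps
    p /= p.sum()
    # Oracle distribution: uniform over the reachable region.
    q = np.full_like(p, 1.0 / p.size)
    # KL(discovered || oracle); lower means the discovered outcomes cover the space more uniformly.
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(0)
clustered = rng.uniform(0.0, 0.2, size=(1000, 2))   # poor exploration: outcomes stuck in a corner
spread = rng.uniform(0.0, 1.0, size=(1000, 2))      # good exploration: outcomes spread out
print(exploration_kl(clustered, (0.0, 0.0), (1.0, 1.0)))  # large
print(exploration_kl(spread, (0.0, 0.0), (1.0, 1.0)))     # near zero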
[ 7, 6, 7, -1, -1, -1, -1, -1, -1 ]
[ 4, 2, 4, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_S1DWPP1A-", "iclr_2018_S1DWPP1A-", "iclr_2018_S1DWPP1A-", "HJcQvaVef", "rywYwMaXz", "SkrQDzp7M", "Bk9oIe5gG", "ByvGgjhez", "iclr_2018_S1DWPP1A-" ]
iclr_2018_ryRh0bb0Z
Multi-View Data Generation Without View Supervision
The development of high-dimensional generative models has recently gained a great surge of interest with the introduction of variational auto-encoders and generative adversarial neural networks. Different variants have been proposed where the underlying latent space is structured, for example, based on attributes describing the data to generate. We focus on a particular problem where one aims at generating samples corresponding to a number of objects under various views. We assume that the distribution of the data is driven by two independent latent factors: the content, which represents the intrinsic features of an object, and the view, which stands for the settings of a particular observation of that object. Therefore, we propose a generative model and a conditional variant built on such a disentangled latent space. This approach allows us to generate realistic samples corresponding to various objects in a high variety of views. Unlike many multi-view approaches, our model doesn't need any supervision on the views but only on the content. Compared to other conditional generation approaches that are mostly based on binary or categorical attributes, we make no such assumption about the factors of variations. Our model can be used on problems with a huge, potentially infinite, number of categories. We experiment it on four images datasets on which we demonstrate the effectiveness of the model and its ability to generalize.
accepted-poster-papers
This paper presents an unsupervised GAN-based model for disentangling the multiple views of the data and their content. Overall it seems that this paper was well received by the reviewers, who find it novel and significant. The consensus is that the results are promising. There are some concerns, but the major ones listed below have been addressed in the rebuttal. Specifically: - R3 had a concern about the experimental evaluation, which has been addressed in the rebuttal. - R2 had a concern about a problem inherent in this setting (what is treated as “content”), and the authors have clarified in the discussion the assumptions under which such methods operate. - R1 had concerns related to how the proposed model fits in the literature. Again, the authors have addressed this concern adequately.
train
[ "r1Ojef4gf", "SyAnSJdxf", "r1aAVyagf", "SkaF99UXf", "HJpLqqIXf", "r1gec9IQG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The paper proposes a GAN-based method for image generation that attempts to separate latent variables describing fixed \"content\" of objects from latent variables describing properties of \"view\" (all dynamic properties such as lighting, viewpoint, accessories, etc). The model is further extended for conditional generation and demonstrated on a range of image benchmark data sets.\n\nThe core idea is to train the model on pairs of images corresponding to the same content but varying in views, using adversarial training to discriminate such examples from generated pairs. This is a reasonable procedure and it seems to work well, but also conceptually quite straightforward -- this is quite likely how most people working in the field would solve this problem, standard GAN techniques are used for training the generator and discriminator, and the network architecture is directly borrowed from Radford et al. (2015) and not even explained at all in the paper. The conditional variant is less obvious, requiring two kinds of negative images, and again the proposed approach seems technically sound.\n\nGiven the simplicity of the algorithmic choices, the potential novelty of the paper lies more in the problem formulation itself, which considers the question of separating two sets of latent variables from each other in setups where one of them (the \"view\") can vary from pair to pair in arbitrary manner and no attributes characterising the view are provided. This is an interesting problem setup, but not novel as such and unfortunately the paper does not do a very good job in putting it into the right context. The work is contrasted only against recent GAN-based image generation literature (where covariates for the views are often included) and the aspects related to multi-view learning are described only at the level of general intuition, instead of relating to the existing literature on the topic. The only relevant work cited from this angle is Mathieu et al. (2016), but even that is dismissed lightly by saying it is worse in generative tasks. How about the differences (theoretical and empirical) between the proposed approach and theirs in disentangling the latent variables? One would expect to see more discussion on this, given the importance of this property as motivation for the method.\n\nThe generative story using three sets of latent variables, one shared, to describe a pair of objects corresponds to inter-battery factor analysis (IBFA) and is hence very closely related to canonical correlation analysis as well (Tucker \"An inter-battery method of factor analysis\", Psychometrika, 1958; Klami et al. \"Bayesian canonical correlation analysis\", JMLR, 2013). Linear CCA naturally would not be sufficient for generative modeling and its non-linear variants (e.g. Wang et al. \"Deep variational canonical correlation analysis\", arXiv:1610.03454, 2016; Damianou et al. \"Manifold relevance determination\", ICML, 2012) would not produce visually pleasing generative samples either, but the relationship is so close that these models have even been used for analysing setups identical to yours (e.g. Li et al. \"Cross-pose face recognition by canonical correlation analysis\", arXiv:1507.08076, 2015) but with goals other than generation. Consequently, the reader would expect to learn something about the relationship between the proposed method and the earlier literature building on the same latent variable formulation. 
A particularly interesting question would be whether the proposed model actually is a direct GAN-based extension of IBFA, and if not then how does it differ. Use of adversarial training to encourage separation of latent variables is clearly a reasonable idea and quite likely does better job than the earlier solutions (typically based on some sort of group-sparsity assumption in shared-private factorisation) with the possible or even likely exception of Mathieu at al. (2016), and aspects like this should be explicitly discussed to extend the contribution from pure image generation to multi-view literature in general.\n\nThe empirical experiments are somewhat non-informative, relying heavily on visual comparisons and only satisfying the minimum requirement of demonstrating that the method does its job. The results look aesthetically more pleasing than the baselines, but the reader does not learn much about how the method actually behaves in practice; when does it break down, how sensitive it is to various choices (network structure, learning algorithm, amount of data, how well the content and view can be disentangled from each other, etc.). In other words, the evaluation is a bit lazy somewhat in the same sense as the writing and treatment of related work; the authors implemented the model and ran it on a collection of public data sets, but did not venture further into scientific reporting of the merits and limitations of the approach.\n\nFinally, Table 1 seems to have some min/max values the wrong way around.\n\n\nRevision of the review in light of the author response:\nThe authors have adequately addressed my main remarks, and while doing so have improved both the positioning of the paper amongst relevant literature and the somewhat limited empirical comparisons. In particular, the authors now discuss alternative multi-view generative models not based on GANs and the revised paper includes considerably extended set of numerical comparisons that better illustrate the advantage over earlier techniques. I have increased my preliminary rating to account for these improvements.", "This paper firstly proposes a GAN architecture that aim at decomposing the underlying distribution of a particular class into \"content\" and \"view\". The content can be seen as an intrinsic instantiation of the class that is independent of certain types of variation (eg viewpoint), and a view is the observation of the object under a particular variation. The authors additionally propose a second conditional GAN that learns to generate different views given a specific content. \n\nI find the idea of separating content and view interesting and I like the GMV and CGMV architectures. Not relying on manual attribute/class annotation for the views is also positive. The approach seems to work well for a relatively clean setup such as the chair dataset, but for the other datasets the separation is not so apparent. For example, in figure 5, what does each column represent in terms of view? It seems that it depends heavily on the content. That raises the question of how useful it is to have such a separation between content and views; for some datasets their diversity can be a bottleneck for this partition, making the interpretation of views difficult. 
\n\nA missing (supervised) reference that considers also the separation of content and views.\n[A] Learning to generate chairs with convolutional neural networks, Alexey Dosovitskiy, Jost Tobias Springenberg, Thomas Brox, CVPR 15\n\nQ:Figure 5, you mean \"all images in a column were generated with the same view vector\"\nQ: Why on Figure 7 you use different examples for CGAN?", "The paper proposes a new generative model based on the Generative Adversarial Network (GAN). The method disentangles the content and the view of objects without view supervision. The proposed Generative Multi-View (GMV) model can be considered to be an extension of the traditional GAN, where the GMV takes the content latent vector and the view latent vector as input. In addition, the GMV is trained to generate a pair of objects that share the content but with different views. In this way, the GMV successfully models the content and the view of the objects without using view labels. The paper also extends GMV into a conditional generative model that takes an input image and generates different views of the object in the input image. Experiments are conducted on four different datasets to show the generative ability of the proposed method.\n\nPositives:\n- The proposed method is novel in disentangling the content and the view of objects in a GAN and training the GAN with pairs of objects. By using pairs that share the content but with different views, the model can be trained successfully without using view labels.\n\n- The experimental results on the four datasets show that the proposed network is able to model the context and the view of objects when generating images of these objects.\n\nNegatives:\n- The paper only shows comparison between the proposed method and several baselines: DCGAN and CGAN. There is no comparison with methods that also disentangle the content from the view such as Mathieu et al. 2016.\n\n- For the comparison with CGAN in Figure 7, it would be better to show the results of C-GMV and CGAN on the same input images. Then it is easier for the readers to see the differences in the results from the two methods. ", "We thank the reviewer for the comments and feedback. We apologize for the late reply due to the large number of experiments that have been made to improve the quality of the paper.\n\nConcerning the fact that our generative model is “conceptually quite straightforward”, we would like to emphasis that the proposed paper is as far as we know the first paper to evaluate this idea of using a discriminator on pairs of outputs for the multiview problem, this discriminator being in charge of telling is the two outputs correspond to the same object. \n\nWe acknowledge the reviewer for pointing us this extensive literature on IBFA and on similar ideas in CCA and non linear variants of CCA. Of course our method is clearly related to this literature and we added this related work on the state if the art section. As suggested by the reviewer the assumption made by our method is very similar to the one made with IBFA models. The main difference being in the way the models are learned: by using ‘strong‘ regularization and particular factorization functions in the IBFA literature, or by using a discriminator in our case. Note also that most experiments in the IBFA literature are based on datasets where a limited finite number of possible views is provided while our model is evaluated on complex datasets with multiple possible views, without any available view supervision. 
A detailed discussion on this point has been added in Section 6. \n\nAbout Radford architecture. Yes we do reuse the architecture in [Radford et al., 2015] for the DCGAN architecture because the core idea of the paper is elsewhere, as it was the case for [Mathieu et al., 2016]. Actually the main features of our method, its ability to learn from data whose views are not aligned between objects and which are unlabeled comes from our particular learning scheme and the way we build pairs of examples. This is why we focus the presentation of our method on this particular way of constructing training examples for our models.\n\nPlease consider that we have added a large additional experimental section that objectively evaluates the quality of the generated samples of the different models (GMV, CMGV, GANx, CGAN and Mathieu et al.) in terms of quality of the outputs, and in terms of diversity of the generated samples, showing the superiority of our model w.r.t these baselines (new section 5.3, pages 11 to 13 of the new version)\n", "We thank the reviewer for the comments and feedback. We apologize for the late reply due to the large number of experiments that have been made to improve the quality of the paper.\n\nAs far as we understand, the main concern is about the fact that the interpretation of the notion of view can be difficult depending on the nature of the dataset. We agree on that point. Indeed, what we call ‘content’ in this paper corresponds to the invariant factors contained in a set of images representing a same object, the view corresponding to the remaining ‘changing’ factors. This is the assumption also made for the IBFA and CCA based approaches (see next review). We have added a discussion on this point in the paper in the literature review section. Note also that the more difficult interpretation of views in our work is the counterpart of the increased ability of the method to deal with various datasets.\n\nConcerning the suggested reference, our related work is focused on models that are not based on view supervision. \n\nNote that we have added a large additional experimental section that objectively evaluates the quality of the generated samples of the different models (GMV, CMGV, GANx, CGAN and Mathieu et al.) in terms of quality of the outputs, and in terms of diversity of the generated samples, showing the superiority of our model w.r.t these baselines (new section 5.3, pages 11 to 13 of the new version)\n", "We thank the reviewer for the comments and feedback. We apologize for the late reply due to the number of additional experiments that have been made to improve the quality of the paper.\n\nThe first concern of the reviewer is about the lack of comparisons with other techniques. We updated the paper with results obtained on the same tasks with the approach by Mathieu et al. 2016 which is the closest to ours. Note that we were able to obtain comparable quality of outputs using the Mathieu et al. model by carefully testing many different neural networks architectures, the ones being provided in the open-source implementation, provided by the authors being inefficient on our problems. The quality of the generated samples of the different models (GMV, CMGV, GANx, CGAN and Mathieu et al.) 
have been evaluated in terms of quality of the outputs, and in terms of diversity of the generated samples, showing the superiority of our model w.r.t these baselines (new section 5.3, pages 11 to 13 of the new version)\n\nWe have also taken care to illustrate samples of the different models based on the same input images to allow for a better qualitative comparison (Figure 8)\n" ]
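The pair-based training scheme described in the authors' responses above — a discriminator that judges whether two images depict the same object, trained on real pairs of views of one object versus pairs generated from a shared content vector and two independent view vectors — can be sketched as follows. This is a minimal illustration with fully connected networks and made-up dimensions, not the DCGAN-style architecture used in the paper.

import torch
import torch.nn as nn

C_DIM, V_DIM, IMG = 16, 8, 64 * 64   # illustrative sizes only

gen = nn.Sequential(nn.Linear(C_DIM + V_DIM, 256), nn.ReLU(), nn.Linear(256, IMG), nn.Tanh())
disc = nn.Sequential(nn.Linear(2 * IMG, 256), nn.ReLU(), nn.Linear(256, 1))  # scores an image pair
bce = nn.BCEWithLogitsLoss()

def generated_pair(batch):
    c = torch.randn(batch, C_DIM)                                   # content shared by both images
    v1, v2 = torch.randn(batch, V_DIM), torch.randn(batch, V_DIM)   # two independent view vectors
    x1 = gen(torch.cat([c, v1], dim=1))
    x2 = gen(torch.cat([c, v2], dim=1))
    return torch.cat([x1, x2], dim=1)

batch = 32
real_pairs = torch.rand(batch, 2 * IMG) * 2 - 1   # placeholder standing in for two real views of one object

# Discriminator: real same-object pairs vs. generated pairs.
d_loss = bce(disc(real_pairs), torch.ones(batch, 1)) + \
         bce(disc(generated_pair(batch).detach()), torch.zeros(batch, 1))
# Generator: fool the pair discriminator, which pushes the two outputs to look like views of one object.
g_loss = bce(disc(generated_pair(batch)), torch.ones(batch, 1))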
[ 7, 5, 7, -1, -1, -1 ]
[ 3, 4, 5, -1, -1, -1 ]
[ "iclr_2018_ryRh0bb0Z", "iclr_2018_ryRh0bb0Z", "iclr_2018_ryRh0bb0Z", "r1Ojef4gf", "SyAnSJdxf", "r1aAVyagf" ]
iclr_2018_SyYe6k-CW
Deep Bayesian Bandits Showdown: An Empirical Comparison of Bayesian Deep Networks for Thompson Sampling
Recent advances in deep reinforcement learning have made significant strides in performance on applications such as Go and Atari games. However, developing practical methods to balance exploration and exploitation in complex domains remains largely unsolved. Thompson Sampling and its extension to reinforcement learning provide an elegant approach to exploration that only requires access to posterior samples of the model. At the same time, advances in approximate Bayesian methods have made posterior approximation for flexible neural network models practical. Thus, it is attractive to consider approximate Bayesian neural networks in a Thompson Sampling framework. To understand the impact of using an approximate posterior on Thompson Sampling, we benchmark well-established and recently developed methods for approximate posterior sampling combined with Thompson Sampling over a series of contextual bandit problems. We found that many approaches that have been successful in the supervised learning setting underperformed in the sequential decision-making scenario. In particular, we highlight the challenge of adapting slowly converging uncertainty estimates to the online setting.
accepted-poster-papers
This paper is not aimed at introducing new methodologies (and does not claim to do so), but instead it aims at presenting a well-executed empirical study. The presentation and outcomes of this study are quite instructive, and with the ever-growing list of academic papers, this kind of study is a useful regularizer.
val
[ "rkI9YHhlz", "Hk6R4RIEM", "H11if2uxf", "HyxcSZ9lG", "rkN31yEVM", "BkRk8K_Mz", "SJniHtdMf", "B14GjL_GG", "ByW1BBdMf", "r1VQYmOMG", "SyCf87dGG", "BynWicwMz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public", "public", "author", "public" ]
[ "This paper presents the comparison of a list of algorithms for contextual bandit with Thompson sampling subroutine. The authors compared different methods for posterior estimation for Thompson sampling. Experimental comparisons on contextual bandit settings have been performed on a simple simulation and quite a few real datasets.\n\nThe main paper + appendix are clearly written and easy to understand. The main paper itself is very incomplete. The experimental results should be summarized and presented in the main context. There is a lack of novelty of this study. Simple comparisons of different posterior estimating methods do not provide insights or guidelines for contextual bandit problem. \n\nWhat's the new information provided by running such methods on different datasets? What are the newly observed advantages and disadvantages of them? What could be the fundamental reasons for the variety of behaviors on different datasets? No significant conclusions are made in this work.\n\nExperimental results are not very convincing. There are lots of plots show linear cumulative regrets within the whole time horizon. Linear regrets represent either trivial methods or not long enough time horizon.\n", "The refinements made it a stronger submission. The authors also promised to release the code for reproducibility as Reviewer 1 recommended. I'm happy to change the score from 4 to 5.", "If two major questions below are answered affirmatively, I believe this article could be very good contribution to the field and deserve publication in ICLR.\n\nIn this article the authors provide a service to the community by comparing the current most used algorithms for Thompson Sampling-based contextual (parametric) bandits on clear empirical benchmark. They reimplement the key algorithms, investing time to make up for the lack of published source code for some. \n\nAfter a clear exposure of the reasons why Thompson Sampling is attractive, they overview concisely the key ideas behind 7 different families of algorithms, with proper literature review. They highlight some of the subtleties of benchmarking bandit problems (or any active learning algorithms for that matter): the lack of counterfactual and hence the difference in observed datasets. They explain their benchmark framework and datasets, then briefly summarise the results for each class of algorithms. Most of the actual measures from the benchmark are provided in a lengthy appendix 12 pages appendix choke-full of graphs and tables.\n\nIt is refreshing to see an article that does not boast to offer the new \"bestest-ever\" algorithm in town, overcrowding a landscape, but instead tries to prune the tree of possibilities and wading through other people's inflated claims. To the authors: thank you! It is too easy to dismiss these articles as \"pedestrian non-innovative groundwork\": if there were more like it, our field would certainly be more readable and less novelty-prone.\n\nOf course, there is no perfect benchmark, and like every benchmark, the choices made by the authors could be debated to no end. At least, the authors try to explain them, and the tradeoffs they faced, as clearly as possible (except for two points mentioned below), which again is too rare in our field. \n\nMajor clarifications needed:\n\nMy two key questions are:\n* Is the code of good quality, with exact reproducibility and good potential extension in a standard language (e.g. Python)? This benchmark only gets its full interest if the code is publicised and well engineered. 
The open-sourcing is planned, according to footnote 1, is planned -- but this should be made clearer in the main text. There is no discussion of the engineering quality, not even of the language used, and this is quite important if the authors want the community to build upon this work. The code was not submitted for review, and as such its accessibility to new contributors is unknown to this reviewer. That could be a make or break feature of this work. \n* Is the hyper parameter tuning reproducible? Hyperparameter tuning should be discussed much more clearly (in the Appendix): while I appreciate the discussion page 8 of how they were frozen across datasets, \"they were chosen through careful tuning\" is way too short. What kind of tuning? Was it manual, and hence not reproducible? Or was it a clear, reproducible grid search or optimiser? I thoroughly hope for the later, otherwise an unreproducible benchmark would be very \n\nIf the answers to the two questions above is \"YES\", then brilliant article, I am ready to increase my score. However, if either is a \"NO\", I am afraid that would limit to how much this benchmark will serve as a reference (as opposed to \"just one interesting datapoint\").\n\n\nMinor improvements:\n* Please proofread some obvious typos: \n - page 4 \"suggesed\" -> \"suggested\", \n - page 8 runaway math environment wreaking the end of the sentence.\n - reference \"Meire Fortunato (2017)\" should be \"Fortunato et al. (2017)\", throughout.\n* Improve readability of figures' legends, e.g. Figure 2.(b) key is un-readable. \n* A simple table mapping the name of the algorithm to the corresponding article is missing. Not everyone knows what BBB and BBBN stands for.\n* A measure of wall time would be needed: while computational cost is often mentioned (especially as a drawback to getting proper performance out of variational inference), it is nowhere plotted. Of course that would partly depend on the quality of the implementation, but this is somewhat mitigated if all the algorithms have been reimplemented by the authors (is that the case? please clarify).", "The paper \"DEEP BAYESIAN BANDITS SHOWDOWN\" proposes a comparative study about bandit approaches using deep neural networks. \n\nWhile I find that such a study is a good idea, and that I was really interested by the listing of the different possibilities in the algorithms section, I regret that the experimental results given and their analysis do not allow the reader to well understand the advantages and issues of the approaches. The given discussion is not enough connected to the presented results from my point of view and it is difficult to figure out what is the basis of some conclusion.\n\nAlso, the considered algorithms are not enough described to allow the reader to have enough insights to fully understand the proposed arguments. Maybe authors should have focused on less algorithms but with more implementation details. Also, what does not help is that it is very hard to conect the names in the result table with the corresponding approaches (some abbreviations are not defined at all - BBBN or RMS for instances).\n\nAt last, the experimental protocol should be better described. For instance it is not clear on how the regret is computed : is it based on the best expectation (as done in most os classical studies) or on the best actual score of actions? 
The wheel bandit protocol is also rather hard to follow (and where is the results analysis?).\n\nOther remarks:\n - It is a pitty that expectation propagation approaches have been left aside since they correspond to an important counterpart to variational ones. It would have been nice to get a comparaison of both; \n - Variational inference decsription in section algorithms is not enough developped w.r.t. the importance of this family of approaches\n - Neural Linear is strange to me. Uncertainty does not consider the neural representation of inputs ? How does it work then ?\n - That is strange that \\Lambda_0 and \\mu_0 do not belong to the stated asumptions in the linear methods part (ok they correspond to some prior but it should be clearly stated)\n - Figure 1 is referenced very late (after figure 2)\n\n\n", "A new version of the paper is now available. We updated the initial submission based on the reviews and the feedback provided in the additional comments.\n\nThe main changes to the initial version are the following:\n\n- Implemented and tested an expectation-propagation algorithm (black-box alpha-divergence).\n- Implemented and tested the sparse GP algorithm.\n- Extended the algorithm description of variational inference methods, dropout, and sparse GPs. Also, we added the description of expectation-propagation methods.\n- Extended the explanation of priors used by linear models.\n- Extended the explanation of the Wheel bandit, and added explanatory plots to the main text.\n- Extended the example that compares BBB with linear methods versus PrecisionDiag, and added two outcome plots to the main text.\n- Updated and extended the experimental framework description, mainly metrics, regret, and hyper-parameter tuning.\n- Updated and extended the discussion section, putting more focus on linking the statements to the empirical results in the tables.\n\n- Added table that links names to specific algorithm configurations.\n- Added table with the running time required by each algorithm and dataset.\n- Added ranking column to cumulative regret table, where the mean ranking of each algorithm across datasets is shown. This way it is easier to parse the connection between the final big-picture conclusions and the empirical results.\n\n- Removed cumulative and simple regret plots, given their low information content.", "While collecting real-world datasets for a benchmark is challenging, the ones that we use are diverse. Some of them are not learnable or solvable (like Jester), while still of interest due to their practical applications (recommendation systems, in this case). For most datasets, we set the horizon to be the full size of the dataset, so it cannot be increased. The regret appears linear because these are simply hard problems. Some dataset-dependent conclusions can be drawn: the Gaussian process does well on small datasets where it can handle a large proportion of the data, whereas constant-SGD performs much better on larger data.\n\n[1] Jasper Snoek, Oren Rippel, Kevin Swersky, Ryan Kiros, Nadathur Satish, Narayanan Sundaram, Mostofa Patwary, Mr Prabhat, and Ryan Adams. Scalable Bayesian optimization using deep neural networks. In International Conference on Machine Learning, 2015.", "We thank the reviewer for their feedback. The reviewer raises several important concerns, which we address below.\n\nOverall, the main concerns were a lack of insightful conclusions/practical guidelines and that the paper relies too heavily on the appendix. 
Unfortunately, due to poor organization and writing, the insights we gained from the empirical benchmark were not made clear. We plan to significantly revise the paper for clarity. We briefly summarize our contributions and the insights we derived from the empirical results:\n\nSeveral recent papers claim to innovate on exploration with deep neural networks (e.g., two concurrent ICLR submissions: https://openreview.net/forum?id=ByBAl2eAZ, https://openreview.net/forum?id=rywHCPkAW). We argue that such innovations should be benchmarked against existing literature and baselines on simple decision making tasks (if the methods don’t improve on contextual bandits, how could they hope to improve in RL?). Our major contribution is this empirical comparison - a series of reproducible benchmarks with baseline implementations (all of which will be open sourced). We hope that the reviewer agrees that this empirical benchmark is a scientifically useful contribution.\n \nFrom the empirical benchmark, we find that:\n\n1) Variational approaches to estimate uncertainty in neural networks are an active area of research, however, to the best of our knowledge, there is no study that systematically benchmarks variational approaches in decision-making scenarios against other state-of-the-art approaches.\n\nFrom our evaluation, surprisingly, we find that Bayes by Backprop (BBB) underperforms even with a linear model. We demonstrate that because the method is simultaneously learning the representation and the uncertainty level, when faced with a limited optimization budget (for online learning), slow convergence becomes a serious concern. In particular, when the fitted model is linear, we evaluate the performance of a mean field model which we we can solve in closed form for the variational objective. We find that as we increase number of training iterations for BBB, it slowly converges to the performance of this exact method (Fig 25). We also see that the difference can be much larger than the degradation due to using a mean field approximation. We plan to move this experiment to the main text and expand upon the details.\n\nThis is not a problem in the supervised learning setting, where we can train until convergence. Unfortunately, in the online learning setting, this is problematic, as we cannot train for an unreasonable number of iterations at each step, so poor uncertainty estimates lead to bad decisions. Additionally, tricks to speed up convergence of BBB, such as initializing the variance parameters to a small value, distort uncertainty estimates and thus are not applicable in the online decision making setting.\n\nWe believe that these insights into the problems with variational approaches are of value to the community, and highlight the need for new ways to estimate uncertainty for online scenarios (i.e., without requiring great computational power). \n\n2) We study an algorithm, which we call NeuralLinear, that is remarkably simple, and combines two classic ideas (NNs and Bayesian linear regression). A very similar algorithm was used before in Bayesian optimization [1] and an independent ICLR submission (https://openreview.net/forum?id=Bk6qQGWRb) proposes nearly the same algorithm for RL. In our evaluation, NeuralLinear performs well across datasets. Our insight is that, once the learned representation is of decent quality, being able to exactly compute the posterior in closed form with something as simple as a linear model already leads to better decisions than most of the other methods. 
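As an editorial illustration of the NeuralLinear idea discussed in this response — a closed-form Bayesian linear regression on top of fixed network features, with actions chosen by Thompson sampling — here is a minimal sketch. The feature map, prior scales, and reward model are illustrative assumptions, not the benchmark's actual implementation.

```python
import numpy as np

# Minimal NeuralLinear-style sketch: per-action Bayesian linear regression on a
# fixed representation phi(x) (a stand-in for a network's last hidden layer),
# with Thompson sampling from the closed-form Gaussian posterior over weights.
# The observation-noise variance is fixed for brevity.

rng = np.random.default_rng(0)
d, n_actions, noise_var, prior_var = 8, 3, 0.25, 1.0
W_feat = rng.normal(size=(d, 5))          # placeholder "representation" weights

def phi(x):
    return np.tanh(W_feat @ x)            # stand-in for the learned features

# Per-action posterior stored as precision matrix and precision-weighted mean.
precision = [np.eye(d) / prior_var for _ in range(n_actions)]
b = [np.zeros(d) for _ in range(n_actions)]

def choose_action(x):
    z = phi(x)
    scores = []
    for a in range(n_actions):
        cov = np.linalg.inv(precision[a])            # posterior covariance
        mean = cov @ b[a]                             # posterior mean
        beta = rng.multivariate_normal(mean, cov)     # Thompson sample
        scores.append(beta @ z)
    return int(np.argmax(scores))

def update(x, a, reward):
    z = phi(x)
    precision[a] += np.outer(z, z) / noise_var
    b[a] += reward * z / noise_var

# Tiny synthetic interaction loop: action 1 is best for every context.
for t in range(200):
    x = rng.normal(size=5)
    a = choose_action(x)
    update(x, a, reward=float(a == 1) + 0.1 * rng.normal())
```

The appeal the authors point to is that, conditional on the representation, the posterior over the linear head is exact, so no variational approximation is needed for the uncertainty used in the decision.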
We believe this simple argument is novel and encourages further development of this promising approach.\n\n3) More generally, an interesting observation is that in many cases the stochasticity induced by stochastic gradient descent is enough to perform an implicit Thompson sampling. The greedy approach sometimes suffices (or conversely is equally bad as approximate inference). However, we also proposed the wheel problem, where the need for exploration is smoothly parameterized. In this case, we see that all greedy approaches fail.", "First, we would like to thank the reviewer for their feedback.\n\nWe acknowledge that the submitted version of the paper does not clearly connect the numerical results and our conclusions and claims. For the revision, we are focused on improving clarity. We plan to expand the discussion of the results and to add tables that summarize the relative ranking among algorithms across datasets to make comparison simpler.\n\nMoreover, we plan to extend the sections corresponding to algorithm descriptions and experimental setup. We also now include a table that explains the abbreviated algorithm names and hyperparameter settings (e.g., difference between RMS2 and RMS3, etc.).\n\nRegret is computed based on the best expected reward (as is standard). For some real datasets, the rewards were deterministic, in which case, both definitions of regret agree. We reshuffle the order of the contexts, and rerun the experiment a number of times to obtain the cumulative regret distribution and report its statistics. We now clarify this procedure in the experimental setup section.\n\nWe agree that the wheel bandit protocol was not clearly explained, and we have expanded the description. \n\nWe agree that expectation propagation methods are relevant to this study, so we have implemented the black-box alpha-divergence algorithm [1] and will add it to the study. \n\nNeuralLinear is based on a standard deep neural network. However, decisions are made according to a Bayesian linear regression applied to the features at the last layer of the network. Note that the last hidden layer representation determines the final output of the network via a linear function, so we can expect a representation that explains the expected value of an action with a linear model. For all the training contexts, their deep representation is computed, and then uncertainty estimates on linear parameters for each action are derived via standard formulas. Thompson sampling will sample from this distribution, say \\beta_t,i at time t for action i, and the next context will be pushed through the network until the last layer, leading to its representation c_t. Then, the sampled beta’s will predict an expected value, and the action with the highest prediction will be taken. Importantly, the algorithm does not use any uncertainty estimates on the representation itself (as opposed to variational methods, for example). On the other hand, the way the algorithm handles uncertainty conditional on the representation and the linear assumption is exact, which seems to be key to its success.\n\nWe will add a comment explaining the assumed prior for linear methods.\n\n[1] Hernández-Lobato, J. M., Li, Y., Rowland, M., Hernández-Lobato, D., Bui, T., and Turner, R. E. (2016). Black-box α-divergence minimization. 
In International Conference on Machine Learning.", "Personally, I agree with reviewer1, and believe this work could be very good contribution to the community for benchmarking the algorithms, if the implementation is of quality, reproducible and public available. Good luck.\n\n", "We appreciate your interest and feedback on this paper. Given the number of algorithms we compare, we unfortunately could not give a complete treatment of the background on each method. \n\n-\"why (Neal, 1994) is cited for SGLD?\"\nNeal is cited for SGLD because in his thesis he proposes Langevin dynamics (and HMC) for neural networks. He experiments in Section 3.5.1 (page 103) with what he refers to as \"partial gradients\", which are gradient updates computed from single examples. While he doesn't refer to it directly as stochastic gradient Langevin dynamics, we think it's fair to consider this the seed of the basic idea. Teh and Welling also cite that work in the SGLD paper. \n\n- \"SGLD is the mini-batch version of Langevin dynamics (1st order), while SGHMC is the mini-batch version of HMC (2nd order).\"\nThis seems like splitting hairs... I appeal to Neal on the connection to Langevin dynamics. From Neal, (Section 5.2 of https://arxiv.org/pdf/1206.1901.pdf): \"The Langevin method: A special case of Hamiltonian Monte Carlo arises when the trajectory used to propose a new state consists of only a single leapfrog step.\" Nevertheless, we will try to make our wording more precise. A mention of SG-HMC seems warranted - however we didn't consider the methods because of the higher order terms involved.\n\n- \"It seems the correct reference is \"Li, C., Chen, C., Carlson, D.E. and Carin, L. Preconditioned Stochastic Gradient Langevin Dynamics for Deep Neural Networks, AAAI 2016\", NOT \"Li, Ke, Swersly, Kevin, Ryan, and Zemel, Richard S. Efficient feature learning using perturb-and-map. NIPS Workshop on Perturbations, Optimization, and Statistics, 2013.\"\nThanks for finding that! Yes, that should certainly be the other Li et al.\n\n\n- \"2. Connection between SGLD (in paragraph \"Monte Carlo\") and injecting noise on model parameters (in paragraph \"Direct Noise Injection\"). In terms of update rule, these two algorithms seem very similar: the update quantity in both consists of two parts: the gradient term and Gaussian noise term\" \nThere are subtle (and tremendously interesting) connections between many of the methods presented here. Parameter noise is also related to variational inference (if the posterior is assumed to be a diagonal unit variance Gaussian). Yes, one can see how parameter noise (adding noise to the weights) can be thought of as related to SGLD (adding noise to the gradient updates). It is e.g. easy to formalize the distinction between these two with a linear model with squared error loss. You will see that the scale of the noise added is very different and this is compounded with non-linearities. However, the major difference is that SGLD is running a Markov chain while each sample from parameter noise is a draw from a diagonal Gaussian centered at the mean. Thus SGLD can be shown to sample from the posterior (albeit under some strong assumptions) while parameter noise draws samples from a Gaussian approximation centered around the MAP estimate (the variance of which is left as a hyperparameter).\n\n\"Could the authors clarify the connections and compare them empirically if they are different?\"\nAs stated above, they are rather different. 
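To make the distinction drawn in this exchange concrete, a schematic comparison of the two update rules follows; the toy log-posterior and step sizes are arbitrary choices for illustration only.

```python
import numpy as np

# Schematic contrast between the two rules discussed above, for one parameter
# vector theta. SGLD perturbs gradient *updates* and runs a Markov chain;
# parameter noise draws independent samples around a point estimate.

rng = np.random.default_rng(1)

def grad_log_post(theta):
    # Toy log-posterior: standard Gaussian, so grad log p(theta) = -theta.
    return -theta

def sgld_step(theta, step):
    # SGLD: half-step gradient ascent on log p plus N(0, step) noise; with
    # decreasing steps the chain approximately samples the posterior.
    noise = rng.normal(size=theta.shape) * np.sqrt(step)
    return theta + 0.5 * step * grad_log_post(theta) + noise

def parameter_noise_sample(theta_map, sigma):
    # Parameter noise: a Gaussian centred at the point estimate; sigma is a
    # hyperparameter and no chain is involved.
    return theta_map + sigma * rng.normal(size=theta_map.shape)

theta = np.zeros(3)
for _ in range(1000):
    theta = sgld_step(theta, step=0.05)
print("SGLD sample:        ", theta)
print("param-noise sample: ", parameter_noise_sample(np.zeros(3), sigma=0.1))
```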
The paper is essentially an empirical comparison of the different methods and they behave tremendously differently empirically.", "We thank the reviewer for carefully reading the manuscript and for their thoughtful feedback.\n\nTo address the primary concerns:\n\n1 - The code is written in Python and Tensorflow, and will be committed to a well-known Anonymized open source library. Currently, the code is going through third party code review within our organization and is subject to a high quality standard. We designed the implementation so that adding new algorithms and rerunning the benchmark is straightforward for an external contributor.\n\n2 - We agree that making the hyperparameter selection reproducible is essential. To this end, we will re-run the experiments doing the following: 1) we will choose two representative datasets and apply Bayesian optimization to find parameters for each algorithm based on the results from the training datasets. Then, we will freeze these parameters for the remaining datasets and report numbers (and parameters) on these heldout datasets. We will update this post when we have revised the manuscript with the new numbers.\n\nFinally, we have fixed the typos and improved the figures' legends. We added a table mapping algorithm names to their meaning and parameters. We agree that a table showing wall clock time for each algorithm is highly informative, and we plan to add that to the revised manuscript.\n\nWe confirm that the authors reimplemented all of the algorithms.\n", "Two comments:\n\n1. The description on the literature on Stochastic Gradient MCMC seems not accurate: \n\n\"A variety of methods have been developed to approximate HMC using mini-batch\nstochastic gradients, These Stochastic Gradient Langevin Dynamics (SGLD) methods (Neal, 1994;\nWelling & Teh, 2011) add Gaussian noise...\"\n\nSGLD is the mini-batch version of Langevin dynamics (1st order), while SGHMC is the mini-batch version of HMC (2nd order). Also, why (Neal, 1994) is cited for SGLD?\n\n\"Li et al. (2013) show that a preconditioner based on the RMSprop algorithm performs well\non deep neural networks.\"\n\nIt seems the correct reference is \"Li, C., Chen, C., Carlson, D.E. and Carin, L. Preconditioned Stochastic Gradient Langevin Dynamics for Deep Neural Networks, AAAI 2016\", NOT \"Li, Ke, Swersly, Kevin, Ryan, and Zemel, Richard S. Efficient feature learning using perturb-and-map. NIPS Workshop on Perturbations, Optimization, and Statistics, 2013.\"\n\n2. Connection between SGLD (in paragraph \"Monte Carlo\") and injecting noise on model parameters (in paragraph \"Direct Noise Injection\")\n\nIn terms of update rule, these two algorithms seem very similar: the update quantity in both consists of two parts: the gradient term and Gaussian noise term. Could the authors clarify the connections and compare them empirically if they are different?\n\n" ]
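For readers who want the evaluation protocol debated in this thread in one place, here is a sketch of replaying shuffled contexts and accumulating regret against the best expected reward of each context. The linear reward model and the context-free epsilon-greedy baseline are assumptions of this illustration, not the benchmark's code.

```python
import numpy as np

# Sketch of the evaluation loop: replay shuffled contexts, let a simple policy
# pick arms, and accumulate regret against the best *expected* reward for each
# context (not the best realised draw), averaged over several reshuffled runs.

n_arms, dim, horizon = 4, 6, 2000
true_theta = np.random.default_rng(0).normal(size=(n_arms, dim))

def run_once(seed, eps=0.05):
    gen = np.random.default_rng(seed)
    contexts = gen.normal(size=(horizon, dim))   # fresh shuffle/resample per run
    sums, counts = np.zeros(n_arms), np.zeros(n_arms)
    regret = 0.0
    for x in contexts:
        mu = true_theta @ x                      # expected reward of each arm
        if gen.random() < eps or counts.min() == 0:
            a = int(gen.integers(n_arms))
        else:
            a = int(np.argmax(sums / counts))    # context-free empirical means
        r = mu[a] + 0.1 * gen.normal()           # observed, noisy reward
        sums[a] += r
        counts[a] += 1
        regret += mu.max() - mu[a]               # pseudo-regret
    return regret

runs = [run_once(s) for s in range(10)]
print("cumulative regret: mean %.1f +- %.1f" % (np.mean(runs), np.std(runs)))
```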
[ 5, -1, 7, 6, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 5, -1, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_SyYe6k-CW", "BkRk8K_Mz", "iclr_2018_SyYe6k-CW", "iclr_2018_SyYe6k-CW", "iclr_2018_SyYe6k-CW", "SJniHtdMf", "rkI9YHhlz", "HyxcSZ9lG", "r1VQYmOMG", "BynWicwMz", "H11if2uxf", "iclr_2018_SyYe6k-CW" ]
iclr_2018_H15odZ-C-
Semantic Interpolation in Implicit Models
In implicit models, one often interpolates between sampled points in latent space. As we show in this paper, care needs to be taken to match-up the distributional assumptions on code vectors with the geometry of the interpolating paths. Otherwise, typical assumptions about the quality and semantics of in-between points may not be justified. Based on our analysis we propose to modify the prior code distribution to put significantly more probability mass closer to the origin. As a result, linear interpolation paths are not only shortest paths, but they are also guaranteed to pass through high-density regions, irrespective of the dimensionality of the latent space. Experiments on standard benchmark image datasets demonstrate clear visual improvements in the quality of the generated samples and exhibit more meaningful interpolation paths.
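The abstract's claim about interpolation paths rests on how Gaussian mass behaves in high dimensions; a quick numerical check, with arbitrarily chosen dimensions and sample counts, illustrates it.

```python
import numpy as np

# A standard Gaussian in d dimensions puts almost all of its mass in a thin
# shell of radius about sqrt(d), so the near-origin region crossed by linear
# interpolation between samples is unlikely under the prior.

rng = np.random.default_rng(7)
for d in (2, 10, 100, 1000):
    z = rng.normal(size=(100000, d))
    norms = np.linalg.norm(z, axis=1)
    print("d=%4d  mean norm %.2f  (sqrt(d)=%.2f)  relative std %.3f"
          % (d, norms.mean(), np.sqrt(d), norms.std() / norms.mean()))
```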
accepted-poster-papers
The paper presents a modified sampling method for improving the quality of interpolated samples in deep generative models. There is not a great amount of technical contribution in the paper; however, it is written in a very clear way, makes interesting observations and analyses, and shows promising results. Therefore, it should be of interest to the ICLR community.
train
[ "S16ZxNFgz", "BJDXbk5lM", "rka1Lw2xf", "r1y8QktmG", "BJjJmJY7G", "H18cf1Y7G", "rkPw6C_mG", "ByNZgIKgG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public" ]
[ "The paper concerns distributions used for the code space in implicit models, e.g. VAEs and GANs. The authors analyze the relation between the latent space dimension and the normal distribution which is commonly used for the latent distribution. The well-known fact that probability mass concentrates in a shell of hyperspheres as the dimensionality grows is used to argue for the normal distribution being sub-optimal when interpolating between points in the latent space with straight lines. To correct this, the authors propose to use a Gamma-distribution for the norm of the latent space (and uniform angle distribution). This results in more mass closer to the origin, and the authors show both that the midpoint distribution is natural in terms of the KL divergence to the data points, and experimentally that the method gives visually appealing interpolations.\n\nWhile the contribution of using a standard family of distributions in a standard implicit model setup is limited, the paper does make interesting observations, analyses and an attempt to correct the interpolation issue. The paper is clearly written and presents the theory and experimental results nicely. I find that the paper can be accepted but the incremental nature of the contribution prevents a higher score.", "The authors discuss a direct Gamma sampling method for the interpolated samples in GANs, and show the improvements over usual normal sampling for CelebA, MNIST, CIFAR and SVHN datasets.\n\nThe method involves a nice, albeit minor, trick, where the chi-squared distribution of the sum of the z_{i}^{2} has its dependence on the dimensionality removed. However I am not convinced by the distribution of \\|z^\\prime\\|^{2} in the first place (eqn (2)): the samples from the gaussian will be approximately orthogonal in high dimensions, but the inner product will be at least O(1). Thus although the \\|z_{0}\\|^{2} and \\|z_{1}\\|^{2} are chi-squared/gamma, I don't think \\|z^\\prime\\|^{2} is exactly gamma in general.\n\nThe experiments do show that the interpolated samples are qualitatively better, but a thorough empirical analysis for different dimensionalities would be welcome. Figures 2 and 3 do not add anything to the story, since 2 is just a plot of gamma pdfs and 3 shows the difference between the constant KL and the normal case that is linear in d. \n\nOverall I think the trick needs to be motivated better, and the experiments improved to really show the import of the d-independence of the KL. Thus I think this paper is below the acceptance threshold.", "The authors propose the use of a gamma prior as the distribution over \nthe latent representation space in GANs. The motivation behind it is that \nin GANs interpolating between sampled points is common in the process of generating examples but the use of a normal prior results in samples that fall in low probability mass regions. The use of the proposed gamma distribution, as a simple alternative, overcomes this problem. \n\nIn general, the proposed work is very interesting and the idea is neat. \nThe paper is well presented and I want to underline the importance of this. \nThe authors did a very good job presenting the problem, motivation and solution in a coherent fashion and easy to follow. \n\nThe work itself is interesting and can provide useful alternatives for the distribution over the latent space. \n", "Thank you for your observation. 
We do think this would be an interesting direction for extending our work.", "Dear Reviewer,\nThank you for your positive review.\n", "Dear Reviewer,\nThank you for your positive review.", "We thank the reviewer for their feedback and answer their concerns and requests below.\n\n1. distribution of \\| z^\\prime \\|\nThank you for pointing this out, we made a slight change to our original submission to clarify this point and corrected a minor mistake concerning the degrees of freedom of the Gamma distribution (which incorrectly had an additional factor of 2). Nevertheless, we would like to emphasize that, as claimed in our original submission, the distribution of \\| z^\\prime \\| is indeed a gamma distribution. This can be seen as follows. First recall that for any z_0 and z_1 vectors drawn from a Gaussian distribution, their squared lengths follows a gamma distribution (this is just the definition of a Gamma distribution which models a sum of the squares of independent standard normal random variables). Then consider the average (z_0 + z_1) / 2 discussed in the paper, this average is then again a gaussian vector (since the sum of two independent normally distributed random variables is normal), so its squared length must also be gamma. Note that the same applies if we scale z_i with a factor sqrt(gamma)/||z_i||. We’ve adjusted the method section for these corrections.\n\n2. “Thorough empirical analysis for different dimensionalities would be welcome”\nWe have now added a new section named “Effects of the Latent Space Dimensionality” in the experiments section (we also provide more results in the appendix) where we show examples of straight traversals for GANs trained using different latent dimensionalities. We observe that for low dimensional latent spaces, both the normal and gamma priors produce results where the interior regions seem to produce meaningful samples. However, as the dimensionality grows, the mid-points in the normal-prior GANs quickly degrade, whereas the GANs trained using the gamma prior do not.\n\n3. “Figures 2 and 3 do not add anything to the story”\nWe agree, they merely served to illustrate points that were already made. We have removed the figures.\n\n4. “Trick needs to be motivated better and the experiments improved to really show the improvement of the d-independence of the KL”\nSee answer above, we think the newly added section in the experiments does show a clear improvement in terms of independence to the latent dimension. These results are also in accordance with the theoretical predictions made in the paper and we, therefore, believe that both the theory and experiments do motivate the use of the Gamma prior we advocate in the paper.", "This seems very interesting. A quick question, do you think it would be useful to also learn the rate parameter of your Gamma distribution rather than fixing it? This could be achieved by e.g.\nNaesseth, Ruiz, Linderman, Blei, \"Reparameterization Gradients through Acceptance-Rejection Sampling Algorithms\", 2017." ]
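The exchange above turns on how the norm of latent codes behaves under different priors. The following small experiment, with an arbitrarily chosen Gamma shape and scale, illustrates both the reviewer's concern and the authors' point about interpolation midpoints.

```python
import numpy as np

# For Gaussian latents the norm concentrates near sqrt(d), so midpoints of
# linear interpolations have markedly smaller norm than the endpoints. Drawing
# the norm from a fixed, dimension-independent Gamma and the direction
# uniformly on the sphere keeps midpoint norms comparable to endpoint norms.

rng = np.random.default_rng(3)
d, n = 100, 10000

def gaussian_latents(n, d):
    return rng.normal(size=(n, d))

def gamma_norm_latents(n, d, shape=2.0, scale=1.0):
    direction = rng.normal(size=(n, d))
    direction /= np.linalg.norm(direction, axis=1, keepdims=True)  # uniform direction
    radius = rng.gamma(shape, scale, size=(n, 1))                  # d-independent norm
    return direction * radius

for name, sampler in [("gaussian  ", gaussian_latents),
                      ("gamma-norm", gamma_norm_latents)]:
    z0, z1 = sampler(n, d), sampler(n, d)
    mid = 0.5 * (z0 + z1)
    print(name,
          "endpoint norm ~ %.2f" % np.linalg.norm(z0, axis=1).mean(),
          " midpoint norm ~ %.2f" % np.linalg.norm(mid, axis=1).mean())
```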
[ 6, 5, 7, -1, -1, -1, -1, -1 ]
[ 3, 4, 4, -1, -1, -1, -1, -1 ]
[ "iclr_2018_H15odZ-C-", "iclr_2018_H15odZ-C-", "iclr_2018_H15odZ-C-", "ByNZgIKgG", "S16ZxNFgz", "rka1Lw2xf", "BJDXbk5lM", "iclr_2018_H15odZ-C-" ]
iclr_2018_B1X0mzZCW
Fidelity-Weighted Learning
Training deep neural networks requires many training samples, but in practice training labels are expensive to obtain and may be of varying quality, as some may be from trusted expert labelers while others might be from heuristics or other sources of weak supervision such as crowd-sourcing. This creates a fundamental quality-versus-quantity trade-off in the learning process. Do we learn from the small amount of high-quality data or the potentially large amount of weakly-labeled data? We argue that if the learner could somehow know and take the label-quality into account when learning the data representation, we could get the best of both worlds. To this end, we propose “fidelity-weighted learning” (FWL), a semi-supervised student-teacher approach for training deep neural networks using weakly-labeled data. FWL modulates the parameter updates to a student network (trained on the task we care about) on a per-sample basis according to the posterior confidence of its label-quality estimated by a teacher (who has access to the high-quality labels). Both student and teacher are learned from the data. We evaluate FWL on two tasks in information retrieval and natural language processing where we outperform state-of-the-art alternative semi-supervised methods, indicating that our approach makes better use of strong and weak labels, and leads to better task-dependent data representations.
accepted-poster-papers
This paper introduces a student-teacher method for learning from labels of varying quality (i.e. varying fidelity data). This is an interesting idea which shows promising results. Some further connections to various kinds of semi-supervised and multi-fidelity learning would strengthen the paper, although understandably it is not easy to cover the vast literature, which also spans different scientific domains. One reviewer had a concern about some design decisions that seemed ad-hoc, but at least the authors have intuitively and experimentally justified them.
train
[ "rk-GXLRgz", "H1dQodKgf", "ByHbM4qlG", "SkVyr0imz", "H1fhykJ7G", "rJSfGk17z", "H1-tW1yXf", "B1SWkkk7G", "Bkwb0C0fM", "B1wri00MM", "HkQooCCzG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author" ]
[ "This paper suggests a simple yet effective approach for learning with weak supervision. This learning scenario involves two datasets, one with clean data (i.e., labeled by the true function) and one with noisy data, collected using a weak source of supervision. The suggested approach assumes a teacher and student networks, and builds the final representation incrementally, by taking into account the \"fidelity\" of the weak label when training the student at the final step. The fidelity score is given by the teacher, after being trained over the clean data, and it's used to build a cost-sensitive loss function for the students. The suggested method seems to work well on several document classification tasks. \n\nOverall, I liked the paper. I would like the authors to consider the following questions - \n\n- Over the last 10 years or so, many different frameworks for learning with weak supervision were suggested (e.g., indirect supervision, distant supervision, response-based, constraint-based, to name a few). First, I'd suggest acknowledging these works and discussing the differences to your work. Second - Is your approach applicable to these frameworks? It would be an interesting to compare to one of those methods (e.g., distant supervision for relation extraction using a knowledge base), and see if by incorporating fidelity score, results improve. \n\n- Can this approach be applied to semi-supervised learning? Is there a reason to assume the fidelity scores computed by the teacher would not improve the student in a self-training framework?\n\n- The paper emphasizes that the teacher uses the student's initial representation, when trained over the clean data. Is it clear that this step in needed? Can you add an additional variant of your framework when the fidelity score are computed by the teacher when trained from scratch? using different architecture than the student?\n \n - I went over the authors comments and I appreciate their efforts to help clarify the issues raised.", "The problem of interest is to train deep neural network models with few labelled training samples. The specific assumption is there is a large pool of unlabelled data, and a heuristic function that can provide label annotations, possibly with varying levels of noises, to those unlabelled data. The adopted learning model is of a student/teacher framework as in privileged learning/knowledge distillation/model compression, and also machine teaching. The student (deep neural network) model will learn from both labelled and unlabelled training data with the labels provided by the teacher (Gaussian process) model. The teacher also supplies an uncertainty estimate to each predicted label. How about the heuristic function? This is used for learning initial feature representation of the student model. Crucially, the teacher model will also rely on these learned features. Labelled data and unlabelled data are therefore lie in the same dimensional space. \n\nSpecific questions to be addressed:\n1)\tClustering of strongly-labelled data points. Thinking about the statement “each an expert on this specific region of data space”, if this is the case, I am expecting a clustering for both strongly-labelled data points and weakly-labelled data points. Each teacher model is trained on a portion of strongly-labelled data, and will only predict similar weakly-labelled data. On a related remark, the nice side-effect is not right as it was emphasized that data points with a high-quality label will be limited. 
As well, GP models, are quite scalable nowadays (experiments with millions to billions of data points are available in recent NIPS/ICML papers, though, they are all rely on low dimensionality of the feature space for optimizing the inducing point locations). It will be informative to provide results with a single GP model. \n2)\tFrom modifying learning rates to weighting samples. Rather than using uncertainty in label annotation as a multiplicative factor in the learning rate, it is more “intuitive” to use it to modify the sampling procedure of mini-batches (akin to baseline #4); sample with higher probability data points with higher certainty. Here, experimental comparison with, for example, an SVM model that takes into account instance weighting will be informative, and a student model trained with logits (as in knowledge distillation/model compression). \n", "The authors propose an approach for training deep learning models for situation where there is not enough reliable annotated data. This algorithm can be useful because correct annotation of enough cases to train a deep model in many domains is not affordable. The authors propose to combine a huge number of weakly annotated data with a small set of strongly annotated cases to train a model in a student-teacher framework. The authors evaluate their proposed methods on one toy problem and two real-world problems. The paper is well written, easy to follow, and have good experimental study. My main problem with the paper is the lack of enough motivation and justification for the proposed method; the methodology seems pretty ad-hoc to me and there is a need for more experimental study to show how the methodology work. Here are some questions that comes to my mind: (1) Why first building a student model only using the weak data and why not all the data together to train the student model? To me, it seems that the algorithm first tries to learn a good representation for which lots of data is needed and the weak training data can be useful but why not combing with the strong data? (2) What are the sensitivity of the procedure to how weakly the weak data are annotated (this could be studied using both toy example and real-world examples)? (3) The authors explicitly suggest using an unsupervised method (check Baseline no.1) to annotate data weakly? Why not learning the representation using an unsupervised learning method (unsupervised pre training)? This should be at least one of the baselines.\n(4) the idea of using surrogate labels to learn representation is also not new. One example work is \"Discriminative Unsupervised Feature Learning with Exemplar Convolutional Neural Networks\". The authors didn't compare their method with this one.", "Here we summarise a list of the changes and additions we made to the revised version of our manuscript. Each part is explained in detail in the responses to each question of each reviewer under the corresponding comment. \n\n- Section 2: providing more intuition and justification on the current setup of FWL in the description of the step#1 of the algorithm\n- Section 2: quick pointer to the details of the clustered GP in the description of the step#2 of the algorithm\n- Section 3: adding one more baseline and its corresponding results and related discussions\n- Section 4: new small subsection: 4.3. Sensitivity of the FWL to the Quality of the Weak Annotator\n- Section 4: new small subsection: 4.4. 
From Modifying the Learning Rate to Weighted Sampling\n- Section 5: adding some related works\n- Appendix A: more explanation and some experiments backing up the rationale behind clustered GP ", "First of all, we would like to thank the reviewer for the valuable suggestions and comments. Here we respond to the questions one by one:\n\nQ0- My main problem with the paper is the lack of enough motivation and justification for the A0- proposed method; the methodology seems pretty ad-hoc to me and there is a need for more experimental study to show how the methodology work. \n\nA0- As we have discussed in the introduction section of the manuscript, the motivation is making the efficient use of training data to achieve better performance on test data. The situation where training data consists of a small set with good labels and a large set with weak labels is fairly common. For instance, in large scale classification tasks (e.g. ImageNet), a large number of people annotate images via Amazon Mechanical Turk. We can assume the labels generated by AMT are weak because we are not sure whether the distant labelers were concentrated enough or in a good mood or not. In addition, we may have a smaller set of labelers who are experts and concentrated on the annotation task. In this sense, we can consider the labels generated by the second group as strong labels. Our proposed framework can be used in cases where this split of the dataset into {small and strong labels, large and weak labels} is possible.  We tested our framework on NLP and IR tasks because the weak labels can be generated by a well-known heuristic function. However, the weak labels can also be generated by a separate set of mass labelers whose performance is weaker than a small set of strong labelers as was pointed out in the answer to the previous question. As another example in the field of machine vision, we can think of a pre-trained weak classifier which acts based on hand crafted features like SIFT or HoG. This trained classifier can then be used to label a large set of images and assign each image a weak label. In this framework, SIFT-based classifier substitutes the heuristic weak annotator of our paper. The other components and the overall framework remain unchanged. \n\nHere are some questions that comes to my mind:  \nQ1- Why first building a student model only using the weak data and why not all the data together to train the student model? To me, it seems that the algorithm first tries to learn a good representation for which lots of data is needed and the weak training data can be useful but why not combining with the strong data? \n\nA1- We had experiments when both weak and strong data is used to build the representation (let’s call it mixed setup).  For both tasks, we observed no statistically significant difference (based on paired two-tailed t-test) between the performance of mixed setup and the setup proposed in the manuscript where only weak data is used to build the representation. 
Here are the results of these experiments:\n\nRanking task:\n[Robust04 dataset:  Map=0.3105   / nDCG@20=0.46211]  \n[ClueWeb dataset:  Map=0.1456  / nDCG@20=0.2439] \n\nSentiment Classification task:\n[SemEval-14: F1= 0.7474  ]\n[SemEval-15: F1= 0.6811 ]\n\nWe think the overall scores do not change since the strong data will eventually contribute to the parameter updates of the representation learning layer of the student model in step#3 (in Figure 1.c, the representation layer in the student model benefits from the gradient updates of samples from D_{sw}, which includes data with strong labels as well). Considering the fact that the final scores do not change significantly, we choose the current exposition as it is more generic in the sense that student does not need to see the strong labels in the step#1. This is important especially when weak and strong data are not available together due to, for instance, privacy issues. \n\nQ2- What are the sensitivity of the procedure to how weakly the weak data are annotated (this could be studied using both toy example and real-world examples)? \n\nA2- In the original version of the submission, we have a small experiment in section “4.1 Handling The Bias-Variance Trade-off”,  in which instead of f(x) = 2sinc(x), we use f(x) = x + 1  as a weaker annotator and we observed worse performance in particular for high values of the parameter \\beta. In this experiment,  we actually aimed at studying the effect of parameter \\beta.  We agree that having analysis on the sensitivity of the FWL to the quality of the weak annotation is beneficial, so we added a subsection, “4.3. The sensitivity of the FWL to the Quality of the Weak Annotator”, to the revised version of the submission in which we discussed the performance of FWL on the task of ranking, given four weak annotators with different accuracies.  As it is expected, the performance of FWL depends on the quality of the employed weak annotator. We also observed that the better the performance of the weak annotator was, the less the improvement of FWL over its corresponding weak annotator on test data would be.", "First of all, we would like to thank the reviewer for the valuable suggestions and comments. Here we respond to the questions one by one:\n\nQ1- Clustering of strongly-labelled data points. Thinking about the statement “each an expert on this specific region of data space”, if this is the case, I am expecting a clustering for both strongly-labelled data points and weakly-labelled data points. Each teacher model is trained on a portion of strongly-labelled data, and will only predict similar weakly-labelled data. On a related remark, the nice side-effect is not right as it was emphasized that data points with a high-quality label will be limited. As well, GP models, are quite scalable nowadays (experiments with millions to billions of data points are available in recent NIPS/ICML papers, though, they are all rely on low dimensionality of the feature space for optimizing the inducing point locations). It will be informative to provide results with a single GP model.\n\nA1- Regarding clustered GP: We used Sparse Gaussian Process implemented in GPflow to build our entire implementation in tensorflow. As the respected reviewer has mentioned, the algorithm is scalable in the sense that it is not O(N^3) as original GP is. It introduces inducing points in the data space and defines a variational lower bound for the marginal likelihood. 
The variational bound can now be optimized by stochastic methods which make the algorithm applicable to large datasets. However, the tightness of the bound depends on the locations which are found through the optimization process. We empirically observed that a single GP does not give a satisfactory accuracy on left-out test dataset. We hypothesized that this can be due to the inability of the algorithm to find good inducing points when only a few of them are available. Then we increased the number of inducing points which trades off the scalability of the algorithm because it scales with O(NM^2) where M is the number of inducing points. We guess this can be due to the observation that our datasets are distributed in a highly sparse way within the high dimensional embedding space. We also tried to cure the problem by means of PCA to reduce input dimension but it did not result in a considerable improvement. Due to this empirical evidence, we clustered the truly labeled dataset and used a separate GP for each cluster. The overall performance of the algorithm improved and we may be able to argue that clustered GP makes use of the data structure roughly close to the idea of KISS-GP[1]. In inducing-point methods (with m inducing points and n training samples), normally it is assumed that m<<n for computational and storage saving. However, we have this intuition that few number of inducing points make the model unable to explore the inherent structure of data. By employing several GPs, we were able to use a large number of total inducing points even m>n which seemingly better exploits the structure of datasets. Because our work was not aimed to be a close investigation of GP, we considered clustered GP as the engineering side of the work which is a tool to give us a measure of confidence. Other tools such as a single GP with inducing points that form a Kronecker or Toeplitz covariance matrix are also conceivable. Therefore, we do not of course claim that we have proposed a new method of inference for GP because of the lack of theoretical reasoning for the use of multiple GPs. In the end, as was asked by the reviewer, the result for single GP is included as a part of clustered GP section in the Appendix A (Detailed description of clustered GP) in the revised manuscript showing the effectiveness of the the initial clustering and local GPs. Moreover, an abstract of this response is also added to the in Appendix A and also main text to enlighten the reason for using clustered GP.\n-----\n[1] Wilson, A. and Nickisch, H., 2015, June. Kernel interpolation for scalable structured Gaussian processes (KISS-GP). In International Conference on Machine Learning (pp. 1775-1784).", "\nRegarding the point: “I am expecting a clustering for both strongly-labelled data points and weakly-labelled data points”: In FWL, during training, in step#2, we train multiple GPs as the teacher using only the data with “strong labels” which is a rather small set. In step#3, we go through both data with strong and weak labels and for each data point, we assign each point to a teacher based on the centroid of the teacher’s corresponding cluster. Therefore, each teacher predicts the new label in its territory. The predicted labels are almost the same as the original labels for the strongly labeled points and hopefully better labels for the weakly labeled data points. The confidence for the newly labeled point is also reported by its corresponding GP (teacher). 
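A rough sketch of the clustered-GP teacher described in this response, using scikit-learn in place of the sparse GPflow models the authors mention; the data, kernel, and the mapping from predictive standard deviation to a confidence score are all illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Cluster the small strongly-labelled set in the student's representation
# space, fit one GP per cluster, then route any point to the GP of the nearest
# centroid to obtain a corrected soft label plus a confidence score.

rng = np.random.default_rng(4)
n_strong, dim, n_clusters = 300, 16, 5

X_strong = rng.normal(size=(n_strong, dim))        # stand-in for student features
y_strong = np.sin(X_strong[:, 0]) + 0.05 * rng.normal(size=n_strong)

km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X_strong)
teachers = []
for c in range(n_clusters):
    mask = km.labels_ == c
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-2)
    teachers.append(gp.fit(X_strong[mask], y_strong[mask]))

def teacher_label(x):
    c = int(np.argmin(np.linalg.norm(km.cluster_centers_ - x, axis=1)))
    mean, std = teachers[c].predict(x[None, :], return_std=True)
    confidence = 1.0 / (1.0 + std[0])              # any monotone map of std would do
    return mean[0], confidence

label, conf = teacher_label(rng.normal(size=dim))
print("teacher label %.3f with confidence %.3f" % (label, conf))
```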
\nClustering the weakly-labeled data points means having multiple student models as well. However, during the recall, we do not want to have multiple students which is computationally and space-wise prohibitive. Having a separate student corresponding to each teacher prevent the makes each student almost blind with respect to other clusters which is not desirable. The single student defined in our framework enables it to have a holistic view of the entire input space. We want our main task to be solved by a single student which is assumed expressive enough. The entire framework is designed to help this single student to settle on a better local optimum enjoying multiple teachers in the distillation framework. One important point here is that in FWL the teacher can be implemented by any predictor that can provide uncertainty for its prediction [1]. Even though, we resort to GP and a small strongly labeled dataset to capture this uncertainty, we should argue that the concept is applicable also when the uncertainty signal is provided from outside the dataset.\n-----\n[1] Anonymous, Deep Neural Networks as Gaussian Processes, under submission at ICLR2018, https://openreview.net/forum?id=B1EA-M-0Z\n\nQ2- From modifying learning rates to weighting samples. Rather than using uncertainty in label annotation as a multiplicative factor in the learning rate, it is more “intuitive” to use it to modify the sampling procedure of mini-batches (akin to baseline #4); sample with higher probability data points with higher certainty. Here, experimental comparison with, for example, an SVM model that takes into account instance weighting will be informative, and a student model trained with logits (as in knowledge distillation/model compression). \n\nA2- We think the suggested comparison is the case mainly when samples are seen by the model with the frequency proportional to the certainty of their label. \nWe designed a new experiment in which, we kept the architectures of the student and the teacher and the procedure of the first two steps of the FWL fixed. We changed the step#3 as follows: For each sample in D_{sw} (dataset consisting of strongly and weakly labeled data points which are relabeled by the teacher and each label is associated with a confidence), we normalize the confidence scores for all training samples and set the normalized score of each sample as its probability to be sampled. Afterwards, we train student model by sampling mini-batches from D_{sw} with respect to the probabilities associated with each sample, without considering their confidence as a multiplicative factor for the learning rate.\nThis means that more confident the teacher is about the generated label for each sample, the more chance that sample has to be seen by the student model. \nWe have added a new subsection, “4.4. From modifying the learning rates to weighted sampling”, to the revised manuscript to report our observations. Based on the results, compared to the original FWL, the performance of FWL with sampling increases rapidly in the beginning but it slows down afterward. We have looked into the sampling procedure and noticed that the confidence scores provided by the teacher form a rather skewed distribution and there is a strong bias toward sampling from data points that are either in or closed to the points in the dataset with strong labels, as GP has less uncertainty around these points and the confidence scores are high. 
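To make the two step-3 variants concrete — scaling each update by the teacher's confidence versus sampling mini-batches in proportion to it — here is a toy comparison with a linear student. The Beta-distributed confidences mimic the skewed scores mentioned above, and the confidence-to-step-size mapping is a simplification of the paper's, not a reproduction of it.

```python
import numpy as np

# Toy comparison of confidence-weighted updates vs. confidence-proportional
# sampling, for a linear student on teacher-relabelled data (X, y, conf).

rng = np.random.default_rng(5)
n, dim, batch, base_lr, epochs = 2000, 10, 32, 0.05, 5
X = rng.normal(size=(n, dim))
y = X @ rng.normal(size=dim) + 0.1 * rng.normal(size=n)
conf = rng.beta(0.5, 0.5, size=n)                  # skewed teacher confidences

def weighted_grad(w, xb, yb, cb):
    # Each sample's squared-error gradient is scaled by its confidence.
    resid = xb @ w - yb
    return 2 * (xb * (cb * resid)[:, None]).mean(axis=0)

def train_fidelity_weighted():
    w = np.zeros(dim)
    for _ in range(epochs):
        order = rng.permutation(n)
        for i in range(0, n, batch):
            idx = order[i:i + batch]
            w -= base_lr * weighted_grad(w, X[idx], y[idx], conf[idx])
    return w

def train_confidence_sampled():
    # Plain updates, but examples are drawn with probability proportional to confidence.
    w, p = np.zeros(dim), conf / conf.sum()
    for _ in range(epochs):
        for _ in range(n // batch):
            idx = rng.choice(n, size=batch, p=p)
            w -= base_lr * weighted_grad(w, X[idx], y[idx], np.ones(batch))
    return w

print("fidelity-weighted :", np.round(train_fidelity_weighted()[:3], 3))
print("confidence-sampled:", np.round(train_confidence_sampled()[:3], 3))
```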
We observed that the performance of the FWL with sampling gets closer to the performance of FWL after many epochs, while FWL had already a long convergence. The skewness of the confidence distribution makes FWL with sampling to have a tendency for more exploitation than exploration, however, FWL has more chance to explore the input space, while it controls the effect of updates on the parameters for samples based on their merit. \nWe believe FWL with sampling can be improved by having a better strategy for sampling from a skewed distribution or using approaches for active learning and selective sampling which is out of the scope of this paper. ", "\nQ3- The authors explicitly suggest using an unsupervised method (check Baseline no.1) to annotate data weakly? Why not learning the representation using an unsupervised learning method (unsupervised pre-training)? This should be at least one of the baselines.\n\nA3- The suggestion in this comment is to learn the representation of the data, in the first step, in an unsupervised manner.  \nWe have already tried this idea for both tasks, i.e. removing the first step and replacing it with learning the representation in an unsupervised (or  self-supervised feature learning) way: In the document ranking task, as the representation of documents and queries, we use weighted averaging over pre-trained embeddings of their words based on their inverse document frequency [1]. In the sentiment analysis task, we use skip-thoughts [2] which tries to estimate representation for sentences by defining a surrogate task in which given a sentence, the goal is to predict the sentence before and after, using autoencoders. We have used these representations as the input for the GP in step#2 and #3.  In both tasks, the performance drops dramatically. There might be two main reasons for that:\n1.\tLearning the representation of the input data downstream of the main task that we are going to solve leads to representations that are better suited to the FWL (in terms of the communications between teacher and student) compared to a task-independent unsupervised way.\n2.\tThe other reason for losing performance by replacing the first step with learning representation in an unsupervised (self-supervised) way is that although the main goal of step#1 is to learn a representation of the data for the given task, we pretrain all the parameters of the student network (not just the representation layer) in that step. So as mentioned in the paper, in step#3 “[...] for data points where the teacher is not confident, we down-weight the training steps of the student. This means that at these points, we keep the student function as it was trained on the weak data in Step#1.” In summary, the first step initializes the layers of both embedding network and classification network.\nWe have added the aforementioned experiments as extra baselines (baseline #7, FWL_unsuprep) to the paper and in the results and discussions (section 3.2 and 3.3), we elaborate more on the importance of the FWL setup for step#1.\n------------\n[1] Mostafa Dehghani, Hamed Zamani, Aliaksei Severyn, Jaap Kamps, and W. Bruce Croft. Neural ranking models with weak supervision. In SIGIR’17, 2017.\n[2] Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. Skip-thought vectors. NIP2015, 2015.", "\nQ4- the idea of using surrogate labels to learn representation is also not new. 
One example work is \"Discriminative Unsupervised Feature Learning with Exemplar Convolutional Neural Networks\". The authors didn't compare their method with this one.\n\nA4- Thanks for pointing out this paper.  The referred work is based on self-supervised feature learning in which the idea is to exploit different labelings that are freely available besides or within the data by defining a surrogate task which uses the intrinsic signals to learn better (e.g. generic, robust, descriptive and invariant) features. The learned features are then transferred to be used for a supervised task (e.g. object classification or description matching). We argue that in FWL we do not learn representation through a proxy task. We learn the representation (and pretrain the student model) downstream of the main task, but with pseudo-labels (noisy labels).  Nonetheless, we can say that representation learning in step#1 is solving a surrogate task of approximating the expert knowledge, for which a noisy supervision signal is provided by the weak annotator.  \nIn the response to the previous question, we have added a baseline for the sentiment classification task in which a surrogate task is used to learn the representation (see A3).  Furthermore, we discussed the advantage of the current setup of the step#1, i.e. learning the representation downstream of a task same as the final target task that we want to solve but with lower accuracy in the labels. Here we again summarize them and add one more point:\n1.\tUsing the main task with weak labels in step#1 leads to a representation that complies better with the target task.\n2.\tIn the current setup of the step#1, we also pretrain the student model, so representation learning is actually part of student pre-training in a weakly supervised manner (in A3, we explained why this is needed).\n3.\tIn addition to the above points, the definition of the surrogate task in self-supervision depends on the problem to be solved. For instance, if the surrogate task is defined such that it yields features invariant to color, it cannot be used to differentiate objects with different colors. However, in our setup the step#1 is seen as a surrogate task is inherently in accordance with the main task (in fact they are the same, but with different accuracy in the label space) and we do not need to think about the suitable surrogate task for the feature learning phase. \nLooking to the FWL from the perspective of self-supervised feature learning is pretty interesting and valuable to mention. We have added this point (Section 2, where we are explaining step#1) and the related papers (in related work section) to the revised version of the submission to include this point of view as well.", "\nQ5- The paper emphasizes that the teacher uses the student's initial representation when trained over the clean data. Is it clear that this step in needed? Can you add an additional variant of your framework when the fidelity sores are computed by the teacher when trained from scratch? using a different architecture than the student?\n\nA5- In the current model, first the representation of the data is learned by the student using weakly annotated data, then, using the learned representation, we fit the teacher on the data with strong (true) labels. 
Providing the teacher with the learned representation by the student has three main reasons:\nFirst of all, it has been shown that we can learn effective representation of the data if we have a large quantity of data available, this can be either by learning the distribution of the data using unlabeled example, or learning representation of the data downstream of the main task using a large set of weakly labeled data. However, for many tasks using just a small amount of data with true labels, we will not be able to model the underlying distribution of the data. Since in FWL, the teacher is trained only on data with strong labels (which is a small set), sharing the learned representation of the previous step alleviates this problem and the teacher can enjoy the learned knowledge from the large quantity of the weakly annotated data. \nIn our setup, we make use of a Gaussian Process as the teacher. It is an interesting direction to search for meaningful kernels on structured data, i.e strings in our case. There exist some works to define such non-vectorial kernels that are designed by experts and are domain specific [2,3]. However, our goal here is to learn the representation along with solving the main classification task. Even though sme papers connect Gaussian process and deep neural networks, we are not aware of a reliable method for end to end training to learn the input features of GP better than the features learned by a neural network. So we do not learn the representation of the data as part of the step#2, but borrow it from step#1. Likewise, we do not learn the kernels of the GPs and only learn the vectorial representation of their inputs. \nAs another possible advantage of the current setup, we let the teacher see the data through the student's lens. This may in particular help the teacher, in step#3, to provide better annotation (and confidence) for the training of the student when the teacher is aware of the idea of the student about metric properties of the input space learned in step#1. Note that the input representation of the student is trained in the step#3 and is not fully identical with that of the teacher which is kept fixed. We tested the case where the teacher used the input representation of the student in step#3 but the accuracy dropped considerably. We ascribe this observation to the covariate shift in the input of a trained GP.\n\nWe made these points more explicit in the revised version of the submission (Section 2, where we are explaining step#2). A variant of FWL would be to use one (or more) neural network(s) as the teacher [1] to be able to learn representation in step#2, but we believe it is necessary for the teacher and student to both agree on the metric space they see in the input.\n\n[1] Anonymous, Deep Neural Networks as Gaussian Processes, under submission at ICLR2018, https://openreview.net/forum?id=B1EA-M-0Z\n[2] Eskin E, Weston J, Noble WS, Leslie CS. Mismatch string kernels for SVM protein classification. InAdvances in neural information processing systems 2003 (pp. 1441-1448).\n[3] Gärtner T. A survey of kernels for structured data. ACM SIGKDD Explorations Newsletter. 2003 Jul 1;5(1):49-58", "First of all, we would like to thank the reviewer for the valuable suggestions and comments. Here we respond to the questions one by one:\n\nQ1- Over the last 10 years or so, many different frameworks for learning with weak supervision were suggested. 
\nFirst, I'd suggest acknowledging these works and discussing the differences to your work.\n\nA1- In the revised manuscript, we’ve included some of the main works in the area of “learning with weak supervision” (some of which had been left out in the original submission due to the page limit) in the related work section and discussed how FWL is related to them. \n\nQ2- Second - Is your approach applicable to these frameworks? It would be an interesting to compare to one of those methods (e.g., distant supervision for relation extraction using a knowledge base), and see if by incorporating fidelity score, results improve.\n\nA2- In general, in order to employ FWL, we need a large set of data with weak labels. These weak labels can be devised using methods like distant supervision, indirect supervision, constraint-based supervision, etc.\nIn our paper, for instance, for the ranking task, we use a weak annotator based on a heuristic function that can be considered as a form of distant supervision, and for the classification task, as the weak annotation, we use labels in the word level to infer labels in the sentence level which can be kind of considered as an indirect supervision approach with a slightly different setup. \nThe interesting question would be how different approaches for providing the weak annotation may affect the performance of FWL, and in a more general perspective, how sensitive is FWL to the quality of the weak annotations. In the original submission, in section “4.1 Handling The Bias-Variance Trade-off”, we included a simple analysis on how employing different weak annotators with different qualities (in terms of accuracy on test data) affects the performance of FWL in the toy problem. Since this point is also raised by one of the other reviewers, we add extra analysis to the revised version for the ranking task, “4.3. The sensitivity of the FWL to the Quality of the Weak Annotator”. The analysis shows that the achieved improvement by FWL over the weak annotator decreases in the presence of a more accurate weak annotator. The reason could be the hypothesis that a good annotator makes better use of data and leaves less room for improvement by the teacher. \n\nQ3- Can this approach be applied to semi-supervised learning? \n\nA3- Yes. In fact, the proposed approach is applicable in the semi-supervised setup, where we can define one or more so-called “weak annotators”, to provide additional (albeit noisy) sources of weak supervision for unlabeled data. This can be done based on heuristics rules, or using a “weaker” or biased classifiers trained on e.g. non-expert crowd-sourced data or data from different domains that are related, or distant supervision where an external knowledge source is employed to devise labels. Providing such weak annotations is possible for a large class of tasks that are considered to be solved in the semi-supervised learning setup. \n\nQ4- Is there a reason to assume the fidelity scores computed by the teacher would not improve the student in a self-training framework?\n\nA4- If we correctly understood, the question here is “In what circumstances does taking the confidence (fidelity) score by FWL into account yield no improvement or even hurt the performance of the student model, while it learns from weakly annotated data?”. \nThis could be an interesting direction that merits more detailed investigation. 
The failures of applying the confidence score are either estimating a high confidence score for a bad training label (case#1), or estimating a low confidence for a good training label (case#2). From the set of controlled experiments, we have done in particular on the toy problem, we found that probably due to the Bayesian nature of the teacher in FWL, case#1 is less likely to happen compared to case#2. The explanation is that generation of bad labels is mostly due to the lack of enough strong data to fit a good GP. When the number of data points on which the GP is fitted is extremely low, the uncertainty is high almost all over the space leading to low confidences. So, in most cases, bad labels come with fairly low confidence.\nIn our design, case#2 would not happen for samples with strong labels (since GP is fitted on them and the uncertainty is almost zero at those points), however, the teacher might reject good weak examples by assigning a low confidence score to them. This is not a crucial situation as 1. generating extra weak examples is not expensive in our setup, 2. the rejected weak example has already contributed to the parameter updates of the student in the pretraining with its original weak label (step #1). Nonetheless, having lots of case#2 leads to slower convergence of the model during training.\n" ]
[ 7, 5, 6, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 3, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_B1X0mzZCW", "iclr_2018_B1X0mzZCW", "iclr_2018_B1X0mzZCW", "iclr_2018_B1X0mzZCW", "ByHbM4qlG", "H1dQodKgf", "H1dQodKgf", "ByHbM4qlG", "ByHbM4qlG", "rk-GXLRgz", "rk-GXLRgz" ]
iclr_2018_SJzRZ-WCZ
Latent Space Oddity: on the Curvature of Deep Generative Models
Deep generative models provide a systematic way to learn nonlinear data distributions through a set of latent variables and a nonlinear "generator" function that maps latent points into the input space. The nonlinearity of the generator implies that the latent space gives a distorted view of the input space. Under mild conditions, we show that this distortion can be characterized by a stochastic Riemannian metric, and we demonstrate that distances and interpolants are significantly improved under this metric. This in turn improves probability distributions, sampling algorithms and clustering in the latent space. Our geometric analysis further reveals that current generators provide poor variance estimates and we propose a new generator architecture with vastly improved variance estimates. Results are demonstrated on convolutional and fully connected variational autoencoders, but the formalism easily generalizes to other deep generative models.
accepted-poster-papers
This paper characterizes the induced geometry of the latent space of deep generative models. The motivation is well established, and the paper convincingly discusses the usefulness of the resulting insights. For example, the results uncover issues with the currently used methods for variance estimation in deep generative models. The technique invoked to mitigate this issue does feel somewhat ad hoc, but at least it is well motivated. One of the reviewers correctly pointed out that there is limited novelty in the theoretical/methodological aspect. However, I agree with the authors’ rebuttal in that characterizing geometries on stochastic manifolds is much less studied and demonstrated, especially in the deep learning community. Therefore, I believe that this paper will be found useful by readers of the ICLR community, and will stimulate future research.
train
[ "SyC3QhVgf", "r1dsxRSxG", "HJOIiwjlz", "SkdEk8KfM", "rJpn3fVZf", "rk-_6MEZM", "SkM8pzVZf", "HkbMaz4Wz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "The paper investigates the geometry of deep generative models. In particular, it describes the geometry of the latent space when giving it the (stochastic) Riemannian geometry inherited from the embedding in the input space described by the generator function. The authors describe the geometric setting, how distances in the latent space can be interpreted with the non-Euclidean geometry, and how interpolation, probability distributions and random walks can be constructed.\n\nWhile the paper makes a decent presentation of the geometry of the generative setting, it is not novel. It is well known that (under certain conditions) the mapping described by the generator function is a submanifold of the input space. The latent space geometry is nothing but the submanifold geometry the image f(Z) inherits from the Euclidean geometry of X. Here f is the generator mapping f:Z->X. The latent space distances and geodesics corresponds to distances and geodesics on the submanifold f(Z) of X. Except for f being stochastic, the geometry is completely standard. It is not surprising that distances inherited from X are natural since they correspond to the Euclidean length of minimal curves in the input space (and thus the data representation) when restricting to f(Z).\n\nI cannot identify a clear contribution or novelty in the paper which is the basis for my recommendation of rejection of the paper.", "In the paper the authors analyse the latent space generated by the variational autoencoder (VAE). They show that this latent space is imbued by a Riemannian metric and that this metric can be easily computed in terms of mean and variance functions of the corresponding VAE. They also argue that the current variance estimates are poor in regions without data and propose a meaningful variance function instead. In the experiments section the authors evaluate the quality and meaningfulness of the induced Riemannian metric.\n\nThere are minor grammatical errors and the paper would benefit from proofreading.\n\nIn the introduction the authors argue that points from different classes being close to each other is a misinterpretation of the latent space. An argument against would be that such a visualisation is simply bad. A better visualisation would explain the data structure without the need for an additional visualisation of the metric of the latent space.\n\nIn section 2, the multiplication symbol (circle with dot inside) is not defined.\n\nIt is not clear from the paper what the purpose of eq. 7 is, as well as most of the section 3. Only in appendix C, it is mentioned that eq. 7 is solved numerically to compute Riemannian distances, though it is still not clear how exactly this is achieved. I think this point should be emphasized and clarified in section 3.\n\nIn section 4, it says proof for theorem 1 is in appendix B. Appendix B says it proves theorem 2. Unfortunately, it is not clear how good the approximation in eq. 9 is.\n\nIs theorem 1 an original result by the authors? Please emphasize.\n\nIn Fig. 6, why was 7-NN used, instead of k-means, to colour the background?\n\nI think that the result from the theorem 1 is very important, since the estimation of the Riemannian metric is usually very slow. In this regard, it would be very interesting to know what the total computational complexity of the proposed approach is.\n", "The paper makes an important observation: the generating function of a generative model (deep or not) induces a (stochastic) Riemannian metric tensor on the latent space. 
This metric might be the correct way to measure distances in the latent space, as opposed to the Euclidean distance.\n\nWhile this seems obvious, I had actually always thought of the latent space as \"unfolding\" the data manifold as it exists in the output space. The authors propose a different view which is intriguing; however, they do not, to the best of my understand, give a definitive theoretical reason why the induced Riemannian metric is the correct choice over the Euclidean metric.\n\nThe paper correctly identifies an important problem with the way most deep generative models evaluate variance. However the solution proposed seems ad-hoc and not particularly related to the other parts of the paper. While the proposed variance estimation (using RBF networks) might work in some cases, I would love to see (perhaps in future work) a much more rigorous treatment of the subject.\n\nPros:\n1. Interesting observation and mathematical development of a Riemannian metric on the latent space.\n\n2. Good observation about the different roles of the mean and the variance in determining the geodesics: they tend to avoid areas of high variance.\n\n3. Intriguing experiments and a good effort at visualizing and explaining them. I especially appreciate the interpolation and random walk experiments. These are hard to evaluate objectively, but the results to hint at the phenomena the authors describe when comparing Euclidean to Riemannian metrics in the latent space.\n\nCons:\n1. The part of the paper proposing new variance estimators is ad-hoc and is not experimented with rigorously, comparing it to other methods in terms of calibration for example. \n\nSpecific comments:\n1. To the best of my understanding eq. (2) does not imply that the natural distance in Z is locally adaptive. I think of eq (2) as *defining* a type of distance on Z, that may or may not be natural. One could equally argue that the Euclidean distance on z is natural, and that this distance is then pushed forward by f to some induced distance over X. \n\n2. In the definition of paths \\gamma, shouldn't they be parametrized by arc-length (also known as unit-speed)? How should we think of the curve \\gamma(t^2) for example?\n\n3. In Theorem 2, is the term \"input dimension\" appropriate? Perhaps \"data dimension\" is better?\n\n4. I did not fully understand the role of the LAND model. Is this a model fit AFTER fitting the generative model, and is used to cluster Z like a GMM ? I would appreciate a clarification about the context of this model.", "We have updated the paper with the following two changes:\n\n1) As promised to AnonReviewer1 the paper has been professionally proof-read.\n\n2) AnonReviewer3 asked about the proposed variance function based on RBF networks. We have added an extra experiment to Appendix C which demonstrate that our proposed model (besides improving the latent geometry) improves marginal likelihood of held-out data. While we proposed this variance function to get a well-behaved geometry, this experiment shows that it also generally improves density modeling compared to standard models. ", "We thank all reviewers for their comments. We will reply to these comments separately.", "\"I cannot identify a clear contribution or novelty in the paper which is the basis for my recommendation of rejection of the paper.\"\n\nIt is true that if f:Z->X is a sufficiently well-behaved deterministic function then it is completely standard differential geometric analysis to show that it generates a submanifold of R^D. 
This is, however, not the point of the present paper. For VAEs (and related models), the generator function f is stochastic and then standard geometric analysis no longer applies (now we get random submanifolds; a topic that is not commonly discussed). A brief summary of our contributions are then:\n\n*) We show how the expected metric of the stochastic generator has a particularly appealing form, where the Jacobian of the generator mean captures the shape of the submanifold, while the Jacobian of the generator variance pushes geodesics towards regions of high data density.\n\n*) We show that standard estimators of generator variance are rather arbitrary and do not make much sense. We provide a simple approach for improving this that works quite well in our settings, such the learned geometry becomes useful.\n\n*) We show that in a deep learning context Riemannian metrics are quite useful for getting a better understanding of the latent space. Furthermore, we demonstrate that knowledge of the geometry improves results on many tasks where latent variable models are commonly used today.\n\nWe would argue that these are all substantial contributions. From the reviewer comments it would appear that our insights were already known; if this is indeed the case, we would appreciate specific pointers to the literature where the stochastic Riemannian metric of deep generative models are discussed. We are not aware of any such previous publications.\n", "\"...An argument against would be that such a visualisation is simply bad....\"\n\nWe agree that this visualization is bad, yet it is ever-present in the literature as it reflect the Euclidean structure that is almost-always imposed on the latent space (e.g. it is common to add and subtract latent vectors for style transfer). We use this visualization merely to show that the Euclidean assumption need not be particularly good.\n\nAs a side-remark, we find that having the volume measure of the metric as a background color (as done in Figs. 3, 5, 7, 8 and 10) helps quite a bit as a simple visualization tool. Chris Bishop's \"magnification factor\" for visualizing the GTM model is identical (except the GTM has constant generator-variance), so there is some experience in the community already with such a visualization.\n\nC. M. Bishop, M. Svensén, and C. K. I. Williams. Magnification factors for the SOM and GTM algorithms. In Proceedings 1997 Workshop on Self-Organizing Maps, Helsinki University of Technology, Finland., pages 333-338, 1997.\n\n\n\"In section 4, it says proof for theorem 1 is in appendix B. Appendix B says it proves theorem 2.\"\n\nWe have fixed the incorrect reference in the appendix; it should indeed state that this is a proof of theorem 1.\n\n\"Unfortunately, it is not clear how good the approximation in eq. 9 is.\"\n\nThe variance of the metric drops as O(1/D); in the limit D->infinity the variance vanishes. We find that for high-dimensional data (e.g. images) the variance is effectively zero, and the approximation is quite good. If the data space is low-dimensional, we expect that the approximation is less well-behaved.\n\n\"Is theorem 1 an original result by the authors?\"\n\nYes, theorem 1 is a novel contribution. However, the derivation of the result is purely mechanical, and we only state it as a theorem to\na) emphasize the result (it is important for the paper), and\nb) make it easier to push its derivation to an appendix.\n\n\"In Fig. 
6, why was 7-NN used, instead of k-means, to colour the background?\"\n\nWe wanted the background color of the figure to resemble a \"ground truth\", so it seemed more natural to use a more sensitive model than k-means for this. We're happy to change this, if that is deemed better.\n\n\"it would be very interesting to know what the total computational complexity of the proposed approach is.\"\n\nThe complexity of computing the metric merely amounts to computing Jacobians of the generator; the complexity of this operation depends very-much on the network architecture of the generator. In practice, it can be time consuming as TensorFlow (which we used) does not support Jacobians (only gradients), so theoretical computational complexity does not reflect the runtime of current tools.\n\n\n\"There are minor grammatical errors and the paper would benefit from proofreading.\"\n\nWe will send the paper to an external agency for proof-reading. This usually takes a few days, after which we will update the paper.\n\n\"In section 2, the multiplication symbol (circle with dot inside) is not defined.\"\n\nGood catch; we have fixed this (it is the element-wise product).\n\n\"It is not clear from the paper what the purpose of eq. 7 is, as well as most of the section 3. Only in appendix C, it is mentioned that eq. 7 is solved numerically to compute Riemannian distances, though it is still not clear how exactly this is achieved.\"\n\nFair point. This material is mostly included for completeness as it is otherwise not possible to build a numerical implementation of the proposed models. We solve Eq. 7 numerically using Matlab's 'bvp5c', so the equation translates directly into an implementation of an algorithm for computing geodesics. We have made some changes to the paper to reflect this.", "=== Which metric is \"correct\" ===\n\nIt is not unreasonable to view the latent space as an \"unfolding\" of the data manifold. However, making this unfolding is generally not possible without squeezing and stretching, which introduce significant curvature in the latent space. From that point of view, the \"mean term\" of our proposed metric, captures the squeezing and stretching, while the \"variance term\" captures the inherent uncertainty of the latent space, which appear in regions of low data density (i.e. where the \"unfolding\" is of poor quality due to lack of data). So, our work does not conflict with an \"unfolding\" interpretation of the latent space; rather it reflect the squeezing, stretching and uncertainty needed to unfold.\n\nThe stochastic Riemannian metric is the \"correct choice\" when infinitesimal distances along the data manifold are meaningful in the input space. This is a valid assumption for many data sources, but not necessarily for all. Two remarks for when the assumption is not valid:\n\n 1) Our proposed metric may still be useful as the variance term force shortest paths to go near regions of high data density; this is most-often an useful property.\n\n 2) When the Euclidean distance is not useful infinitesimally in input space, it may be possible to pick another inner product in the input space which is sensible infinitesimally. Our proposed formalism can then be used to pull back this inner product to the latent space.\n\nIn practice, we do not work with the stochastic Riemannian metric, but rather with its expectation. This simplify both mathematics and computations, and can be justified as the distribution of the metric concentrates for high-dimensional data. 
It is, nonetheless, an approximation.\n\n=== The variance network ===\n\nWe largely agree that the proposed variance estimator is ad hoc. We tried several architectures of the variance network before converging on this rather simple RBF network, which is, so far, the only one that worked reliably well for us.\n\nIt is worth noting that in standard architectures the variance network is only trained in regions where there is data. Consequently, feedforward networks with standard sigmoid-like activations will extrapolate the variance in a somewhat arbitrary manor. We tried several variants of variance networks (both with and without weight-sharing with the mean network), and all had low-quality extrapolations. This resulted in geodesics that did not follow the trend of the data, which defeated our purposes. The images of \"standard\" variance networks in the paper (Figs. 4 and 5) are indeed only anecdotal examples of how standard feedforward networks fare, but they are good representatives of our experiences.\n\nWe're happy to add additional examples of the geometry induced by standard variance networks if so requested. We did not include those as they work quite poorly (practically useless).\n\n\n=== Specific comments ===\n\"...eq. (2) does not imply that the natural distance in Z is locally adaptive...\"\n\nWe agree, and have changed the wording after Eq. 2. We simply meant that the distance changes locally; from Eq. 2 it is indeed not evident that the distance adapt to the data. However, if the generator variance is small in regions of high data density and large otherwise, then Theorem 1 provide a strong hint that the distance measure will indeed locally adapt.\n\n\"One could equally argue that the Euclidean distance on z is natural, and that this distance is then pushed forward by f to some induced distance over X.\"\n\nIn VAEs the latent space is only optimized to ensure that the latent variables approximately follow a unit Gaussian, so here we do not see a particular strong argument for using the Euclidean distance over Z. That being said, we are not trying to argue that our proposed metric is always the best one, merely that it is a natural choice with some very appealing properties.\n\n\"In the definition of paths \\gamma, shouldn't they be parametrized by arc-length (also known as unit-speed)?\"\n\nComputationally, geodesics are found by solving a boundary value problem (Eq. 7) numerically. Here we need to specify a start- and end-time, which we arbitrarily choose as t=0 and t=1, respectively. Then the solution curve is approximately constant-speed, but not unit speed (its speed is scaled by the length of the geodesic).\n\n\"In Theorem 2, is the term \"input dimension\" appropriate? Perhaps \"data dimension\" is better?\"\n\nWe agree and have changed this.\n\n\"I did not fully understand the role of the LAND model. Is this a model fit AFTER fitting the generative model...\"\n\nYes, the LAND is fitted post hoc. This is true for all experiments: we first fit a VAE and then analyze the latent variables according to the implied geometry." ]
[ 3, 7, 7, -1, -1, -1, -1, -1 ]
[ 4, 4, 3, -1, -1, -1, -1, -1 ]
[ "iclr_2018_SJzRZ-WCZ", "iclr_2018_SJzRZ-WCZ", "iclr_2018_SJzRZ-WCZ", "iclr_2018_SJzRZ-WCZ", "iclr_2018_SJzRZ-WCZ", "SyC3QhVgf", "r1dsxRSxG", "HJOIiwjlz" ]
iclr_2018_Hk3ddfWRW
Imitation Learning from Visual Data with Multiple Intentions
Recent advances in learning from demonstrations (LfD) with deep neural networks have enabled learning complex robot skills that involve high dimensional perception such as raw image inputs. LfD algorithms generally assume learning from single task demonstrations. In practice, however, it is more efficient for a teacher to demonstrate a multitude of tasks without careful task set up, labeling, and engineering. Unfortunately in such cases, traditional imitation learning techniques fail to represent the multi-modal nature of the data, and often result in sub-optimal behavior. In this paper we present an LfD approach for learning multiple modes of behavior from visual data. Our approach is based on a stochastic deep neural network (SNN), which represents the underlying intention in the demonstration as a stochastic activation in the network. We present an efficient algorithm for training SNNs, and for learning with vision inputs, we also propose an architecture that associates the intention with a stochastic attention module. We demonstrate our method on real robot visual object reaching tasks, and show that it can reliably learn the multiple behavior modes in the demonstration data. Video results are available at https://vimeo.com/240212286/fd401241b9.
accepted-poster-papers
This paper presents a sampling-based inference method for learning in multi-modal demonstration scenarios. The reference to imitation learning causes some confusion with the IRL domain, where this terminology is usually encountered. Providing a real application to robot reaching, while a relatively simple task in robotics, increases the difficulty and complexity of the demonstration. That makes it impressive, but also difficult to unpick the contributions and reproduce even the first demonstration. It's understandable at a meeting on learning representations that the reviewers wanted to understand why existing methods for learning multi-modal distributions would not work, and to get a better understanding of the tradeoffs and limitations of the proposed method. The CVAE comparison added to the appendix during the rebuttal period just pushed this paper over the bar. That added demonstration is simplified, and so much easier to reproduce, making it more feasible that others will attempt to reproduce the claims made here.
train
[ "ryU5B1zxf", "r1cOWGdgz", "r1ET9Ncgf", "S1PKgkIGz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author" ]
[ "The authors propose a new sampling based approach for inference in latent variable models. They apply this approach to multi-modal (several \"intentions\") imitation learning and demonstrate for a real visual robotics task that the proposed framework works better than deterministic neural networks and stochastic neural networks. \n\nThe proposed objective is based upon sampling from the latent prior and truncating to the largest alpha-percentile likelihood values sampled. The scheme is motivated by the fact that this estimator has a lower variance than pure sampling from the prior. The objective to be maximized is a lower bound to 1/alpha * the likelihood. \n\nQuality: The empirical results (including a video of an actual robotic arm system performing the task) looks good. This reviewer is a bit sceptical to the methodology. I am not convinced that the proposed bound will have low enough variance. It is mentioned in a footnote that variational autoencoders were tested but that they failed. Since the variational bound has much better sampling properties (due to recognition network, reparameterization trick and bounding to get log likelihoods instead of likelihoods) it is hard to believe that it is harder to get to work than the proposed framework. Also, the recently proposed continuous relaxation of random variables seemed relevant. \n\nClarity: The paper is fairly clearly written but there are many steps of engineering that somewhat dilutes the methodological contribution.\n\nSignificance: Hard to say. New method proposed and shown to work well in one case. Too early to tell about significance.\n\nPro:\n1. Challenging and relevant problem solved better than other approaches.\n2. New latent variable model bound that might work better than classic approaches.\nCon:\n1. Not entirely convincing that it should work better than already existing methods.\n2. Missing some investigation of the properties of the estimator on simple problem to be compared to standard methods. ", "The authors provide a method for learning from demonstrations where several modalities of the same task are given. The authors argue that in the case where several demonstrations exists and a deterministic (i.e., regular network) is given, the network learns some average policy from the demonstrations.\n\nThe paper begins with the authors stating the motivation and problem of how to program robots to do a task based only on demonstrations rather on explicit modeling or programming. They put the this specific work in the right context of imitation learning and IRL. Afterward, the authors argue that deterministic network cannot adequately several modalities. The authors cover in Section 2 related topics, and indeed the relevant literature includes behavioral cloning, IRL , Imitation learning, GAIL, and VAEs. I find that recent paper by Tamar et al 2016. on Value Iteration Networks is highly relevant to this work: the authors there learn similar tasks (i.e., similar modalities) using the same network. Even the control task is very similar to the current proposed task in this paper.\n\nThe authors argue that their contribution is 3-fold: (1) does not require robot rollouts, (2) does not require label for a task, (3) work within raw image inputs. Again, Tamar et al. 2016 deals with this 3 points.\n\nI went over the math. It seems right and valid. Indeed, SNN is a good choice for adding (Bayesian) context to a task. Also, I see the advantage of referring only to the \"good\" quantiles when needed. 
It is indeed a good method for dealing with the variance. \n\nI must say that I was impressed with the authors making the robot succeed in the tasks in hand (although reaching to an object is fairly simple task). \n\nMy concerns are as follows:\n1) Seems like that the given trajectories are naturally divided with different tasks, i.e., a single trajectory consists only a single task. For me, this is not the pain point in this tasks. the pain point is knowing when tasks are begin and end. \n2) I'm not sure, and I haven't seen evidence in the paper (or other references) that SNN is the only (optimal?) method for this context. Why not adding (non Bayesian) context (not label) to the task will not work as well? \n3) the robot task is impressive. but proving the point, and for the ease of comparing to different tasks, and since we want to show the validity of the work on more than 200 trials, isn't showing the task on some simulation is better for understanding the different regimes that this method has advantage? I know how hard is to make robotic tasks work... \n4) I’m not sure that the comparison of the suggested architecture to one without any underlying additional variable Z or context (i.e., non-Bayesian setup) is fair. \"Vanilla\" NN indeed may fail miserably . So, the comparison should be to any other work that can deal with \"similar environment but different details\".\n\nTo summarize, I like the work and I can see clearly the motivation. But I think some more work is needed in this work: comparing to the right current state of the art, and show that in principal (by demonstrating on other simpler simulations domains) that this method is better than other methods. \n\n", "This paper focuses on imitation learning with intentions sampled \nfrom a multi-modal distribution. The papers encode the mode as a hidden \nvariable in a stochastic neural network and suggest stepping around posterior \ninference over this hidden variable (which is generally required to \ndo efficient maximum likelihood) with a biased importance \nsampling estimator. Lastly, they incorporate attention for large visual inputs. \n\nThe unimodal claim for distribution without randomness is weak. The distribution \ncould be replaced with a normalizing flow. The use of a latent variable \nin this setting makes intuitive sense, but I don't think multimodality motivates it.\n\nMoreover, it really felt like the biased importance sampling approach should be \ncompared to a formal inference scheme. I can see how it adds value over sampling \nfrom the prior, but it's unclear if it has value over a modern approximate inference \nscheme like a black box variational inference algorithm or stochastic gradient MCMC.\n\nHow important is using the pretrained weights from the deterministic RNN?\n\nFinally, I'd also be curious about how much added value you get from having \naccess to extra rollouts.\n", "We thank the reviewers for the thoughtful comments.\n\nThe paper has been updated with additional simulation experiments.\n\nWe start by describing the additional experiments, and then address each reviewer separately. \n\nFollowing the reviewers suggestions, we include results that compare our approach to a state-of-the-art conditional VAE on a simulated domain. These results were omitted in our initial submission with the interest of keeping the paper at the suggested page limits. 
We briefly summarize the results here, see Appendix D for more details.\n\nThe experiments were conducted on a simple simulated domain: given an image with N randomly positioned targets with different colors, predict the location (i.e., x-y position) of one of them. For training, we randomly selected one of the targets and provided its location as the supervisory signal. \nThis task captures the essence of the robotic task in the paper - image input and a low dimensional multi-modal output (with N modes). It simplifies the image processing, and the fact that there is no trajectory - it’s a single step decision making problem. \n\nTo make the comparison fair, we chose the latent variable z in IDS to be a standard Gaussian, just as for the CVAE. All network sizes and training parameters were the same for both methods (except for the additional recognition and conditional prior network for CVAE), and we did not apply any pretraining to the conv layer.\n\nWe have tried various CVAE parameter settings, and also annealing of the KL term in the cost. The CVAE works well for N=2 targets, and with careful tuning also for N=3, but despite genuine efforts we could not get it to work for N=5 targets. These results actually motivated us to follow the IDS approach in the first place, which worked well and robustly for all values of N we tried. The convergence of IDS in all cases was also an order of magnitude faster. \n\nThese results show that:\n1) In some domains our IDS algorithm works significantly better than state of the art algorithms for variational inference.\n2) Pretraining is not required for our approach (though it definitely helps speed it up).\n\nWhile it could definitely be the case that with more parameter tuning, or that by applying other improvements to CVAEs such as normalizing flows we could make them work in this task, we believe that the simplicity of our approach and its robust performance is worth reporting. \n\nA similar result was recently reported by Fragkiadaki et al. (2017), comparing CVAEs to backpropping through top-k samples in video prediction. Our contribution, compared to that work, is grounding this method in a formal mathematical treatment, proposing optimistic sampling which significantly improves its performance, and showing its importance in a real world robotic imitation learning domain.\n\nReferences:\nFragkiadaki, Katerina, et al. \"Motion Prediction Under Multimodality with Conditional Stochastic Networks.\" arXiv preprint arXiv:1705.02082 (2017).\n\n\n\nAnonReviewer1:\n\nComparison to value iteration networks (VIN): \nThe VIN work does not consider multiple modes in the data, which is the main focus in our work. In particular, the target position in the VIN paper is *explicitly provided* as a separate image channel of the input, and the VIN output is deterministic - it cannot reproduce multiple modes of reaching to different targets. 
Thus, VINs cannot solve the problems we tackle in this paper.\n\nExtending VINs with latent variables or using VINs inside a generative model is an interesting direction, but one that would require a separate investigation.\n\nWe believe our related work section covers most relevant works on imitation learning with multiple intentions/modes in the data.\n\nAnswers to specific comments:\n1) In our setting (and in many realistic industrial setting) knowing when the demonstrations start and end is trivial, as the demonstrator records demonstrations sequentially.\n2) Adding context would require to either label the context or infer it. Labelling adds burden on the demonstrator, which we wish to minimize. Inferring the context is the approach we pursue, and we added additional experiments comparing our approach to a state of the art variational inference method.\n3+4) See above for additional simulation results.\n\nAnonReviewer2:\nSee above - we added a comparison with conditional VAEs. \n\nAnonReviewer3:\n\nPretrained weights - see above. Pretraining is not necessary, but definitely helps speed up training. \n\nExtra rollouts: We did not fully understand this comment. Generally rollouts are better understood in the context of an RL setting, however our approach is not RL and thus no rollout is involved. While extra RL rollouts can be used to improve the policy, in many realistic scenarios taking extra rollouts on the robot can be costly/unsafe/time consuming.\n" ]
[ 6, 4, 6, -1 ]
[ 4, 3, 4, -1 ]
[ "iclr_2018_Hk3ddfWRW", "iclr_2018_Hk3ddfWRW", "iclr_2018_Hk3ddfWRW", "iclr_2018_Hk3ddfWRW" ]
iclr_2018_H1zriGeCZ
Hyperparameter optimization: a spectral approach
We give a simple, fast algorithm for hyperparameter optimization inspired by techniques from the analysis of Boolean functions. We focus on the high-dimensional regime where the canonical example is training a neural network with a large number of hyperparameters. The algorithm --- an iterative application of compressed sensing techniques for orthogonal polynomials --- requires only uniform sampling of the hyperparameters and is thus easily parallelizable. Experiments for training deep neural networks on Cifar-10 show that compared to state-of-the-art tools (e.g., Hyperband and Spearmint), our algorithm finds significantly improved solutions, in some cases better than what is attainable by hand-tuning. In terms of overall running time (i.e., time required to sample various settings of hyperparameters plus additional computation time), we are at least an order of magnitude faster than Hyperband and Bayesian Optimization. We also outperform Random Search 8×. Our method is inspired by provably-efficient algorithms for learning decision trees using the discrete Fourier transform. We obtain improved sample-complexity bounds for learning decision trees while matching state-of-the-art bounds on running time (polynomial and quasipolynomial, respectively).
accepted-poster-papers
This paper introduces an algorithm for optimization of discrete hyperparameters based on compressed sensing, and compares against standard gradient-free optimization approaches. As the reviewers point out, the provable guarantees (as is usually the case) don't quite make it to the main results section, but are still refreshing to see in hyperparameter optimization. The method itself is relatively simple compared to full-featured Bayesopt (spearmint), although not as widely applicable.
train
[ "H1nRveigz", "SyM469sgf", "Syx3D46ez", "S1cQU63Zz", "S1BiH6nWf", "HyAQBa2Zz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "This paper looks at the problem of optimizing hyperparameters under the assumption that the unknown function can be approximated by a sparse and low degree polynomial in the Fourier basis. The main result is that the approximate minimization can be performed over the boolean hypercube where the number of evaluations is linear in the sparsity parameter. \n\nIn the presented experiments, the new spectral method outperforms the tool based on the Bayesian optimization, technique based on MAB and random search. Their result also has an application in learning decision trees where it significantly improves the sample complexity bound.\n\nThe main theoretical result, i.e., the improvement in the sample complexity when learning decision trees, looks very strong. However, I find this result to be out of the context with the main theme of the paper. \n\nI find it highly unlikely that a person interested in using Harmonica to find the right hyperparamters for her deep network would also be interested in provable learning of decision trees in quasi-polynomial time along with a polynomial sample complexity. Also the theoretical results are developed for Harmonica-1 while Harmonica-q is the main method used in the experiments.\n\nWhen it comes to the experiments only one real-world experiment is present. It is hard to conclude which method is better based on a single real-world experiment. Moreover, the plots are not very intuitive, i.e., one would expect that Random Search takes the smallest amount of time. I guess the authors are plotting the running time that also includes the time needed to evaluate different configurations. If this is the case, some configurations could easily require more time to evaluate than the others. It would be useful to plot the total number of function evaluations for each of the methods next to the presented plots.\n\nIt is not clear what is the stopping criterion for each of the methods used in the experiments. One weakness of Harmonica is that it has 6 hyperparameters itself to be tuned. It would be great to see how Harmonica compares with some of the High-dimensional Bayesian optimization methods. \n\nFew more questions:\n\nWhich problem does Harmonica-q solves that is present in Harmonica-1, and what is the intuition behind the fact that it achieves better empirical results?\n\nHow do you find best t minimizers of g_i in line 4 of Algorithm 3?\n", "- algorithm 1 has a lot of problem specific hyperparametes that may be difficult to get right. Not clear how important they are\n- they analyze the simpler (analytically and likely computationally) Boolean hyperparameter case (each hyperparameter is binary). Not a realistic setting. In their experiments they use these binary parameter spaces so I'm not sure how much I buy that it is straightforward to use continuous valued polynomials. \n- interesting idea but I think it's more theoretical than practical. Feels like a hammer in need of a nail. ", "The paper is about hyperparameter optimization, which is an important problem in deep learning due to the large number of hyperparameters in contemporary model architectures and optimization algorithms.\n\nAt a high-level, hyperparameter optimization (for the challenging case of discrete variables) can be seen as a black-box optimization problem where we have only access to a function evaluation oracle (but no gradients etc.). In the entirely unstructured case, there are strong lower bounds with an exponential dependence on the number of hyperparameters. 
In order to sidestep these impossibility results, the current paper assumes structure in the unknown function mapping hyperparameters to classification accuracy. In particular, the authors assume that the function admits a representation as a sparse and low-degree polynomial. While the authors do not empirically validate whether this is a good model of the unknown function, it appears to be a reasonable assumption (the authors *do* empirically validate their overall approach).\n\nBased on the sparse and low-degree assumption, the paper introduces a new algorithm (called Harmonica) for hyperparameter optimization. The main idea is to leverage results from compressed sensing in order to recover the sparse and low-degree function from a small number of measurements (i.e., function evaluations). The authors derive relevant sample complexity results for their approach. Moreover, the method also yields new algorithms for learning decision trees.\n\nIn addition to the theoretical results , the authors conduct a detailed study of their algorithm on CIFAR10. They compare to relevant recent work in hyperparameter optimization (Bayesian optimization, random search, bandit algorithms) and find that their method significantly improves over prior work. The best parameters found by Harmonica improve over the hand-tuned results for their \"base architecture\" (ResNets).\n\nOverall, I find the main idea of the paper very interesting and well executed, both on the theoretical and empirical side. Hence I strongly recommend accepting this paper.\n\n\nSmall comments and questions:\n\n1. It would be interesting to see how close the hyperparameter function is to a low-degree and sparse polynomial (e.g., MSE of the best fit).\n\n2. A comparison without dummy parameters would be interesting to investigate the performance differences between the algorithms in a lower-dimensional problem.\n\n3. The current paper does not mention the related work on hyperparameter optimization using reinforcement learning techniques (e.g., Zoph & Le, ICLR 2017). While it might be hard to compare to this approach directly in experiments, it would still be good to mention this work and discuss how it relates to the current paper.\n\n4. Did the authors tune the hyperparameters directly using the CIFAR10 test accuracy? Would it make sense to use a slightly smaller training set and to hold out say 5k images for hyperparameter evaluation before making the final accuracy evaluation on the test set? The current approach could be prone to overfitting.\n\n5. While random search does not explicitly exploit any structure in the unknown function, it can still implicitly utilize smoothness or other benign properties of the hyperparameter space. It might be worth adding this in the discussion of the related work.\n\n6. Algorithm 1: Why is the argmin for g_i (what does the index i refer to)?\n\n7. Why does PSR truncate the indices in alpha? At least in \"standard\" compressed sensing, the Lasso also has recovery guarantees without truncation (and empirically works sometimes better without).\n\n9. Definition 3: Should C be a class of functions mapping {-1, 1}^n to R? (Note the superscript.)\n\n10. On Page 3 we assume that K = 1, but Theorem 6 still maintains a dependence on K. It might be cleaner to either treat the general K case throughout, or state the theorem for K = 1.\n\n11. On CIFAR10, the best hyperparameters do not improve over the state of the art with other models (e.g., a wide ResNet). 
It could be interesting to run Harmonica in the regime where it might improve over the best known models for CIFAR10.\n\n12. Similarly, it would be interesting to see whether the hyperparameters identified by Harmonica carry over to give better performance on ImageNet. The authors claim in C.3 that the hyperparameters identified by Harmonica generalize from small networks to large networks. Testing whether the hyperparameters also generalize from a smaller to a larger dataset would be relevant as well.", "Thank you for your summary and comments!\n\n1. A practitioner can certainly safely ignore the theory and use Harmonica in practice. The practitioner may, however, be interested to know that our approach is principled and comes with some provable guarantees (or may simply wonder where the approach came from). As such, we think describing the relationship to decision-tree learning is valuable. \n\n2. Re experiments: we find the CIFAR10 dataset to be a challenging one and representative of training deep neural networks (certainly it is the most intensely studied). Finding settings better than hand-tuning indicates promise with our general approach. There is always room for much more experimentation. We hope others will build on this new approach. \n\n3. stopping criterion: we allowed all algorithms (except Harmonica) to run for at least 17 days. Spearmint did not finish before the submission deadline (we will update the results). Subsequently, we have found it to run 2-3x slower than Harmonica and produce worse hyperparameter settings. \n\n4. Harmonica-q vs Harmonica-1: “q” is a heuristic that gives improved performance in practice. The intuition is that there are few important variables in the objective, and after fixing them to the optimal value, this holds recursively - again there are few important variables (of 2nd order), and so on…\n\n5. minimizers of g_i in line 4 of Algorithm 3: this is a great question. \nWe do it by enumerating all the possibilities. Here is where the assumptions come in: since we have a k-sparse degree-d polynomial, there are only 2^{k d} options, and this is manageable in this setting. \n", "Thank you for your comments!\n\n1. Continuous vs. Boolean: the Boolean setting is actually without loss of generality because we can search over a continuous range via binary search on discrete variables. This seems to work well in practice.\n\n2. Our theory works for any domain; the discrete non-Boolean case is handled just the same.\n\n3. The number of hyperparameters for Harmonica is significantly lower than the input number (6 vs. 60, or any input #) and manageable by grid search. We have also found that Harmonica is stable with respect to these six hyperparameters.\n\n4. “hammer in need of a nail” suggests a complicated algorithm. This is far from the truth - the algorithm is very simple - just run LASSO on uniformly sampled measurements over the Fourier representation of the objective (and recurse). This is arguably simpler than Bayesian Optimization, or reinforcement learning based approaches, which require sophisticated updating and handling of prior distributions.\n", "Thank you for your summary and comments! Answers to your questions:\n\n1. Great suggestion. We have shown that the function does fit a low-degree polynomial by merit of optimization, but an MSE test is a good idea, and we’ll do that.\n\n2. 
In fact, besides Spearmint (which runs without dummy parameters in our experiment) and Harmonica, other algorithms like random search, successive halving or hyperband will have exactly the same performance with/without dummy variables as they are based on random search in the parameter space. Therefore, by removing the dummy variables, only Harmonica might give better performance while the others will stay the same. \nSo in short, the experiment is *non-favorable* to Harmonica with respect to dummy variables, showing its robustness. \n\n3. Certainly, we will add discussion about this paper. The difficulty from comparing comes from the fact that the RL approach is inherently sequential, needing more information to proceed. Our approach is also based on a different assumption (sparse low degree polynomial). \n\n4. We did not try this because our main goal was to try to do an apples to apples comparison of hyperparameter settings found by other algorithms on the entire training set (and indeed we found some that are even better than hand-tuning as in Figure 2).\n\n5. Right, we’ll add discussion.\n\n6. Typo, thanks! \n\n7. This is simply a heuristic we tried, but it is definitely worth investigating no truncation (which we didn’t investigate enough). \n\n9. Typo, yes!\n\n10. Yes, we will remove the Ks.\n\n11. When we started the project (around Sep, 2016), resnet was considered to be a pretty good model, but now maybe densenet is better. Note that in Resnet, Harmonica does do better than best hand-tuned model, we hope same is true for densenet. \n\n12. We did not have enough resources to do hyperparameter tuning for Imagenet, but intend to try this idea for CIFAR100 with densenet (i.e., using a subset of the data first). \n" ]
[ 6, 6, 9, -1, -1, -1 ]
[ 4, 3, 5, -1, -1, -1 ]
[ "iclr_2018_H1zriGeCZ", "iclr_2018_H1zriGeCZ", "iclr_2018_H1zriGeCZ", "H1nRveigz", "SyM469sgf", "Syx3D46ez" ]
iclr_2018_H1Xw62kRZ
Leveraging Grammar and Reinforcement Learning for Neural Program Synthesis
Program synthesis is the task of automatically generating a program consistent with a specification. Recent years have seen proposal of a number of neural approaches for program synthesis, many of which adopt a sequence generation paradigm similar to neural machine translation, in which sequence-to-sequence models are trained to maximize the likelihood of known reference programs. While achieving impressive results, this strategy has two key limitations. First, it ignores Program Aliasing: the fact that many different programs may satisfy a given specification (especially with incomplete specifications such as a few input-output examples). By maximizing the likelihood of only a single reference program, it penalizes many semantically correct programs, which can adversely affect the synthesizer performance. Second, this strategy overlooks the fact that programs have a strict syntax that can be efficiently checked. To address the first limitation, we perform reinforcement learning on top of a supervised model with an objective that explicitly maximizes the likelihood of generating semantically correct programs. For addressing the second limitation, we introduce a training procedure that directly maximizes the probability of generating syntactically correct programs that fulfill the specification. We show that our contributions lead to improved accuracy of the models, especially in cases where the training data is limited.
accepted-poster-papers
Below is a summary of the pros and cons of the proposed paper: Pros: * Proposes a novel method to tune program synthesizers to generate correct programs and prune the search space, leading to better and more efficient synthesis * Shows small but substantial gains on a standard benchmark Cons: * Reviewers and commenters cited a few clarity issues, although these have mostly been resolved * Lack of empirical comparison with relevant previous work (e.g. Parisotto et al.) makes it hard to determine their relative merit Overall, this seems to be a solid, well-evaluated contribution and seems to me to warrant a poster presentation. Also, just a few notes from the area chair to potentially make the final version better: The proposed method is certainly different from the method of Parisotto et al., but it is attempting to solve the same problem: the lack of consideration of the grammar in neural program synthesis models. The relative merit is stated to be that the proposed method can be used when there is no grammar specification, but the model of Parisotto et al. also learns expansion rules from data, so no explicit grammar specification is necessary (as long as a parser exists, which is presumably necessary to perform the syntax checking that is core to the proposed method). It would have been ideal to see an empirical comparison between the two methods, but this is obviously a lot of work. It would be nice to have the method acknowledged more prominently in the description, perhaps in the introduction, however. It is nice to see a head-nod to Guu et al.'s work on semantic parsing (as semantic parsing from natural language is also highly relevant). There is obviously a lot of work on generating structured representations from natural language, and the following two might be particularly relevant given their focus on grammar-based formalisms for code synthesis from natural language: * "A Syntactic Neural Model for General-purpose Code Generation" Yin and Neubig ACL 2017. * "Abstract Syntax Networks for Code Generation and Semantic Parsing" Rabinovich et al. ACL 2017
train
[ "Hk4_Jw9xG", "H1JSNUjeG", "HkcxQ4Rxf", "SyzhcVbXf", "S1UnKVWXf", "ByKvFVW7f", "r1GbKVb7G", "Sy8bJjuMM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public" ]
[ "The authors consider the task of program synthesis in the Karel DSL. Their innovations are to use reinforcement learning to guide sequential generation of tokes towards a high reward output, incorporate syntax checking into the synthesis procedure to prune syntactically invalid programs. Finally they learn a model that predicts correctness of syntax in absence of a syntax checker. \n\nWhile the results in this paper look good, I found many aspects of the exposition difficult to follow. In section 4, the authors define objectives, but do not clearly describe how these objectives are optimized, instead relying on the read to infer from context how REINFORCE and beam search are applied. I was not able to understand whether syntactic corrected is enforce by way of the reward introduced in section 4, or by way of the conditioning introduced in section 5.1. Discussion of the experimental results coould similarly be clearer. The best method very clearly depends on the taks and the amount of available data, but I found it difficult to extract an intuition for which method works best in which setting and why. \n\nOn the whole this seems like a promising paper. That said, I think the authors would need to convincingly address issues of clarity in order for this to appear. \n\nSpecific comments \n\n- Figure 2 is too small \n\n- Equation 8 is confusing in that it defines a Monte Carlo estimate of the expected reward, rather than an estimator of the gradient of the expected reward (which is what REINFORCE is). \n\n- It is not clear the how beam search is carried out. In equation (10) there appear to be two problems. The first is that the index i appears twice (once in i=1..N and once in i \\in 1..C), the second is that λ_r refers to an index that does not appear. More generally, beam search is normally an algorithm where at each search depth, the set of candidate paths is pruned according to some heuristic. What is the heuristic here? Is syntax checking used at each step of token generation, or something along these lines? \n \n- What is the value of the learned syntax in section 5.2? Presumaly we need a large corpus of syntax-checked training examples to learn this model, which means that, in practice, we still need to have a syntax-checker available, do we not?", "The paper presents a reinforcement learning-based approach for program synthesis. The proposed approach claims two advantages over a baseline maximum likelihood estimation-based approach. MLE-based methods penalize syntactically different but semantically equivalent programs. Further, typical program synthesis approaches don't explicitly learn to produce correct syntax. The proposed approach uses a syntax-checker to limit the next-token distribution to syntactically-valid tokens.\n\nThe approach, and its constituent contributions, i.e. of using RL for program synthesis, and limiting to syntactically valid programs, are novel. Although both the contributions are fairly obvious, there is of course merit in empirically validating these ideas.\n\nThe paper presents comparisons with baseline methods. The improvements over the baseline methods is small but substantial, and enough experimental details are provided to reproduce the results. However, there is no comparison with other approaches in the literature. The authors claim to improve the state-of-the-art, but fail to mention and compare with the state-of-the-art, such as [1]. I do find it hard to trust papers which do not compare with results from other papers.\n\nPros:\n1. 
Well-written paper, with clear contributions.\n2. Good empirical evaluation with ablations.\n\nCons:\n1. No SOTA comparison.\n2. Only one task / No real-world task, such as Excel Flashfill.\n\n[1]: \"Neural Program Meta-Induction\", Jacob Devlin, Rudy Bunel, Rishabh Singh, Matthew Hausknecht, Pushmeet Kohli", "This is a nice paper. It makes novel contributions to neural program synthesis by (a) using RL to tune neural program synthesizers such that they can generate a wider variety of correct programs and (b) using a syntax checker (or a learned approximation thereof) to prevent the synthesizer from outputting any syntactically-invalid programs, thus pruning the search space. In experiments, the proposed method synthesizes correct Karel programs (non-trivial programs involving loops and conditionals) more frequently than synthesizers trained using only maximum likelihood supervised training.\n\nI have a few minor questions and requests for clarification, but overall the paper presents strong results and, I believe, should be accepted.\n\n\nSpecific comments/questions follow:\n\n\nFigure 2 is too small. It would be much more helpful (and easier to read) if it were enlarged to take the full page width.\n\nPage 7: \"In the supervised setting...\" This suggests that the syntaxLSTM can be trained without supervision in the form of known valid programs, a possibility which might not have occurred to me without this little aside. If that is indeed the case, that's a surprising and interesting result that deserves having more attention called to it (I appreciated the analysis in the results section to this effect, but you could call attention to this sooner, here on page 7).\n\nIs the \"Karel DSL\" in your experiments the full Karel language, or a subset designed for the paper?\n\nFor the versions of the model that use beam search, what beam width was used? Do the results reported in e.g. Table 1 change as a function of beam width, and if so, how? \n", "We thank the commenter for their interest in our paper.\n\nMore details about the generation of the dataset are available in a previous paper that was making use of the Karel Dataset [1]. We have already released the karel dataset (link omitted because of double-blind constraints) and also plan on releasing the code used to run the experiments. What follows are the answers to the specific questions:\n\n1, 2, 3 -> The input grids are generated by randomly sampling for each cell whether the cell contains an obstacle / a marker / several markers. The agent’s position is also selected at random.\nA program is then sampled with a maximum nesting depth (nested loops or conditionals) of 4 and a maximum number of tokens that is set to be 20. We execute the programs on the inputs grids to generate the output grids. If the program hasn’t halted before performing 200 actions, if there is a collision with an obstacle or if the program doesn’t do anything when run on the sampled grid (i.e. 
the input grid is unchanged), we discard the program and sample a new one.\n\n4 -> The 52 tokens are: <s> (start of sequence), not, DEF, run, REPEAT, WHILE, IF, IFELSE, ELSE, markersPresent, noMarkersPresent, leftIsClear, rightIsClear, frontIsClear, move, turnLeft, turnRight, pickMarker, putMarker, m(, m) (open and close parens for a function), c(, c) (open and close parens for a conditional), r(, r) (open and close parens for a repeat instruction), w(, w) (open and close parens for a while conditional), i(,i) (open and close parens for a if statement conditional), e(, e) (open and close parens for an else clause) + 20 scalar values from 0 to 19.\n\n5 -> The batch size used for the supervised setting was 128. For the RL type experiments, a batch was composed of 16 samples. We used 100 rollouts per samples for the Reinforce method and a beam size of 64 for methods based on the beam search.\n\n6-> We have added more details in the paper regarding how the beam search is performed.\n\n[1] Jacob Devlin, Rudy Bunel, Rishabh Singh, Matthew Hausknecht, Pushmeet Kohli. Neural Program Meta-Induction. In NIPS, 2017\n", "We thank the reviewer for the comments on the part of the paper that need clarification. We incorporated his feedback in the new version. Here are answers to the raised questions:\n\n-> Figure 2 is too small\nThe size of Figure 2 was increased to make it full page width.\n\n-> “The syntaxLSTM can be trained without supervision in the form of known valid programs”\nWhile the syntaxLSTM can be trained without access to known valid programs when RL-based training is employed (because gradient will flow to it through the softmax defining the probability of each token), we point out in the experiments section that we weren’t successful in training models using only RL training. As a result, it would not be accurate to claim that it can be trained without any valid programs as supervision. \n\n-> Is the Karel DSL the full Karel language?\nThe exact description of our DSL is in Appendix B. It doesn’t exactly match the full Karel language, as most notably there is no possibility to define subroutines. We made this clearer in the paper.\n\n-> What was the beam width used?\nAll of our experiments used a beam width of 64. We didn’t study the effect of this hyperparameter and chose it as the maximum width we could afford based on available GPU memory. In the limit, using an extremely large beam size would be equivalent to computing the complete sum for the expected reward but this is not feasible for any applications where program would be longer than a few tokens.\n", "We thank the reviewer for the comments on the paper, and for pointing out the missing related work.\n\nIt is difficult to perform an exact comparison between the two papers as they are solving two different problems. The model developed in [1] (Devlin et al, 2017) performs program induction: i.e. it produces the output world on a new input world where the desired program semantics are encoded in the network itself. On the other hand, in our case, we perform program synthesis, i.e. generate a program in the Karel DSL that performs the desired transformation from input to output. \n \nUsing the terminology of Devlin et al., what we describe is closest to the meta-induction approach: strong cross task knowledge sharing but no task specific learning. Overall, our MLE baseline architecture will correspond to the Devlin et al. meta-induction architecture if the decoder was trained to generate program tokens instead of output worlds. 
This precision was added to the paper\n\n-> No real-world task such as FlashFill?\nThe FlashFill DSL considered in previous neural program synthesis work such as RobustFill is essentially a functional language comprising of compositions of a sequence of functions. In this work, we wanted to increase the complexity of the DSL one step further to better understand what neural architectures are more appropriate for learning programs with such complexity. Concretely, the Karel DSL consists of complex control-flow such as nested loops and conditionals, which are not present in the FlashFill DSL. The difference of performance of meta-induction on FlashFill (~70% from Figure 7 of [2]) vs. KarelDSL (~40% from Figure 4 of [1]) points towards Karel being a more complex dataset.\n\n Learning Karel programs can also be considered close to a real-world task as this language is used to teach introductory programming to Stanford students, and the program synthesis models can be used to help students if they are having difficulty in writing correct programs.\n\n\n[1] Jacob Devlin, Rudy Bunel, Rishabh Singh, Matthew Hausknecht, Pushmeet Kohli. Neural Program Meta-Induction. In NIPS, 2017\n[2] Jacob Devlin, Jonathan Uesato, Surya Bhupatiraju, Rishabh Singh, Abdel-rahman Mohamed, and Pushmeet Kohli. Robustfill: Neural program learning under noisy I/O. In ICML, 2017 \n", "We thank the reviewer for the detailed comments on how to improve the exposition of our paper, which we included in the revised version.\n\n- Figure 2’s size was increased to make the model clearer.\n\n- Reinforce vs. Monte Carlo estimate of the expected reward:\nEquation 8 indeed describes how we estimate the expected reward, we also added the form of the estimator of the expected gradient for a sample i to make what me meant clearer\n\nBeam search and Approximate probabilities\nEquation 10 indeed had a typo. The i in “i \\in 1..C” should have been a “r”, making the product a product of the probability of the C programs sampled from the approximate probability distribution. At a search depth of d, the heuristic used to prune candidate paths is the probability of the prefix (which you can think of as the product of equation (5) but limited to the first d terms). \n\nWe arrive at a search depth d with a set of S candidates. For each of these S candidates, we obtain the probability of the next token using the softmax. Combining the probability of this token with the product of the whole path that comes before it, we obtain the probability for S * (nb_possible_token) possible paths. We only keep the S best ones (possibly removing the ones that have reached a termination symbol) and repeat the step at the depth d+1.\nWe end up at the end with a set of S samples which are going to be used as the basis for our approximate distribution. We have added more description of the process to make it clearer.\n\nWhen syntax checking is available, whether in its learned form or not, it is implicitly included as its contribution is introduced just before the softmax (see Figure 2 if you can zoom in). 
A token judged non-syntactically correct would have a probability of zero, so the probability of the path containing it would be zero and would therefore not be included into the promising paths going to the next stage.\n\n\n- Is there value in learning syntax?\nIt might be possible to have access to a large amount of programs in a language without having access to a syntax checker, such as for example if we have downloaded a large amount of programs from a code repository. Moreover, it might be useful even for common languages: Note that what we require is a bit different to a traditional syntax checker: answering the question “is this program syntactically correct”, which any compiler would give; as opposed to what we have in equation 13 which corresponds to “Do these first t tokens contain no syntax error and may therefore be a valid prefix to a program”. The syntax checker we need has to return a decision even for non-complete program, therefore it would require some work to transform current compilers to return such answers.\nFinally, as shown in our experiments, using a learned syntax checker might perform better than using a formal one, as it can capture what represents an “idiomatic” program vs. a technically correct one.\n\n\n\f\n", "Dear authors, \n\nThanks for your interesting paper that I enjoyed a lot. It is great to read new approaches in the field of program synthesis and its promising results. I believe the contributions of the paper are clear but the experimental details are not sufficient to reproduce the results. Below is the list that I found missing in the paper:\n\n1. Sampling method for input/output grid world (ex. # of markers, # of obstacles, Code blocks like repeat(19) { repeat (15) { ... }} or repeat(17) { turnRight } might work as noises)\n2. Sampling method for Karel program (ex. max # of tokens, max depth of program)\n3. How to deal with corner cases like program with endless loop\n4. 52 tokens for Karel DSL\n5. Batch size\n6. Detailed on beam search\n\nBecause the sampling methods of world and program are critical to set the difficulty of the problem, I think the authors could discuss in more details about it to extend the suggested methods. Can the author offer some details on this?\n\nThe current attempt to reproduce the Karel dataset can be found https://github.com/carpedm20/karel and https://github.com/carpedm20/program-synthesis-rl-tensorflow." ]
[ 5, 6, 7, -1, -1, -1, -1, -1 ]
[ 3, 3, 3, -1, -1, -1, -1, -1 ]
[ "iclr_2018_H1Xw62kRZ", "iclr_2018_H1Xw62kRZ", "iclr_2018_H1Xw62kRZ", "Sy8bJjuMM", "HkcxQ4Rxf", "H1JSNUjeG", "Hk4_Jw9xG", "iclr_2018_H1Xw62kRZ" ]
iclr_2018_HJzgZ3JCW
Efficient Sparse-Winograd Convolutional Neural Networks
Convolutional Neural Networks (CNNs) are computationally intensive, which limits their application on mobile devices. Their energy is dominated by the number of multiplies needed to perform the convolutions. Winograd’s minimal filtering algorithm (Lavin, 2015) and network pruning (Han et al., 2015) can reduce the operation count, but these two methods cannot be straightforwardly combined — applying the Winograd transform fills in the sparsity in both the weights and the activations. We propose two modifications to Winograd-based CNNs to enable these methods to exploit sparsity. First, we move the ReLU operation into the Winograd domain to increase the sparsity of the transformed activations. Second, we prune the weights in the Winograd domain to exploit static weight sparsity. For models on CIFAR-10, CIFAR-100 and ImageNet datasets, our method reduces the number of multiplications by 10.4x, 6.8x and 10.8x respectively with loss of accuracy less than 0.1%, outperforming previous baselines by 2.0x-3.0x. We also show that moving ReLU to the Winograd domain allows more aggressive pruning.
accepted-poster-papers
The paper presents a modification of the Winograd convolution algorithm that reduces the number of multiplications in a forward pass of a CNN with minimal loss of accuracy. The reviewers brought up the strong results, the readability of the paper, and the thoroughness of the experiments. One concern brought up was the applicability to deeper network structures. This was acknowledged by the authors to be a subject of future work. Another issue raised was the question of theoretical vs. actual speedup. Again, this was acknowledged by the authors to be an eventual goal but subject to further systems work and architecture optimizations. The reviewers were consistent in their support of the paper. I follow their recommendation: Accept.
train
[ "Hk0i6DUVz", "SyMeSO8ef", "rJMLjDqeM", "HJ8UsZ6gM", "rktxb6fNM", "BJdkWgiGz", "ryvYuSgzf", "B171DHlfG", "BkOMrBeGM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "Thanks for the clarifications! I think the score is still appropriate.", "This paper proposes to combine Winograd transformation with sparsity to reduce the computation for deep convolutional neural network. Specifically, ReLU nonlinearity was moved after Winograd transformation to increase the dynamic sparsity in the Winograd domain, while an additional pruning on low magnitude weights and re-training procedure based on pruning is used to increase static sparsity of weights, which decreases computational demand. The resulting Winograd-ReLU\nCNN shows strong performance in three scenarios (CIFAR10 with VGG, CIFAR100 with ConvPool-CNN-C, and ImageNEt with ResNet-18). The proposed method seems to improve over the two baseline approaches (Winograd and sparsity, respectively).\n\nOverall, the paper is well-written and the experiments seems to be quite thorough and clear. Note that I am not an expert in this field and I might miss important references along this direction. I am leaving it to other reviewers to determine its novelty. \n\nPutting ReLU in the Winograd domain (or any transformed domain, e.g., Fourier) seems to be an interesting idea, and deserves some further exploration. Also, I am curious about the performance after weight pruning but before retraining).", "This paper proposes a method to build a CNN in the Winograd domain, where weight pruning and ReLU can be applied in this domain to improve sparsity and reduce the number of multiplication. The resultant CNN can achieve ~10x theoretical speedup with little performance loss.\n\nThe paper is well-written. It provides a new way to combine the Winograd transformation and the threshold-based weight pruning strategy. Rather than strictly keeping the architecture of ordinary CNNs, the proposed method applied ReLU to the transform domain, which is interesting. \n\nThe results on Cifar-10 and ImageNet are promising. In particular, the pruned model in the Winograd domain performs comparably to the state-of-the-art dense neural networks and shows significant theoretical speedup. \nThe results on ImageNet using ResNet-18 architecture are also promising. However, no results are provided for deeper networks, so it is unclear how this method can benefit the computation of very deep neural networks \n\nA general limitation of the proposed method is the network architecture inconsistency with the ordinary CNNs. Due to the location change of ReLUs, it is unclear how to transform a pretrained ordinary CNNs to the new architectures accurately. It seems training from scratch using the transformed architectures is the simplest solution. \n\nThe paper does not report the actual speedup in the wall clock time. The actual implementation is what matters in the end. \n\nIt will be more informative to present Figure 2,3,4 with respect to the workload in addition to the weight density. \n\n", "Summary: \nThe paper presents a modification of the Winograd convolution algorithm that enables a reduction of multiplications in a forward pass of 10.8x almost without loss of accuracy. 
\nThis modification combines the reduction of multiplications achieved by the Winograd convolution algorithm with weight pruning in the following way:\n- weights are pruned after the Winograd transformation, to prevent the transformation from filling in zeros, thus preserving weight sparsity\n- the ReLU activation function associated with the previous layer is applied to the Winograd transform of the input activations, not directly to the spatial-domain activations, also yielding sparse activations\n\nThis way sparse multiplication can be performed. Because this yields a network, which is not mathematically equivalent to a vanilla or Winograd CNN, the method goes through three stages: dense training, pruning and retraining. The authors highlight that a dimension increase in weights and ReLU activations provide a more powerful representation and that stable dynamic activation densities over layer depths benefit the representational power of ReLU layers.\n\nReview:\nThe paper shows good results using the proposed method and the description is easy to follow. I particularly like Figure 1. \nI only have a couple of questions/comments:\n1) I’m not familiar with the term m-specific (“Matrices B, G and A are m-specific.”) and didn’t find anything that seemed related in a very quick google search. Maybe it would make sense to add at least an informal description.\n2) Although small filters are the norm, you could add a note, describing up to what filter sizes this method is applicable. Or is it almost exactly the same as for general Winograd CNNs?\n3) I think it would make sense to mention weight and activation quantization in the intro as well (even if you leave a combination with quantization for future work), e.g. Rastegari et al. (2016), Courbariaux et al. (2015) and Lin et al. (2015)\n4) Figure 5 caption has a typo: “acrruacy”\n\nReferences:\nCourbariaux, Matthieu, Yoshua Bengio, and Jean-Pierre David. \"Binaryconnect: Training deep neural networks with binary weights during propagations.\" In Advances in Neural Information Processing Systems, pp. 3123-3131. 2015.\nLin, Zhouhan, Matthieu Courbariaux, Roland Memisevic, and Yoshua Bengio. \"Neural networks with few multiplications.\" arXiv preprint arXiv:1510.03009 (2015).\nRastegari, Mohammad, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. \"Xnor-net: Imagenet classification using binary convolutional neural networks.\" In European Conference on Computer Vision, pp. 525-542. Springer International Publishing, 2016.", "Thanks for the additional comments. I keep the rating. ", "Thanks for the response. I hold a positive opinion on this paper. ", "Thanks for your comments. \nWe agree that placing activation functions in other domains (e.g. Fourier) could hold more promise than we've uncovered so far.\nAs far as accuracy before re-training, since we used iterative pruning and re-training, we provide top-5 accuracy drop for each pruning step of Winograd-ReLU CNN on ImageNet:\n\noriginal density | pruned density | original accuracy | pruned accuracy (without re-training)\n100% | | 87.43% | \n70% | 60% | 87.456% | 87.338% \n60% | 50% | 87.424% | 87.202% \n50% | 40% | 87.406% | 86.672%\n40% | 35% | 87.406% | 86.784%\n35% | 30% | 87.358% | 86.286%\n30% | 25% | 87.228% | 85.692%\n25% | 20% | 86.898% | 84.466%\n20% | 15% | 86.570% | 80.430%\n15% | 12% | 86.246% | 79.246%\n12% | 10% | 85.916% | 77.128%\n", "We appreciate your comments and questions; thank you. 
Let us address each in turn:\n1) We agree, more work is warranted for deeper networks; we plan to explore this in the future.\n2) It is true that the Winograd-ReLU CNN network architecture is not equivalent to an ordinary Winograd CNN. However, training a Winograd-ReLU network from scratch is a fairly simple solution. In fact there's no transformation from ordinary CNN weights to Winograd-ReLU CNN weights: the ReLU layer sizes are different. This cannot be compensated by any weight transformation.\n3) While a reduction in wall clock time is the eventual goal, we focus here on a novel network type that reduces the theoretical number of operations needed, rather than the systems work needed to accelerate it. This will need careful design with attention to architecture optimizations and tradeoffs, and we leave this as future work.\n4) We'll try to find a clear way to present both density and workload in these figures, thanks for the suggestion.", "Thank you for your questions and comments; please allow us to address them here.\n1) You are right to be confused - this should have been \"p-specific,\" meaning the values of B, G, and A depend on p. We'll correct this in a future version.\n2) In general, our approach can be used wherever general Winograd convolutions can be used. B, G, and A will be different for different patch sizes and filter sizes, and of course, we leave finding these and experimenting with larger sizes as future work.\n3) Quantization approaches could fit well in the introduction; we'll try to find a way to make it clear that it may be orthogonal to pruning and Winograd convolutions.\n4) Thanks for catching this typo.\n" ]
[ -1, 7, 7, 8, -1, -1, -1, -1, -1 ]
[ -1, 3, 4, 4, -1, -1, -1, -1, -1 ]
[ "BkOMrBeGM", "iclr_2018_HJzgZ3JCW", "iclr_2018_HJzgZ3JCW", "iclr_2018_HJzgZ3JCW", "ryvYuSgzf", "B171DHlfG", "SyMeSO8ef", "rJMLjDqeM", "HJ8UsZ6gM" ]
iclr_2018_Sk6fD5yCb
Espresso: Efficient Forward Propagation for Binary Deep Neural Networks
There are many application scenarios for which the computational performance and memory footprint of the prediction phase of Deep Neural Networks (DNNs) need to be optimized. Binary Deep Neural Networks (BDNNs) have been shown to be an effective way of achieving this objective. In this paper, we show how Convolutional Neural Networks (CNNs) can be implemented using binary representations. Espresso is a compact, yet powerful library written in C/CUDA that features all the functionalities required for the forward propagation of CNNs, in a binary file of less than 400KB, without any external dependencies. Although it is mainly designed to take advantage of massive GPU parallelism, Espresso also provides an equivalent CPU implementation for CNNs. Espresso provides special convolutional and dense layers for BCNNs, leveraging bit-packing and bit-wise computations for efficient execution. These techniques provide a speed-up of matrix-multiplication routines, and at the same time, reduce memory usage when storing parameters and activations. We experimentally show that Espresso is significantly faster than existing implementations of optimized binary neural networks (~ 2 orders of magnitude). Espresso is released under the Apache 2.0 license and is available at http://github.com/organization/project.
accepted-poster-papers
This paper describes a new library for forward propagation of binary CNNs. R1 asked for clarification on the contributions and novelty, which the authors provided. They subsequently updated their score. I think that optimized code with permissive licensing (as R2 points out) benefits the community. The paper will benefit those who decide to work with the library.
train
[ "HJCLeXtgM", "SyRQ7Vq1G", "HymwoY3lM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper presents a library written in C/CUDA that features all the functionalities required for the forward propagation of BCNNs. The library is significantly faster than existing implementations of optimized binary neural networks (≈ 2 orders of magnitude), and will be released on github.\n\nBCNNs have been able to perform well on large-scale datasets with increased speed and decreased energy consumption, and implementing efficient kernels for them can be very useful for mobile applications. The paper describes three implementations CPU, GPU and GPU_opt, but it is not entirely clear what the differences are and why GPU_opt is faster than GPU implementation.\n\nAre BDNN and BCNN used to mean the same concept? If yes, could you please use only one of them?\n\nThe subsection title “Training Espresso” should be changed to “Converting a network to Espresso”, or “Training a network for Espresso”.\n\nWhat is the main difference between GPU and GPU_opt implementations?\n\nThe unrolling and lifting operations are shown in Figure 2. Isn’t accelerating convolution by this method a very well known one which is implemented in many deep learning frameworks for both CPU and GPU?\n\nWhat is the main contribution that makes the framework here faster than the other compared work? In Figure1, Espresso implementations are compared with other implementations in (a)dense binary matrix multiplication and (b)BMLP and not (c)BCNN. Can others ( BinaryNet\n(Hubara et al., 2016) or Intel Nervana/neon (NervanaSystems)) run CNNs?\n\n6.2 MULTI-LAYER PERCEPTRON ON MNIST – FIGURE 1B AND FIGURE 1E. It should be Figure 1d instead of 1e? \n\nAll in all, the novelty in this paper is not very clear to me. Is it bit-packing?\n\n\nUPDATE: \nThank you for the revision and clarifications. I increase my rating to 6.\n\n", "This paper builds on Binary-NET [Hubara et al. 2016] and expands it to CNN architectures. It also provides optimizations that substantially improve the speed of the forward pass: packing layer bits along the channel dimension, pre-allocation of CUDA resources and binary-optimized CUDA kernels for matrix multiplications. The authors compare their framework to BinaryNET and Nervana/Neon and show a 8x speedup for 8092 matrix-matrix multiplication and a 68x speedup for MLP networks. For CNN, they a speedup of 5x is obtained from the GPU to binary-optimizimed-GPU. A gain in memory size of 32x is also achieved by using binary weight and activation during the forward pass.\n\nThe main contribution of this paper is an optimized code for Binary CNN. The authors provide the code with permissive licensing. As is often the case with such comparisons, it is hard to disentangle from where exactly come the speedups. The authors should provide a table with actual numbers instead of the hard-to-read bar graphs. Otherwise the paper is well written and relatively clear, although the flow is somewhat unwieldy. \n\nOverall, i think it makes a good contribution to a field that is gaining importance for mobile and embedded applications of deep convnets. I think it is a good fit for a poster.", "The paper presents an implementation strategy (with code link anonymized for review) for fast computations of binary forward inference. The paper makes the approach seem straightforward (clever?) and there has been lots of work on fast inference of quantized, low-bit-width neural networks, but if indeed the implementation is significantly faster than commercial alternatives (e.g. 
from Intel) then I expect the authors have made a novel and useful contribution.\n\nThe paper is written clearly, but I am not an expert in alternative approaches in this area." ]
[ 6, 7, 7 ]
[ 3, 4, 1 ]
[ "iclr_2018_Sk6fD5yCb", "iclr_2018_Sk6fD5yCb", "iclr_2018_Sk6fD5yCb" ]
iclr_2018_r11Q2SlRW
Auto-Conditioned Recurrent Networks for Extended Complex Human Motion Synthesis
We present a real-time method for synthesizing highly complex human motions using a novel training regime we call the auto-conditioned Recurrent Neural Network (acRNN). Recently, researchers have attempted to synthesize new motion by using autoregressive techniques, but existing methods tend to freeze or diverge after a couple of seconds due to an accumulation of errors that are fed back into the network. Furthermore, such methods have only been shown to be reliable for relatively simple human motions, such as walking or running. In contrast, our approach can synthesize arbitrary motions with highly complex styles, including dances or martial arts in addition to locomotion. The acRNN is able to accomplish this by explicitly accommodating for autoregressive noise accumulation during training. Our work is the first to our knowledge that demonstrates the ability to generate over 18,000 continuous frames (300 seconds) of new complex human motion w.r.t. different styles.
accepted-poster-papers
This paper proposes a real-time method for synthesizing human motion of highly complex styles. The key concern raised by R2 was that the method did not depart greatly from a standard LSTM: parts of the generated sequences are conditioned on generated data as opposed to ground truth data. However, the reviewer thought the idea was sensible and the results were very good in practice. R1 also agreed that the results were very good and asked for a more detailed analysis of conditioning length and some clarification. R3 brought up similarities to Professor Forcing (Goyal et al. 2016) -- also noted by R2 -- and Learning Human Motion Models for Long-term Predictions (Ghosh et al. 2017) -- noting that the latter is not peer-reviewed. R3 also raised the open issue of how to best evaluate sequence prediction models like these. They brought up an interesting point, which was that the synthesized motions were low quality compared to recent works by Holden et al.; however, they acknowledged that rendering the characters is what exposed these motion flaws. The authors responded to all of the reviews, committing to a comparison to Scheduled Sampling, though a comparison to Professor Forcing was proving difficult within the review timeline. While this paper may not receive the highest novelty score, I agree with the reviewers that it has merit. It is well written, has clear and reasonably thorough experiments, and the results are indeed good.
train
[ "H1b7FSwgM", "r1NGC2dlf", "S1Lqh4YxG", "B1uPnlTGM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author" ]
[ "This paper proposes acLSTM to synthesize long sequences of human motion. It tackles the challenge of error accumulation of traditional techniques to predict long sequences step by step. The key idea is to combine prediction and ground truth in training. It is impressive that this architecture can predict hundreds of frames without major artifacts.\n\nThe exposition is mostly clear. My only suggestion is to use either time (seconds) or frame number consistently. In the text, the paper sometimes use time, and other time uses frame index (e.g. figure 7 and its caption). It confuses me a bit since it is not immediate clear what the frame rate is.\n\nIn evaluation, I think that it is important to analyze the effect of condition length in the main text, not in the Appendix. To me, this is the most important quantitive evaluation that give me the insight of acLSTM. It also gives a practical guidance to readers how to tune the condition length. As indicated in Appendix B, \"Further experiments need to be conducted to say anything meaningful.\" I really hope that in the next version of this paper, a detailed analysis about condition length could be added. \n\nIn summary, I like the method proposed in the paper. The result is impressive. I have not seen an LSTM based architecture predicting a complex motion sequence for that long. However, more detailed analysis about condition length is needed to make this paper complete and more valuable.", "The problem of learning auto-regressive (data-driven) human motion models that have long-term stability\nis of ongoing interest. Steady progress is being made on this problem, and this paper adds to that.\nThe paper is clearly written. The specific form of training (a fixed number of self-conditioned predictions,\nfollowed by a fixed number of ground-truth conditioned steps) is interesting for simplicity and its efficacy.\nThe biggest open question for me is how it would compare to the equally simple stochastic version proposed\nby the scheduled sampling approach of [Bengio et al. 2015].\n\nPROS: The paper provides a simple solution to a problem of interest to many.\nCONS: It is not clear if it improves over something like scheduled sampling, which is a stochastic predecessor\n of the main idea introduced here. The \"duration of stability\" is a less interesting goal than\n actually matching the distribution of the input data.\n\nThe need to pay attention to the distribution-mismatch problem for sequence prediction problems\nhas been known for a while. In particular, the DAGGER (see below) and scheduled sampling algorithms (already cited) \ntarget this issue, in addition to the addition of progressively increasing amounts of noise during training\n(Fragkiadaki et al). 
Also see papers below on Professor Forcing, as well as \"Learning Human Motion Models\nfor Long-term Predictions\" (concurrent work?), which uses annealing over dropout rates to achieve stable long-term predictions.\n\n DAGGER algorithm (2011): http://www.jmlr.org/proceedings/papers/v15/ross11a/ross11a.pdf\n \"A Reduction of Imitation Learning and Structured Prediction to No-Regret Online Learning\"\n\n Professor Forcing (NIPS 2016)\n http://papers.nips.cc/paper/6099-professor-forcing-a-new-algorithm-for-training-recurrent-networks.pdf\n\n Learning Human Motion Models for Long-term Predictions (2017)\n https://arxiv.org/abs/1704.02827\n https://www.youtube.com/watch?v=PgJ2kZR9V5w\n \nWhile the motions do not freeze, do the synthesized motion distributions match the actual data distributions?\nThis is not clear, and would be relatively simple to evaluate. Is the motion generation fully deterministic?\nIt would be useful to have probabilistic transition distributions that match those seen in the data.\nAn interesting open issue (in motion, but also of course NLP domains) is that of how to best evaulate\nsequence-prediction models. The duration of \"stable prediction\" does not directly capture the motion quality. \n\nFigure 1: Suggest to make u != v for the purposes of clarity, so that they can be more easily distinguished.\n\nData representation:\nWhy not factor out the facing angle, i.e., rotation about the vertical axis, as done by Holden et al, and in a variety of\nprevious work in general?\nThe representation is already made translation invariant. Relatedly, in the Training section,\ndata augmentation includes translating the sequence: \"rotate and translate the sequence randomly\".\nWhy bother with the translation if the representation itself is already translation invariant?\n\nThe video illustrates motions with and without \"foot alignment\".\nHowever, no motivation or description of \"foot alignment\" is given in the paper.\n\nThe following comment need not be given much weight in terms of evaluation of the paper, given that the\ncurrent paper does not use simulation-based methods. However, it is included for completeness.\nThe survey of simulation-based methods for modeling human motions is not representative of the body of work in this area\nover the past 25 years. It may be more useful to reference a survey, such as \n\"Interactive Character Animation Using Simulated Physics: A State‐of‐the‐Art Review\" (2012)\nAn example of recent SOTA work for modeling dynamic motions from motion capture, including many\nhighly dynamic motions, is \"Guided Learning of Control Graphs for Physics-Based Characters\" (2016)\nMore recent work includes \"Learning human behaviors from motion capture by adversarial imitation\", \n\"Robust Imitation of Diverse Behaviors\", and \"Deeploco: Dynamic locomotion skills using hierarchical deep reinforcement learning\", all of which demonstrate imitation of various motion styles to various degrees.\n\nIt is worthwhile acknowledging that the synthesized motions are still low quality, particular when rendered with more human-like looking models, and readily distinguishable from the original motions. In this sense, they are not comparable to the quality of results demonstrated in recent works by Holden et al. or some other recent works. 
However, the authors should be given credit for including some results with fully rendered characters, which much more readily exposes motion flaws.\n\nThe followup work on [Lee et al 2010 \"Motion Fields\"] is quite relevant:\n\"Continuous character control with low-dimensional embeddings\"\nIn terms of usefulness, being able to provide some control over the motion output is a more interesting problem than\nbeing able to generate long uncontrolled sequences. A caveat is that the methods are not applied to large datasets.\n", "Paper presents an approach for conditional human (skeleton) motion generation using a form of the LSTM, called auto-conditioned LSTM (acLSTM). The key difference of acLSTM is that in it parts of the generated sequences, at regular intervals, are conditioned on generated data (as opposed to just ground truth data). In this way, it is claimed that acLSTM can anticipate and correct wrong predictions better than traditional LSTM models that only condition generation on ground truth when training. It is shown that trained models are more accurate at long-term prediction (while being a bit less accurate in short-term prediction). \n\nGenerally the idea is very sensible. The novelty is somewhat small, given the fact that a number of other methods have been proposed to address the explored challenge in other domains. The cited paper by Bengio et al., 2015 is among such, but by no means the only one. For example, “Professor Forcing: A New Algorithm for Training Recurrent Nets” by Goyal et al. is a more recent variant that does away with the bias that the scheduled sampling of Bengio et al., 2015 would introduce. The lack of comparison to these different methods of training RNNs/LSTMs with generated or mixture of ground truth and generated data is the biggest shortcoming of the paper. That said, the results appear to be quite good in practice, as compared to other state-of-the-art methods that do not use such methods to train. \n\nOther comments and corrections:\n\n- The discussion about the issues addressed not arising in NLP is in fact wrong. These issues are prevalent in training of any RNN/LSTM model. In particular, similar approaches have been used in the latest image captioning literature.\n\n- In the text, when describing Figure 1, unrolling of u=v=1 is mentioned. This is incorrect; u=v=4 in the figure.\n\n- Daniel Holden reference should not contain et. al. (page 9)", "We would like to thank all the reviewers. We especially appreciate being informed of relevant works which we have overlooked and mistakes in the paper, which we are happy to add/revise. However, one work mentioned by Reviewer3, \"Learning Human Motion Models for Long-term Predictions (2017)\", is not currently peer-reviewed and so we do not feel a need for its inclusion. Furthermore, the proposed approach therein is similar to \"Recurrent Network Models for Human Dynamics\", which we already compare with.\n\nBoth Rev2 and Rev3 suggest including scheduled sampling and professor forcing for comparison. We agree that adding a comparison with scheduled sampling is definitely appropriate and it will be included in the final upload. Regarding professor forcing, no publicly available implementation currently exists, and we could not get a working implementation even after contacting the authors. We are currently working on our own implementation of professor forcing, but we cannot seem to make it work. 
GANs are notoriously finicky to work with and require a lot of hyperparameter turning, so this result is not unexpected.\n\nWe further note that no GAN approaches have to date been shown effective for the problem of human-motion generation. Given this, and our initial experiments with it, it is our feeling that successfully using GAN approaches for generating human motion is in and of itself a noteworthy research problem, and exploration of the effects of professor-forcing should be addressed in such a work, but is outside the scope of this paper.\n\nQ (R3): What is \"foot alignment\"?\nA: We postprocess the animation so that when the foot is in contact with the ground (the height of the foot is close to 0), any motion of the foot in the XZ plane is stopped, and the rest of the body moves instead. This is an easy fix to \"sliding\" feet in the animation, while keeping the relative pose of the skeleton the same. \n\nQ (R3): Why not factor out the facing angle, i.e., rotation about the vertical axis, as done by Holden et al, and in a variety of\nprevious work in general?\nA: No particular reason - we simply decided we wanted to keep the prediction format consistent. If we factored out the facing angle, then we would need to predict relative hip-rotation per-frame in addition to displacement. We decided to predict only displacement. \n\nQ: (R3): Why bother with the translation if the representation itself is already translation invariant?\nA: Thanks for pointing this out. In earlier experiments, we predicted the absolute hip position at every frame, instead of its relative displacement from the hip of the previous frame. You are correct - in the current formulation, it is translation invariant. We will edit the text to reflect that. \n\nQ: (R3): Is the motion generation fully deterministic?\nA: Yes. There is no probabilistic model involved. Each frame of motion is completely determined by a 171-dimensional vector, representing the positions of 57 joint locations. We predict these joint locations in space, using L2 norm during training. \n\nQ: (R3): An interesting open issue (in motion, but also of course NLP domains) is that of how to best evaulate\nsequence-prediction models. The duration of \"stable prediction\" does not directly capture the motion quality. \nA: This is definitely true, and it is apparent at times that our motion is not realistic. The reason we focused much of the discussion on duration is because previous works were unable to achieve even this, and duration is clearly a precondition necessary for further evaluation of quality. Previous works were not able to generate stable motion for more than a couple of seconds for simple motions such as walking or smoking, let alone dancing. Without first establishing a method that can at least run for a reasonable amount of time without failing, serious discussion of motion quality is impossible. \n\nQ: (R3): While the motions do not freeze, do the synthesized motion distributions match the actual data distributions?\nThis is not clear, and would be relatively simple to evaluate.\nA: It seems that the networks trained on distinct datasets reflect features unique to those datasets: martial arts motion shows punching, kicking; dancing networks shows dance, and the walking/running network only outputs continuous walking/running. Do you have in mind additional quantitative evaluations for comparing distributions, besides euclidean error? We are happy to consider it. 
\n\nQ (R1): I really hope that in the next version of this paper, a detailed analysis about condition length could be added. \nA: We agree this would be ideal, but to fully address this issue, further theoretical analysis is also necessary. We are currently working on providing such theoretical work (WHY auto-conditioning works), which we believe is appropriate for future work. There is not much to say about the quantitative results on condition-length currently, which is why they are included in the appendix. \n\n" ]
[ 7, 7, 6, -1 ]
[ 3, 5, 5, -1 ]
[ "iclr_2018_r11Q2SlRW", "iclr_2018_r11Q2SlRW", "iclr_2018_r11Q2SlRW", "iclr_2018_r11Q2SlRW" ]
iclr_2018_SyMvJrdaW
Decoupling the Layers in Residual Networks
We propose a Warped Residual Network (WarpNet) using a parallelizable warp operator for forward and backward propagation to distant layers that trains faster than the original residual neural network. We apply perturbation theory to residual networks and decouple the interactions between residual units. The resulting warp operator is a first-order approximation of the output over multiple layers. The first-order perturbation theory exhibits properties such as binomial path lengths and exponential gradient scaling found experimentally by Veit et al. (2016). We demonstrate through an extensive performance study that the proposed network achieves comparable predictive performance to the original residual network with the same number of parameters, while achieving a significant speed-up in total training time. As WarpNet performs model parallelism in residual network training, in which weights are distributed over different GPUs, it offers a speed-up and the capability to train larger networks compared to the original residual network.
accepted-poster-papers
This paper proposes a “warp operator” based on Taylor expansion that can replace a block of layers in a residual network, allowing for parallelization. Taking advantage of multi-GPU parallelization the paper shows increased speedup with similar performance on CIFAR-10 and CIFAR-100. R1 asked for clarification on rotational symmetry. The authors instead removed the discussion that was causing confusion (replacing with additional experimental results that had been requested). R2 had the most detailed review and thought that the idea and analysis were interesting. They also had difficulty following the discussion of symmetry (noted above). They also pointed out several other issues around clarity and had several suggestions for improving the experiments which seem to have been taken to heart by the authors, who detailed their changes in response to this review. There was also an anonymous public comment that pointed out a “fatal mathematical flaw and weak experiments”. There was a lengthy exchange between this reviewer and the authors, and the paper was actually corrected and clarified in the process. This anonymous poster was rather demanding of the authors, asking for latex-formatted equations, pseudo-code, and giving direction on how to respond to his/her rebuttal. I don't agree with the point that the paper is flawed by "only" presenting a speed-up over ResNet, and furthermore the comment of "not everyone has access to parallelization" isn’t a fair criticism of the paper.
val
[ "ryFgDsREM", "rycPJEAVM", "Sy_NM8aNG", "BJAGxsHNf", "HJ6XbgpEG", "HyzlPHYNG", "rkDRpk_4M", "BJ9xeoSNf", "B1pRJiHNG", "r1-31oHNM", "r1nrJjSVG", "ryCv5QFgz", "r1NvXZ9ez", "S1wxhnsef", "B1DKKRcmf", "SJdktR5mz", "rk7hdR5mf", "ry04OA9mf", "BkrnPR5mf", "BJbpaOlGM", "S16X2Oeff", "HymI_KDbM", "SJ9OwtPWM", "BJXpGRVWf", "BkcxXAN-f" ]
[ "author", "public", "author", "public", "public", "author", "author", "public", "public", "public", "public", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public", "public", "author", "author", "public", "public" ]
[ "Our formula (3) and its proof are correct and sound. We believe that your conclusion is based on a critical misunderstanding of our approach. As evident from the derivation of Equation 9 from Equation 8, our approximation is built upon MANY local Taylor expansions to build an approximation for each layer.\n\nYour argument is based on your misunderstanding that we tried to expand F3(x+F1+F2) or Fn(x+F1+F2+…+F_n-1) using only ONE Taylor expansion as indicated in one of your responses. This is not what we did at all. Thus, we would like to say again that your example and statements in Prelim (1) are not valid, and misleading. WarpNet has a solid theoretical basis, and is demonstrated in our experiments to be a good approximation to ResNet.", "\"Your argument in Prelim (1) is also not valid as we pointed out in our last reply and below. We would like to reiterate that our experiments have confirmed that our framework indeed works.\"\n\nYour framework \"working\" has absolutely nothing to do with whether the Taylor expansion is accurate or not. My main problem is that equation (3) in the paper is wrong and therefore may fundamentally confuse readers about the nature of ResNets. However, even if (3) is false and WarpNet does not approximate ResNet, that doesn't mean WarpNet isn't trainable. Here are some things that are not an approximation of a given ResNet: vanilla nets, decision trees, kernel machines, ResNet's with different initial weights, ResNet's with different nonlinearities. Yet these models can all be trained. So the argument \"WarpNet can be trained, therefore it produces the same outputs as ResNets\" is nonsensical.\n\nPrelims (1)-(3) are all true and explain in detail the Taylor approximation is inaccurate.\n\nThe two alternative models I mentioned don't have have a theoretical basis, yes, but neither has WarpNet, because the Taylor expansion doesn't work.", "As we mentioned before, your statements in Prelims (2) and (3) are not relevant to WarpNet. Both our forward and backward passes are based on the result of the Taylor expansion. Your argument in Prelim (1) is also not valid as we pointed out in our last reply and below. We would like to reiterate that our experiments have confirmed that our framework indeed works.\n\nThe general expression of the function is F_(i+1)(x_i+F_i). Suppose ||Fi||/||x_i|| ~ 1/sqrt(i). ||Fi||< ||xi|| for layers beyond the first one. Your example in Prelim (1) is far off for any of the layers (including the first one), and cannot be used to illustrate anything relevant to our approximation with the Taylor expansion. \n\nThe whole point of doing the experiments is to see whether the approximation is good, and we demonstrated that the Taylor expansion leads to a good approximation to ResNet with much shorter training time.\n\nThere are certainly ways to further approximate WarpNet or ResNet. The main purpose of our experiments is to illustrate the effectiveness of WarpNet compared to ResNet and data-parallelized ResNet. We also provided the results of one approximation to WarpNet. There are certainly other ways to further approximate WarpNet or ResNet. But it is beyond the topic of this paper. The reason we have F’ in the forward pass is due to the Taylor expansion. The two approximations you mentioned do not have a theoretical basis. 
", "%My rebuttal can be pasted an viewed in Latex\n\n%If you want to respond to this rebuttal, please respond with a single post or a single block of posts below the last part of this comment (part X).\n\n%\n%\n%\n\nThe authors rebuttal and the updated version of the paper confirm that the authors have fundamental misconceptions about ResNet, Veit's work and the Taylor expansion which leads to many false, misleading and / or meaningless statements in the paper, as I show below in Part A.\n\nThe authors are correct in saying that those misconceptions and false statements do not necessarily affect the validity / merit of WarpNet. Evaluating the merit of WarpNet on its own terms is a worthwhile exercise. I will do this below in Part B. However, it turns out that the WarpNet model itself also lacks merit. The arguments presented in parts A and B independently require the paper to be rejected.\n\n%##################\n++++++ Part (A): misconceptions and false statements\n%##################\n\n+++ Prelim (1): The Taylor expansion. \n\nThe basic form of the Taylor expansion is: $f(x + e) = f(x) + ef'(x) + O(e^2)$, where $f$ is differentiable at $x$. The definition of the $O(e^2)$ is `a quantity that when divided by $e^2$ is bounded as $e$ converges to zero'. So the only thing we know about $O(e^2)$ is its behavior as $e$ tends to zero, but we don't know anything about what its value is for a given value of $e$. \n\nConsider the function $f(x) = \\max(0, 100x)$. Let $x = -0.1$ and $e=0.2$. Then we have $f(x + e) = 10$, but the Taylor expansion yields the value 0. So even though $e$ is quite small and $f$ is linear everywhere except at one point, the Taylor expansion is very inaccurate.\n\n+++ Prelim (2): Figure 6(b) of Veit et al.\n\nThis figure depicts the magnitude of the gradient along certain paths as a function of the number of residual derivatives contained in that path (i.e. the number of $F'$ terms contained in it). In this figure, we find that the path containing no residual derivatives has a magnitude of $\\approx 10^{-5}$ whereas the average path containing 30 residual derivatives has a magnitude of $\\approx 10^{-23}$. This tells us that each individual residual derivative has a size of around $\\big(\\frac{10^{-23}}{10^{-5}}\\big)^{-30} \\approx 0.25$. (CORRECTION: The formula should be $\\big(\\frac{10^{-23}}{10^{-5}}\\big)^{\\frac{1}{30}} \\approx 0.25$.) Assuming that no other scaling effects were involved in the creation of this graph, the fact that the path containing no residual derivatives has a magnitude of $\\approx 10^{-5}$ also tells us that the derivative of the error function had size of $\\approx 10^{-5}$, as it is the only derivative contained in that path.\n\n+++ Prelim (3): Explaining ResNet\n\nAs I previously explained and as the authors acknowledged, in a batch-ReLU ResNet we have $\\frac{||F_i||}{||h_i||} \\sim \\frac{1}{\\sqrt{i}}$. But crucially we also have $\\frac{||F_i||}{||h_i||} \\approx \\frac{||F'_i||}{||h'_i||}$. This is because all operations involved in $h$ and $F$ are very similar when evaluated in the forward and backward direction. 
\n\n\\begin{itemize}\n\\item The identity function $h(x) = x$ is equivalent to multiplication with the identity matrix in both directions.\n\\item A linear transformation $Wx$ is equivalent to multiplication with $W$ in both directions.\n\\item ReLU, in both directions, is equivalent to multiplication with the same binary matrix of 0's and 1's depending on which neurons are activated.\n\\item While batchnorm only centers the mean in the forward directions, the multiplicative effect is the same in both directions.\n\\end{itemize}\n\nHence, we also have $\\frac{||F'_i||}{||h'_i||} \\sim \\frac{1}{\\sqrt{i}}$. In a 56-block ResNet as used by Veit, the average of all $\\frac{1}{\\sqrt{i}}$ values is $\\frac{1}{56}\\sum_1^{56} \\frac{1}{\\sqrt{i}} \\approx 0.24$. And now we come full circle by realizing that $0.24 \\approx 0.25$ so this analysis of ResNet matches Veit's results.", "First of all, apologies to the area chair(s) for the lengthy exchange. I'm not trying to cause unnecessary work or be confrontational. Since I decided to post an full review for the paper, I also decided to take on the same responsibility that I would take on as an official reviewer of the paper, which is to keep rebutting as long as new arguments are presented by the authors.\n\nOn to the rebuttal.\n\n.\n.\n.\n\nFirstly, I made a typo in Prelim (2). I apologize. The correct formula is $\\big(\\frac{10^{-23}}{10^{-5}}\\big)^{\\frac{1}{30}} \\approx 0.25$ instead of $\\big(\\frac{10^{-23}}{10^{-5}}\\big)^{-30} \\approx 0.25$. However, the point I was making very much stands. If a path containing 30 F' terms has size 10^-23 and a term containing one F' term has size 10^-5, then the typical F' term has size around 0.25, because 0.25^30 * 10^(-5) ~ 10^(-23)\n\n.\n.\n\nAs the authors pointed out, Prelim (1) is an example of where Taylor should not be used. That is precisely why I gave the example, to show why Taylor should not be used for ResNet in the way the authors are using it. In the max(0, 100x) example I gave with x = -0.1 and e=0.2, x < 0 and so the max() selects the 0 term. But x + e > 0, so max() selects the 100x term, and so the Taylor expansion is inaccurate. The inaccuracy is caused by the Taylor expansion only being aware of max(0,100x) locally at the location x=-0.1, so it cannot distinguish max(0,100x) from the zero function. This shows that when using Taylor, any sudden changes to the function between x and x + e leads to inaccuracy.\n\nThis is precisely what happens in ResNet! The authors expand, for example, F_2(x + F_1) around x. As we established long ago, ||F_i||/||h_i|| ~ 1/sqrt(i), so in this case ||F_1||/||x|| ~ 1/sqrt(1) = 1. Therefore, x and F_1 are of similar size. Let's assume we feed x into F_2. When we get to the ReLU layer, some ReLU's will not be activated (input < 0) and some ReLU's will be activated (input > 0). Now assume we feed x + F_1 into F_2. Since F_1 is of similar size to x, the values that go into the ReLU units will be substantially different. Therefore the effect I described in Prelim (1) will arise and a good number of ReLU inputs will switch from < 0 to > 0 and vice versa. Therefore, the Taylor approximation, which feeds x into F_2 instead of x + F_1 will incur an error. Now, that error will not be as big as in my Prelim (1) example where I used max(0,100x), but it will be significant. \n\nSo, when we replace F_2(x + F_1) with F_2(x) + F_1 + F_2'F_1, there will be a small, but not insignificant error. Now consider F_3. In reality, F_3 is applied to x + F_1 + F_2. 
In the Taylor expansion, F_3 is applied to x. Now the difference between x and x + F_1 + F_2 is even greater than the difference between x and x + F_1, so values fed into ReLU are more likely to flip from > 0 to < 0 and vice versa, so the inaccuracy of Taylor is larger. Now consider F_n. In reality, it is applied to x + F_1 + F_2 + .. + F_n-1. In the Taylor setting, it is applied to x. But F_1 + F_2 + .. + F_n-1 dominate x in magnitude, so the ReLU activation pattern will be completely different, and the value of the Taylor approximation will be completely different from the true value. Therefore, the Taylor errors grow from layer to layer and also compound in the sense that they are added together. When we get to the networks output layer, those errors dominate. Therefore the theory presented by the authors in section 2, and formular (3) in particular, is incorrect.\n\n.\n.\n\nIn the authors latest rebuttal, they discuss my use of the phrase \"higher order terms\". What I meant by that term is not what they claim I meant, but in the interest of brevity. In that rebuttal, they also question my Prelim (3) from my previous rebuttal. I maintain that this analysis is correct. In the interest of brevity, I will not go into those points here. I think my earlier posts speak for themselves. If the area chairs want me to explain further, they can contact me via email. My identity is equal to that of AnonReviewer2 of the paper \"Tandem Blocks in Deep Convolutional Neural Networks\".\n\n.\n.\n\nFinally, when I say the ``experiments are insufficient'', I don't mean in terms of the number of experiments. You could run experiments on a hundred different datasets, but one question remains: Why should I prefer, say, x + F_1(x) + F_2(x) + F'_2(x)F_1(x) to x + F_1(x) + F_2(x) or to x + F_1(x) + F_2(x) + F_1(x)F_2(x). Those are much simpler models that achieve the exact same objective of decoupling. The complexity of using derivatives in the forward pass is unjustified. Again, I'm not saying that WarpNet is unsound in the sense that it is an ineffective model. I am simply saying that the same benefits of WarpNet can likely be achieved without derivatives in the forward pass. If you want to advocate for derivatives in the forward pass, you need to show that they are necessary. Since you did not do that, the experiments are insufficient in that sense.", "We have studied the Researcher's Prelims (2) and (3). We do not think the formula in the middle of Prelim (2), that is, $\\big(\\frac{10^{-23}}{10^{-5}}\\big)^{-30} \\approx 0.25$, holds. We doubt about Prelim (3) as well.\n\nWe find that the statement made by the Researcher in prelim 3 does not seem to be qualitatively correct. For instance, it is stated that \"This is because all operations involved in $h$ and $F$ are very similar when evaluated in the forward and backward direction.\" However, it can be seen that ||h'|| is the norm of the kronecker delta, drastically different when compared to ||h||. Further, in F', the non-linearity layers have inherently different shapes - for sigmoid and tanh, the forward pass is only zero at the origin, whereas in the backward pass, the gradient of sigmoid and tanh are essentially zero in the flat region of sigmoid and tanh away from the origin. Similarly for ReLU, the gradient is the step function, and only has similar behaviors near values h = 1. F and F' behaves differently on a qualitative level. 
\n\nTherefore, it does not appear that this is the correct explanation ||F'||/||h'|| has a similar value to ||F||/||h||, as F and F' behave drastically differently in backward and forward passes. We think that this is overreaching on a scale larger than us ignoring the second term in the gradient decoupling section even if ||F'|| is of order O(0.1).\n\nIn addition, our discussion on gradient decoupling in Section 2 is just to provide an analysis on ResNet. In WarpNet, we did not use gradient decoupling in the backward pass as shown in Section 3.2. To prevent the readers from thinking that we used gradient decoupling in the backward pass of WarpNet, we think we should drop off the paragraphs related to gradient decoupling in Section 2 (which only pertains to ResNet).\n\nSince the Researcher's Prelims (2) and (3) are only related to our discussions on gradient decoupling, this treatment resolves our dispute in this regard.\n\nRegarding the Researcher's comment “the experiments are still insufficient”, we would like to emphasize that in the revision we provided new experimental results from over 110 runs. The new experiments were done with different parameter settings on 3 data sets. We also compared with data parallelism. We are surprised to discover that the Researcher insists that the experiments are insufficient. We worked extremely diligently in the past two months to obtain these new results, and our results confirmed that WarpNet is an effective and much faster alternative to ResNet. We hope our innovative contribution can be properly recognized!\n\nRegarding the Researcher's comment \"not everyone has the resources to parallelize training\", now it is quite common for deep learning researchers and AI companies to have or have access to GPU servers with multiple GPUs. Companies that provide GPU cloud services (such as Google, IBM, Amazon, Oracle and Nvidia) have hundreds of GPUs in their cloud. We believe our proposed WarpNet will be very useful to researchers and practitioners who look for fast deep learning solutions.", "Dear Researcher,\n\nWe think that the debate between the Researcher and us rises partially from a misunderstanding from the Researcher's part on when one can apply the Taylor expansion.\n\nThe Taylor expansion can only be applied to provide a LOCAL polynomial type of approximation of a function value at a relatively small neighborhood of a given point which does not lie on any boundary. Mathematically speaking, a Taylor expansion should only be done in a small neighborhood of a given x, say (x-e, x+e), where e is a small numerical term.\n\nMany of the Researcher's arguments are based on a constructed example in Prelim 1) which tried to demonstrate the inaccuracy of the Taylor expansion. However, that example is irrelevant since a Taylor expansion should NOT be applied in the first place. In particular, e is much bigger than x so it violates the condition of using Taylor expansion. It is akin to forcing a short term weather forecast model to do a long term prediction.\n\nIn our framework, all conditions of applying the Taylor expansion have been checked carefully. In our case of F2(x1+F1), x = x1 and e = F1. 
It is clear that F1 < x1.\n\nWe find that the Taylor expansion is not only suitable but also brings very significant computational advantages as explained in the next paragraph.\n\nThe WarpNet (from the Taylor expansion) provides a novel way of training ResNets in parallel: model parallelism in which different sets of weights are trained in parallel (which we believe is attractive to the industry). It can be trained faster compared to mini-batch parallelization of ResNet (Section 4.3), and allows a larger network to be trained (for which ResNet fails to train due to GPU memory limitation, see Section 4.2) as WarpNet splits the weight storage into different GPUs. This offers significant advantages since GPU memory is often quite limited. Our experimental results also show that the predictive performance of WarpNet and that of the corresponding ResNet are very similar owing to the good Taylor expansion. Further, we find that the validation error curve of WarpNet can lie on top of each other with that of ResNet throughout training (Figure 2).\n\nAfter carefully reading the Researcher's responses, it appears that some of the disputes arise from different definitions of \"high order terms\". For instance, in Part [2/5], the Researcher mentions that $O(e^2)$ terms dominate, and that \"While it is true that individual higher-order terms are smaller, this is more than made up by their greater frequency.\" in an earlier response. We conclude that the \"high order terms\" the Researcher refer to is not the same as our \"high order terms\".\n\nTo be more specific, the Researcher appears to refer to the binomial number of terms in powers of $F'$ by multiplying out the gradient $(h'_1 + F'_1)(h'_2 + F'_2)..$ in a ResNet. Following our convention in the paper, we denote these as binomial terms. However, our \"high order terms\" correspond to the order of derivative terms in the Taylor expansion. The binomial terms and the Taylor series terms are completely different. In our analytical analysis we included -all- binomial terms, which the Researcher refers to as \"high order terms\". The Taylor series is truncated by the ReLU to first order. However, in Appendix A we have shown that using multiple iterations of Taylor expansions layer by layer results in a binomial number of first order terms, where each binomial term consists of a product of $k-1$ $F'$. Nowhere in the analytical analysis we have neglected first order Taylor series terms of power larger than 1 in $F'$. We believe that this resolves the discrepancy between our results and those stated by the Researcher.", "+++ Rebutting the authors first rebuttal (Title \"No fatal mathematical flaws in our analysis\") \n\nAuthor: While it is true that $||F||/||h|| \\sim 1/\\sqrt{i}$, the gradient norms $||F'||$ are typically of order $10^{-6}$ and smaller, see Figure 6(b) in Veit.\n\nResponse: As I explained in Prelim 2 and 3, the gradient norms are of the same order as the forward norms, and figure 6(b) of Veit does not mean what you think it means.\n\nAuthor: Actually, the value of $||F||/||h||$ has nothing to do with the validity of using the first order Taylor series to expand F.\n\nResponse: In prelim 1, I showed how the Taylor expansion can be highly inaccurate. In your paper, you expand functions around $h$ with perturbation $F$. So the larger the perturbation, the greater the inaccuracy of the Taylor expansion tends to be. 
Therefore the value of $||F||/||h||$ matters.\n\nAuthor: we do not need small a $||F||/||h||$ to perform the Taylor expansion on $F$\n\nResponse: As mentioned above, if the expansion is to be accurate and thus meaningful, the perturbation must be small.\n\nAuthor: Regrettably, we are unable to find any form of series expansion in Veit's paper.\n\nResponse: By ``expansion'' I simply mean multiplying out the gradient $(h'_1 + F'_1)(h'_2 + F'_2)..$ into its $2^L$ components.\n\nAuthor: The $O(e^2)$ term does not dominate with ReLU non-linearities. Veit's paper confirms our analytical results.\n\nResponse: The $O(e^2)$ term does dominate, and this is the crucial point. In Prelim 1, I showed how the first-order Taylor expansion can be highly inaccurate even if the function is piecewise linear. I used a ReLU-like function $\\max(100x,0)$ as an example. Now, of course the inaccuracy for a regular ReLU, i.e. $\\max(x,0)$ is going to be less. However, if you expand the second layer with respect to the first, and then the third layer with respect to that, and so forth, eventually those inaccuracies will compound and eventually dominate the approximation. \n\nI have no idea why you think Veit confirms the result you claim. The Veit paper did not even consider the Taylor expansion. The only expansion it contains is the one I referred to above, the multiplying out of the gradient. However this ``expansion'' is exact, because multiplying out is exact, as opposed to Taylor.", "+++ Rebutting the authors second rebuttal (Title ``No fatal mathematical flaws in our analysis (part 2'') \n\nAuthor: The approximation is almost exact with ReLU as both Veit's and our work together have shown, up to a path length of 25. We have performed \"parallelization\" over 3 layers in various settings in the past few weeks after obtaining access to servers with 4GPUs which allowed us to scale up our experiments. We found that the predictive accuracy remains similar to that of a ResNet with the same number of parameters, and the speedup of WarpNet over ResNet is increased when K=3 compared to K=2. ReLUs are used in our experiments.\n\nResponse: The approximation is not exact, as explained above. Also, the predictive accuracy has nothing to do with whether the approximation is exact (as I explain in Part B below).\n\nAuthor: The first order approximation is actually almost exact for the actual ResNet if ReLUs are used. We have analytically shown that the first order Taylor series expansion agrees with Veit's Figure 6(a) for the path length distribution, and 6(b) for the gradient scaling up to path lengths of about 25. Both Veit and we employed ReLUs. The validity of the linear approximation extends beyond the effective path length (Fig 6(c) in Veit) which is at most 20. This confirms that the first order series is almost exact.\n\nResponse: Figure 6 of Veit has absolutely nothing to do with the accuracy of the Taylor expansion, neither does the superficial similarity of the Taylor expansion and gradient ``expansion'' have anything to do with the accuracy.\n\nAuthor: We guess that the confusion might be caused by our infrequent mentioning that ReLU is used in our analysis, and we will state that we have used ReLU non-linearities more often in our paper.\n\nResponse: My criticisms have nothing to do with the fact that ReLU's were used. My criticisms are valid for all popular ResNet architectures.\n\nAuthor: There are two (not only once) appearances of $10^{-6}$ in Veit's paper. We took the $10^{-6}$ number from Figure 6(b). 
There it shows that the gradient norm descents from about $10^{-6}$ down to $10^{-25}$ at path length 25. Another appearance of $10^{-6}$ just before section 4.3 in Veit et al, we did not use that $10^{-6}$ there. \n\nResponse: See prelim 2.\n\nAuthor: Only a single point (the origin) in ReLU is non-differentiable. In any other region the first order term approximation is exact. As discussed before our work shows the first order approximation is almost exact with experimental validation from Veit et al.\n\nResponse: As I have shown in Prelim 1, even if the function is linear almost everywhere, the Taylor expansion is not necessarily exact.", "+++ Rebutting the new version of the paper\n\nPaper: The results imply that the output of a residual unit is just a small perturbation of the input.\n\nResponse: The authors have themselves acknowledged that $\\frac{||F_i||}{||h_i||} \\sim \\frac{1}{\\sqrt{i}}$, so this statement is false.\n\nPaper: We find that merely the first term in the series expansion is sufficient to explain the binomial distribution of path lengths and exponential gradient scaling experimentally observed by Veit et al. (2016)\n\nResponse: This statement makes no sense. The authors study the Taylor expansion, Veit studies the multiplied-out gradient. Those are different things.\n\nPaper: The approximation allows us to effectively estimate the output of subsequent layers using just the input of the first layer\n\nResponse: The estimate is inaccurate to the point of being meaningless.\n\nPaper: Below we show that the second and higher order terms are negligible, that is, the first order Taylor series expansion is almost exact, when ReLU activations are used. The second order perturbation terms all contain the Hessian $F''(x)$. \n\nResponse: The fact that $F''$ is zero does not help. No matter how far you expand the Taylor series, you will always have an $O(.)$ residue, and that residue, in the case of ReLU will not shrink no matter how far you expand, and will lead to fatal inaccuracy.\n\n(3) is false because higher-order terms are not negligible.\n\nPaper: ``If one takes F to have ReLU non-\nlinearity, then $F''(x, W) = 0$ except at the origin. The non-trivial gradient can be expressed almost\nexactly as $K(F'(x, W))k$. This validates the numerical results that the gradient norm decreases k exponentially with subnetwork depth as reported in (Veit et al., 2016).''\n\nResponse: This statement makes no sense, because the multiplied-out gradient does not contain $F''$ to begin with, so it is irrelevant that $F''$ vanishes.\n\nPaper: $\\partial_{W_1} F_1 \\approx 0$. \n\nResponse: This is wrong on two levels. First, the authors seem to pre-suppose that $\\frac{\\partial x_3}{\\partial W_1}$ is zero. This is not true. Even at a local minimum, only the derivative of the error with respect to $W_1$ is zero, but the derivative of $x_3$ with respect to $W_1$ is not necessarily zero. Secondly, $||F'_2|| \\approx 10^{-6}$ is false as explained above.\n\nPaper: the local minima is independent of parameters in subsequent residual units.\n\nResponse: The basic nature of deep networks is that parameters in each layer co-adapt to what parameters in other layers have learnt. 
This statement, if it were true, would contradict this basic nature.\n\nThe equation pertaining to the gradient of the Taylor expansion at the top of page 4 is inaccurate because the Taylor expansion is inaccurate.", "%##################\n++++++ Part (B): validity and merit of WarpNet\n%##################\n\nAs the authors pointed out, the statements made in sections 1 and 2 do not necessarily affect the merit of WarpNet. This is because the questions ``Does WarpNet approximate ResNet?'' and ``Can WarpNet learn successfully?'' are two different questions. We can view WarpNet simply as a model without considering its relation to ResNet and then consider its merit as a model.\n\nUnfortunately, the paper is also lacking in this area. The only postulated advantage of WarpNet over ResNet is the speedup obtained when parallelized. Firstly, not everyone has the resources to parallelize training. Secondly, in many cases those other GPUs are needed to evaluate other hyperparameter configurations. So the benefits of WarpNet are limited.\n\nFurthermore, WarpNet is likely over-complicated. Consider the WarpNet $x + F_1 + F_2 + F_2'F_1$. Why not use $x + F_1 + F_2 + F_2F_1$ instead? Or $x + F_1 + F_2$? I see no reason for the use of derivatives in the forward pass. Simpler models not using derivatives in the forward pass need to be shown to be inferior to WarpNet in order for WarpNet to have merit.", "Motivated via Talor approximation of the Residual network on a local minima, this paper proposed a warp operator that can replace a block of a consecutive number of residual layers. While having the same number of parameters as the original residual network, the new operator has the property that the computation can be parallelized. As demonstrated in the paper, this improves the training time with multi-GPU parallelization, while maintaining similar performance on CIFAR-10 and CIFAR-100.\n\nOne thing that is currently not very clear to me is about the rotational symmetry. The paper mentioned rotated filters, but continue to talk about the rotation in the sense of an orthogonal matrix applying to the weight matrix of a convolution layer. The rotation of the filters (as 2D images or images with depth) seem to be quite different from \"rotating\" a general N-dim vectors in an abstract Euclidean space. It would be helpful to make the description here more explicit and clear.", "Paper proposes a shallow model for approximating stacks of Resnet layers, based on mathematical approximations to the Resnet equations and experimental insights, and uses this technique to train Resnet-like models in half the time on CIFAR-10 and CIFAR-100. While the experiments are not particularly impressive, I liked the originality of this paper. ", "The main contribution of this paper is a particular Taylor expansion of the outputs of a ResNet which is shown to be exact at almost all points in the input space. This expression is used to develop a new layer called a “warp layer” which essentially tries to compute several layers of the residual network using the Taylor expansion expression — however in this expression, things can be done in parallel, and interestingly, the authors show that the gradients also decouple when the (ResNet) model is close to a local minimum in a certain sense, which may motivate the decoupling of layers to begin with. 
Finally the authors stack these warp layers to create a “warped resnet” which they show does about as well as an ordinary ResNet but has better parallelization properties.\n\nTo me the analytical parts of the paper are the most interesting, particularly in showing how the gradients approximately decouple. However there are several weaknesses to the paper (or maybe just things I didn’t understand). First, a major part of the paper tries to make the case that there is a symmetry breaking property of the proposed model, which I am afraid I simply was not able to follow. Some of the notation is confusing here — for example, presumably the rotations refer to image level rotations rather than literally multiplying the inputs by an orthogonal matrix, which the notation suggests to be the case. It is also never precisely spelled out what the final theoretical guarantee is (preferably the authors would do this in the form of a proposition or theorem).\n\nThroughout, the authors write out equations as if the weights in all layers are equal, but this is confusing even if the authors say that this is what they are doing, since their explanation is not very clear. The confusion is particularly acute in places where derivatives are taken, because the derivatives continue to be taken as if the weights were untied, but then written as if they happened to be the same.\n\nFinally the experimental results are okay but perhaps a bit preliminary. I have a few recommendations here:\n* It would be stronger to evaluate results on a larger dataset like ILSVRC. \n* The relative speed-up of WarpNet compared to ResNet needs to be better explained — the authors break the computation of the WarpNet onto two GPUs, but it’s not clear if they do this for the (vanilla) ResNet as well. In batch mode, the easiest way to parallelize is to have each GPU evaluate half the batch. Even in a streaming mode where images need to be evaluated one by one, there are ways to pipeline execution of the residual blocks, and I do not see any discussion of these alternatives in the paper.\n* In the experimental results, K is set to be 2, and the authors only mention in passing that they have tried larger K in the conclusion. It would be good to have a more thorough experimental evaluation of the trade-offs of setting K to be higher values.\n\nA few remaining questions for the authors:\n* There is a parallel submission (presumably by different authors called “Residual Connections Encourage Iterative Inference”) which contains some related insights. I wonder what are the differences between the two Taylor expansions, and whether the insights of this paper could be used to help the other paper and vice versa?\n* On implementation - the authors mention using Tensorflow’s auto-differentiation. My question here is — are gradients being re-used intelligently as suggested in Section 3.1? \n* I notice that the analysis about the vanishing Hessian could be applied to most of the popular neural network architectures available now. How much of the ideas offered in this paper would then generalize to non-resnet settings?\n\n", "Dear Researcher,\n\nThank you for your response. In the revised version of the paper we have addressed your new questions as follows\n\n- What is the exact form of equation (3) when the subscripts of F and W are added back in? 
Please provide a formula I can copy-paste into latex\n\nThe equation (3) in page 3 in the revised paper now sums over all relevant subscripts of F and W.\n\n- What is the exact formula of WarpNet if we didn't approximate groups of 2 layers, but arbitrarily large groups of layers? latex-formula and / or Pseudo-code would be great.\n\nEquation (3) now is also valid for arbitrarily large group of layers. Also in Appendix A we spell out the formula for K=3. See Equation (9).\n\n- What is your argument / evidence for the first-order approximation being exact for a ReLU network?\n\nThe argument is that when ReLU is used, the Hessian F''(x) vanishes almost exactly as ReLU is a piecewise linear function. This is given in the first paragraph of page 3. We also added experimental evidence in Figure 2, where it shows the validation curves of the wide residual network (blue solid) and its WarpNet (green dashed) approximation lie almost exactly on top of each other.", "Dear AnonReviewer1,\n\n Thank you for your comments. We are sorry about the confusion in our discussion. We decided to remove the discussion on symmetry breaking and use the space to present a much more extensive experimental study of WarpNet on CIFAR-10 and CIFAR-100, an analysis on ImageNet, a scale-up of the warp factor from K=2 to K=3, and a comparison to data parallelism. In addition, we clarified the notations in derivatives and added a theorem and its proof (in Appendix). We have posted the new version of the paper. We do feel that the paper is much stronger than before, thanks to the reviewers’ suggestions.\"", "Dear AnonReviewer3,\n\nThank you for your support! In this revision we have included extensive experimental results on CIFAR-10, CIFAR-100, and ImageNet data sets using various settings for WarpNet, including increasing the warp factor from K=2 to K=3. We also compared our parallelization with data parallelization using mini-batches for ResNets. Please see the new experimental results in Section 4.  ", "*It is also never precisely spelled out what the final theoretical guarantee is (preferably the authors would do this in the form of a proposition or theorem).\n\nWe have made our statements much clearer with Theorem 1 in Section 2, which shows the almost exact formula (without the epsilon^2 terms using ReLU). We provide two proofs of this theorem in appendix A. \n\nThe first is a brute force approach, where we show that the number of terms with the same powers in F and F' after consecutive Taylor expansions satisfies the same recursion relation as the binomial coefficients. The also show explicitly that the number of terms with the same power in F and F' for x3 and x4 are binomial coefficients. Then, by induction, it follows that the number of terms of power k across K residual units is the binomial coefficient (K, k).\n\nThe second proof starts by noting that each iteration of the Taylor series expansion can be described as a Bernoulli process with parameters K and p=0.5. It follows that any term in the Taylor series expansion is a realization of the Bernoulli process. The underlying Bernoulli variables correspond to whether a term gets an addition power of F' in the Taylor series expansion. Then the total number of terms with power k is the binomial coefficient (K,k).\n\n* There is a parallel submission (presumably by different authors called “Residual Connections Encourage Iterative Inference”) which contains some related insights. 
I wonder what are the differences between the two Taylor expansions, and whether the insights of this paper could be used to help the other paper and vice versa?\n\nThanks for bringing this up. There are indeed differences in the two Taylor series expansions. We expand the outputs of residual units across layers where they expand the loss function. The two Taylor series behaves completely differently. Our Taylor expansion has almost vanishing second and higher order terms with ReLU non-linearity. However this does not appear to be guaranteed in their expansion.\n\n* On implementation - the authors mention using Tensorflow’s auto-differentiation. My question here is — are gradients being re-used intelligently as suggested in Section 3.1? \n\nOur experiments did not intelligently re-use the gradients as mentioned. It is a theoretical possibility that deserves to be looked at for possibly further speeding up WarpNet in a future investigation. We added the mentioning of this in the second paragraph of Section 4. \n\n* I notice that the analysis about the vanishing Hessian could be applied to most of the popular neural network architectures available now. How much of the ideas offered in this paper would then generalize to non-resnet settings?\n\nAt least analytically, if ReLU is used, we can Taylor expand and just keep the first order terms in the series expansion. For ResNets, the Taylor expansion parameter is F. In general, however, the Taylor expansion parameter may be something else. \n", "Dear AnonReviewer2,\n\nThank you for your constructive comments. We have revised the paper accordingly, considering all your suggestions. In particular, we conducted a much more extensive experimental study of WarpNet, clarified the notations in derivatives and added a theorem and its proof. Below we describe how we addressed each of your points:\n\n*Throughout, the authors write out equations as if the weights in all layers are equal, but this is confusing even if the authors say that this is (not) what they are doing, since their explanation is not very clear. The confusion is particularly acute in places where derivatives are taken, because the derivatives continue to be taken as if the weights were untied, but then written as if they happened to be the same.\n\nWe have added a general formula for the first order Taylor series for all K in Equation (3) to clarify how the equation should be read. The exponential gradient scaling result can then be derived by differentiating this expression with respect to x. The only non vanishing term in the differentiation comes from the right-most factor, F, since all F'' = 0 almost exactly when ReLU is used. Then setting all weights to be equal results in the binomial coefficient and (F') to the power of k. We have also clarified the derivation of the formula in the gradient decoupling paragraph. In addition, we have provided a proof of binomial path lengths in the appendix and we hope that this will clarify the presentation of this paper.\n\n* It would be stronger to evaluate results on a larger dataset like ILSVRC.\n\nWe have added a subsection (4.2) that compares WarpNet with ResNet on ImageNet on a few different settings. 
Using the ImageNet data set, we also illustrates the \"almost-exactness\" of the first order Taylor series in Figure 2, where the validation curves of the WRN-73-2 and it's approximation WarpNet-73-2 (K=2) approximation lie almost on top of each other during training.\n\n* The relative speed-up of WarpNet compared to ResNet needs to be better explained — the authors break the computation of the WarpNet onto two GPUs, but it’s not clear if they do this for the (vanilla) ResNet as well. In batch mode, the easiest way to parallelize is to have each GPU evaluate half the batch. Even in a streaming mode where images need to be evaluated one by one, there are ways to pipeline execution of the residual blocks, and I do not see any discussion of these alternatives in the paper.\n\nWe have performed experiments on ResNet with data parallelization using mini-batches. The results and discussion that compare WarpNet to ResNet with data parallelism are shown in Section 4.3. \n\n* In the experimental results, K is set to be 2, and the authors only mention in passing that they have tried larger K in the conclusion. It would be good to have a more thorough experimental evaluation of the trade-offs of setting K to be higher values.\n\nWe have performed experiments with K=3. It is a much more interesting case than K=2, as there are 8 terms in the K=3 case and we only have 4 GPUs available. We made further approximations to the Taylor series by simply omitting terms in the Warp operator. Although in this case, WarpNet is not an exact first-order approximation of the ResNet. Results for K=3 are added in Section 4.1 (Table 3). \n\n* A major part of the paper tries to make the case that there is a symmetry breaking property of the proposed model, which I am afraid I simply was not able to follow. Some of the notation is confusing here — for example, presumably the rotations refer to image level rotations rather than literally multiplying the inputs by an orthogonal matrix, which the notation suggests to be the case. \n\nSorry about the confusion. We decided to remove the discussion on symmetry breaking and use the space to show more extensive experimental results to demonstrate the effectiveness of WarpNet.\n\n", "Dear authors,\n\nThank you for your response to my comment. I apologize for the delay in getting back. I was revising my own paper and integrating reviewer requests. My response time should be much lower going forward.\n\nBefore I respond to your latest comments, please answer the following three questions below. I want to exclude the possibility of a misunderstanding.\n\n- What is the exact form of equation (3) when the subscripts of F and W are added back in? Please provide a formula I can copy-paste into latex\n- What is the exact formula of WarpNet if we didn't approximate groups of 2 layers, but arbitrarily large groups of layers? latex-formula and / or Pseudo-code would be great.\n- What is your argument / evidence for the first-order approximation being exact for a ReLU network?\n\nThanks,", "I am continuing the discussion under the \"Fatal mathematical flaw and weak experiments (1/2)\" thread.", "\"*No, WarpNet only works because it \"parallelizes\" only pairs of layers. If all layers were parallelized in this way, the network would perform very badly. 
Parallelizing pairs of layers is a relatively mild approximation and so it does not lead to catastrophic results because of the inherent robustness of neural networks.\"\n\nThe approximation is almost exact with ReLU as both Veit's and our work together have shown, up to a path length of 25. We have performed \"parallelization\" over 3 layers in various settings in the past few weeks after obtaining access to servers with 4GPUs which allowed us to scale up our experiments. We found that the predictive accuracy remains similar to that of a ResNet with the same number of parameters, and the speedup of WarpNet over ResNet is increased when K=3 compared to K=2. ReLUs are used in our experiments.\n\n\"*No! In typical architectures, the first-order approximation of a deep ResNet is not close at all to the actual ResNet. This can be verified easily by simply computing that approximation directly. You will find that outputs are very different and the error of the approximation, while probably better than chance, will be nowhere near the original trained network.\"\n\nThe first order approximation is actually almost exact for the actual ResNet if ReLUs are used. We have analytically shown that the first order Taylor series expansion agrees with Veit's Figure 6(a) for the path length distribution, and 6(b) for the gradient scaling up to path lengths of about 25. Both Veit and we employed ReLUs. The validity of the linear approximation extends beyond the effective path length (Fig 6(c) in Veit) which is at most 20. This confirms that the first order series is almost exact.\n\nWe guess that the confusion might be caused by our infrequent mentioning that ReLU is used in our analysis, and we will state that we have used ReLU non-linearities more often in our paper.\n\n\"*No! The norm of the Jacobian of the residual path is ~ 1 / sqrt(i) in the typical architecture defined above if we ignore terms pertaining to network width. It is certainly nowhere near 10^-6. The Veit paper uses the value 10^-6 only once, but in a completely different context that this author claims. This shows that the authors of this paper have not understood the Veit paper.\"\n\nThere are two (not only once) appearances of 10^-6 in Veit's paper. We took the 10^-6 number from Figure 6(b). There it shows that the gradient norm descents from about 10^-6 down to 10^-25 at path length 25. Another appearance of 10^-6 just before section 4.3 in Veit et al, we did not use that 10^-6 there. \n\n\"*The fact that ReLU has no second derivative almost surely does not mean that the first order theory is exact. It is only exact in a region around each point that is enclosed by a non-differentiable boundary. But that region is much, much smaller than the size of the residual function and therefore does not apply to the author's analysis.\"\n\nOnly a single point (the origin) in ReLU is non-differentiable. In any other region the first order term approximation is exact. As discussed before our work shows the first order approximation is almost exact with experimental validation from Veit et al.\n\n\"*'This is an important result, which suggests that to the first non-vanishing order in gradient norm, the local minima is independent of parameters in subsequent residual units.'This is not true, because the O(\\epsilon) assumption is not true.\"\n\nAs we discussed before, the second order terms vanish because of the ReLU non-linearity we used. 
The O(\\epsilon) number taken by the researcher is used for another purpose in another section of the paper", "Dear Anonymous Researcher,\n\nThank you for your attention to our paper. We take all your points seriously, and do not find that our paper has fatal mathematical flaws. Rather we maintain that WarpNet is a sound and effective approximation of ResNet when ReLU is used. In addition, we have performed experiments for skipping over two layers in the past few weeks after we submitted the paper, and we found further speed up than just skipping one layer while maintaining similar predictive accuracy. We will add the new results into the paper before the revision deadline. Below we address your points one by one.\n\n\"*Therefore, one of the central claims of the paper, which is that F_i and F_i' are very small, is false and this undermines the analysis presented.\"\n\nWhile it is true that ||F|/||h|| ~ 1/sqrt(i), the gradient norms ||F'|| are typically of order 10^-6 and smaller, see Figure 6(b) in Veit. Actually, the value of ||F||/||h|| has nothing to do with the validity of using the first order Taylor series to expand F (and thus the validity of the WarpNet). The purpose of the paragraph in question is an attempt to explain some observations about ResNet found in the literature. That paragraph in question only pertains to ResNet and we do not need small a ||F||/||h|| to perform the Taylor expansion on F. Therefore the subsequent sections and the validity of WarpNet are unaffected.\n\nThat being said, we thank the researcher for pointing out ||F||/||h|| ~ 1/sqrt(i). We agree with you that we should not say F is very small. We will remove this confusing and irrelevant paragraph and it will certainly make our paper easier to read. It actually better motivates the expansion of F if F is not very small.\n\n\"*(3) is not meaningful. In practical ResNets, the O(e^2) term would dominate. While it is true that individual higher-order terms are smaller, this is more than made up by their greater frequency. In the Veit et al paper that this paper cites repeatedly, it is clearly demonstrated that in a typical ResNet, terms of a certain order dominate the gradient expansion (and therefore the Taylor expansion) and that this order is significantly greater than 1.\"\n\nRegrettably, we are unable to find any form of series expansion in Veit's paper. We would appreciate it if the researcher would kindly point us to the location in Veit et al that they refer to as the \"gradient expansion\". The O(e^2) term does not dominate with ReLU non-linearities. Veit's paper confirms our analytical results.", "\"Therefore, a the first order term in the perturbation series is sufficient to give a good approximation across layers in ResNet.\"\n\nNo! In typical architectures, the first-order approximation of a deep ResNet is not close at all to the actual ResNet. This can be verified easily by simply computing that approximation directly. You will find that outputs are very different and the error of the approximation, while probably better than chance, will be nowhere near the original trained network.\n\n\"This is a good approximation, as the expected gradient norm across one residual unit was shown to be of order 10^-6\"\n\nNo! The norm of the Jacobian of the residual path is ~ 1 / sqrt(i) in the typical architecture defined above if we ignore terms pertaining to network width. It is certainly nowhere near 10^-6. 
The Veit paper uses the value 10^-6 only once, but in a completely different context that this author claims. This shows that the authors of this paper have not understood the Veit paper.\n\n\"Therefore all second order perturbations vanish with probability 1. The same argument applies to higher orders. The first order perturbation theory is almost surely an exact theory for ResNets with ReLU non-linearity.\"\n\nThe fact that ReLU has no second derivative almost surely does not mean that the first order theory is exact. It is only exact in a region around each point that is enclosed by a non-differentiable boundary. But that region is much, much smaller than the size of the residual function and therefore does not apply to the author's analysis. \n\n\"This is an important result, which suggests that to the first non-vanishing order in gradient norm, the local minima is independent of parameters in subsequent residual units.\"\n\nThis is not true, because the O(\\epsilon) assumption is not true.\n\n\"In principle, all layers in Resnet can be skipped and trained in parallel with the warp operator in WarpNet. As a proof of concept, we implemented a WarpNet that skips every other layer and we were able to obtain speedup.\"\n\nNo, WarpNet only works because it \"parallelizes\" only pairs of layers. If all layers were parallelized in this way, the network would perform very badly. Parallelizing pairs of layers is a relatively mild approximation and so it does not lead to catastrophic results because of the inherent robustness of neural networks.\n\nSummary\n\nThis paper makes a number of incorrect statements. If it were accepted, it would greatly confuse readers that do not have a deep understanding of ResNet and would cause net damage to the community. This paper provides few experiments that show a mild improvement that is nowhere near sufficient to carry the paper by itself.\n\nBeyond the issue discussed, this paper has several other significant weaknesses which I won't go into because I'm not an official reviewer and my time is limited. For questions / criticisms of this comment, please respond below. If the area chairs want to discuss directly, my identity is equal to that of AnonReviewer2 of the paper \"Tandem Blocks in Deep Convolutional Neural Networks\".\n\nConfidence: 5 : This reviewer is absolutely certain.", "Rating: 2/10\n\nWhile the architecture presented (WarpNet) has slight performance improvements over comparable non-WarpNets in a few experiments, the paper is filled with incorrect statements which make it a clear reject.\n\nFirst, let's look at a fairly typical residual network. Let x_{i+1} = h_i(x_i) + F_i(x_i) where h_i is the identity and F_i(x_i) = Conv(ReLU(BN(x_i))). Assume that BN / Conv do not have trainable bias and variance parameter or, equivalently, the bias parameters are equal to 0 and the variance parameters are equal to 1, which they usually are in their initialized state. If the convolution is He-initialized and the input is normalized, then it is easy to check that ||h_i(x_i) || / ||F_i(x_i)|| ~ sqrt(i). This relationship does not change greatly throughout training. If we assume that the network has at most 100 residual blocks, which is often true in practice, the value of this ratio does not exceed 10. The same holds if F_i(x_i) = Conv(ReLU(BN(Conv(ReLU(BN(x_i)))))), another popular choice. 
Therefore, one of the central claims of the paper, which is that F_i and F_i' are very small, is false and this undermines the analysis presented.\n\nExamples:\n\nThe paper states: \"We now show that h must be close to the identity, up to O(\\epsilon) << 1, when the output of a residual unit is similar to the input\"\n\nHowever we saw above that the output of a residual unit is usually at least 10% different from the input, so the assumption that the output of a residual unit is similar to its input up to O(\\epsilon) << 1, is almost always false.\n\n\"F_i(x_i,W*_i) ~ O(e) from empirical observations such as those by Greff et al. (2016)\"\n\nWe usually have ||F_i(x_i,W*_i)|| / ||x_i|| ~ sqrt(i), therefore F_i(x_i,W*_i) ~ O(e) usually does not hold. Also Greff et al did not observe F_i(x_i,W*_i) ~ O(e).\n\n(3) is not meaningful. In practical ResNets, the O(e^2) term would dominate. While it is true that individual higher-order terms are smaller, this is more than made up by their greater frequency. In the Veit et al paper that this paper cites repeatedly, it is clearly demonstrated that in a typical ResNet, terms of a certain order dominate the gradient expansion (and therefore the Taylor expansion) and that this order is significantly greater than 1.\n\n\n(comment continued below due to character limit.)" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "rycPJEAVM", "Sy_NM8aNG", "HJ6XbgpEG", "B1DKKRcmf", "HyzlPHYNG", "rkDRpk_4M", "BJAGxsHNf", "B1DKKRcmf", "B1DKKRcmf", "B1DKKRcmf", "B1DKKRcmf", "iclr_2018_SyMvJrdaW", "iclr_2018_SyMvJrdaW", "iclr_2018_SyMvJrdaW", "BJbpaOlGM", "ryCv5QFgz", "r1NvXZ9ez", "BkrnPR5mf", "S1wxhnsef", "SJ9OwtPWM", "HymI_KDbM", "BJXpGRVWf", "BkcxXAN-f", "iclr_2018_SyMvJrdaW", "iclr_2018_SyMvJrdaW" ]
iclr_2018_HktRlUlAZ
Polar Transformer Networks
Convolutional neural networks (CNNs) are inherently equivariant to translation. Efforts to embed other forms of equivariance have concentrated solely on rotation. We expand the notion of equivariance in CNNs through the Polar Transformer Network (PTN). PTN combines ideas from the Spatial Transformer Network (STN) and canonical coordinate representations. The result is a network invariant to translation and equivariant to both rotation and scale. PTN is trained end-to-end and composed of three distinct stages: a polar origin predictor, the newly introduced polar transformer module and a classifier. PTN achieves state-of-the-art on rotated MNIST and the newly introduced SIM2MNIST dataset, an MNIST variation obtained by adding clutter and perturbing digits with translation, rotation and scaling. The ideas of PTN are extensible to 3D which we demonstrate through the Cylindrical Transformer Network.
accepted-poster-papers
The paper proposes a new deep architecture based on a polar transformation for improving rotational invariance. The proposed method is interesting, and the experiments show strong classification performance on small/medium-scale datasets (e.g., rotated MNIST and its variants with added translations and clutter, ModelNet40, etc.). It would be more impressive and impactful if the proposed method could bring performance improvements on large-scale, real datasets with potentially cluttered scenes (e.g., ImageNet, Pascal VOC, MS-COCO, etc.).
train
[ "ryVw7PIVG", "rkMG9c_gf", "r1XT6wdeG", "B1aLPb5eM", "BJxiU46XG", "S15H7uNMz", "BJzqfu4fM", "H11Me_VGM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "I think my initial review score was a bit low. There is certainly still a lot of residual uncertainty about whether the method in its current state would work well on more serious vision problems, but:\n1) The method is conceptually novel and innovative\n2) I can see a plausible path towards real-world usage. This may require some further ideas for how to deal with multiple objects, how to learn where to focus, etc., but this paper doesn't have to solve everything at once.\nSo I recommend the paper for acceptance.", "This paper presents a new convolutional network architecture that is invariant to global translations and equivariant to rotations and scaling. The method is combination of a spatial transformer module that predicts a focal point, around which a log-polar transform is performed. The resulting log-polar image is analyzed by a conventional CNN.\n\nI find the basic idea quite compelling. Although this is not mentioned in the article, the proposed approach is quite similar to human vision in that people choose where to focus their eyes, and have an approximately log-polar sampling grid in the retina. Furthermore, dealing well with variations in scale is a long-standing and difficult problem in computer vision, and using a log-spaced sampling grid seems like a sensible approach to deal with it.\n\nOne fundamental limitation of the proposed approach is that although it is invariant to global translations, it does not have the built-in equivariance to local translations that a ConvNet has. Although we do not have data on this, I would guess that for more complex datasets like imagenet / ms coco, where a lot of variation can be reasonably well modelled by diffeomorphisms, this will result in degraded performance.\n\nThe use of the heatmap centroid as the prediction for the focal point is potentially problematic as well. It would not work if the heatmap is multimodal, e.g. when there are multiple instances in the same image or when there is a lot of clutter.\n\nThere is a minor conceptual confusion on page 4, where it is written that \"Group-convolution requires integrability over a group and identification of the appropriate measure dg. We ignore this detail as implementation requires application of the sum instead of integral.\"\nWhen approximating an integral by a sum, one should generally use quadrature weights that depend on the measure, so the measure cannot be ignored. Fortunately, in the chosen parameterization, the Haar measure is equal to the standard Lebesque measure, and so when using equally-spaced sampling points in this parameterization, the quadrature weights should be one. (Please double-check this - I'm only expressing my mathematical intuition but have not actually proven this).\n\nIt does not make sense to say that \"The above convolution requires computation of the orbit which is feasible with respect to the finite rotation group, but not for general rotation-dilations\", and then proceed to do exactly that (in canonical coordinates). Since the rotation-dilation group is 2D, just like the 2D translation group used in ConvNets, this is entirely feasible. 
The use of canonical coordinates is certainly a sensible choice (for the reason given above), but it does not make an infeasible computation feasible.\n\nThe authors may want to consider citing\n- Warped Convolutions: Efficient Invariance to Spatial Transformations, Henriques & Vedaldi.\nThis paper also uses a log-polar transform, but lacks the focal point prediction / STN.\nLikewise, although the paper makes a good effort to rewiev the literature on equivariance / steerability, it missed several recent works in this area:\n- Steerable CNNs, Cohen & Welling\n- Dynamic Steerable Blocks in Deep Residual Networks, Jacobsen et al.\n- Learning Steerable Filters for Rotation Equivariant CNNs, Weiler et al.\nThe last paper reports 0.71% error on MNIST-rot, which is slightly better than the PTN-CNN-B++ reported on in this paper.\n\nThe experimental results presented in this paper are quite good, but both MNIST and ModelNet40 seem like simple / toyish datasets. For reasons outlined above, I am not convinced that this approach in its current form would work very well on more complicated problems. If the authors can show that it does (either in its current form or after improving it, e.g. with multiple saccades, or other improvements) I would recommend this paper for publication.\n\n\nMinor issues & typos\n- Section 3.1, psi_gh = psi_g psi_h. I suppose you use psi for L and L', but this is not very clear.\n- L_h f = f(h^{-1}), p. 4\n- \"coordiantes\", p. 5", "This paper proposes a method to learn networks invariant to translation and equivariant to rotation and scale of arbitrary precision. The idea is to jointly train\n- a network predicting a polar origin,\n- a module transforming the image into a log-polar representation according to the predicted origin,\n- a final classifier performing the desired classification task.\nA (not too large) translation of the input image therefore does not change the log-polar representation.\nRotation and scale from the polar origin result in translation of the log-polar representation. As convolutions are translation equivariant, the final classifier becomes rotation and scale equivariant in terms of the input image. Rotation and scale can have arbitrary precision, which is novel to the best of my knowledge.\n\n(+) In my opinion, this is a simple, attractive approach to rotation and scale equivariant CNNs.\n\n(-) The evaluation, however, is quite limited. The approach is evaluated on:\n 1) several variants of MNIST. The authors introduce a new variant (SIM2MNIST), which is created by applying random similitudes to the images from MNIST. This variant is of course very well suited to the proposed method, and a bit artificial.\n 2) 3d voxel occupancy grids with a small resolution. The objects can be rotated around the z-axis, and the method is used to be equivariant to this rotation.\n\n(-) Since the method starts by predicting the polar origin, wouldn't it be possible to also predict rotation and scale? Then the input image could be rectified to a canonical orientation and scale, without needing equivariance. My intuition is that this simpler approach would work better. It should at least be evaluated.\n\nDespite these weaknesses, I think this paper should be interesting for researchers looking into equivariant CNNs.\n", "The authors introduce the Polar Transformer, a special case of the Spatial Transformer (Jaderberg et al. 2015) that achieves rotation and scale equivariance by using a log-polar sampling grid. 
The paper is very well written, easy to follow and substantiates its claims convincingly on variants of MNIST. A weakness of the paper is that it does not attempt to solve a real-world problem. However, I think because it is a conceptually novel and potentially very influential idea, it is a valuable contribution as it stands.\n\nIssues:\n\n- The clutter in SIM2MNIST is so small that predicting the polar origin is essentially trivially solved by a low-pass filter. Although this criticism also applies to most previous work using ‘cluttered’ variants of MNIST, I still think it needs to be considered. What happens if predicting the polar origin is not trivial and prone to errors? These presumably lead to catastrophic failure of the post-transformer network, which is likely to be a problem in any real-world scenario.\n\n- I’m not sure if Section 5.5 strengthens the paper. Unlike the rest of the paper, it feels very ‘quick & dirty’ and not very principled. It doesn’t live up to the promise of rotation and scale equivariance in 3D. If I understand it correctly, it’s simply a polar transformer in (x,y) with z maintained as a linear axis and assumed to be parallel to the axis of rotation. This means that the promise of rotation and scale equivariance holds up only along (x,y). I guess it’s not possible to build full 3D rotation/scale equivariance with the authors’ approach (spherical coordinates probably don’t do the job), but at least the scale equivariance could presumably have been achieved by using log-spaced samples along z and predicting the origin in 3D. So instead of showing a quick ‘hack’, I would have preferred an honest discussion of the limitations and maybe a sketch of a path forward even if no implemented solution is provided.\n", "- Included Appendix C, with experiments on the Street View House Numbers dataset (SVHN),\n- fixed/clarified math issues raised by AnonReviewer3,\n- included citations suggested by AnonReviewer3,\n- included clarification in section 5.5, to address issues raised by AnonReviewer1, and\n- rearranged paragraphs and removed redundant sentences to maintain number of pages.\n\n", "Thank you for the review.\n\n> (-) The evaluation, however, is quite limited.\n\nPlease check Appendix C for the newly included results on the Street\nView House Numbers dataset (SVHN), which shows that our method is also\napplicable to real-world RGB images. We show superior performance\nthan the baselines when perturbations are present.\n\n> (-) Since the method starts by predicting the polar\n> origin, wouldn't it be possible to also predict rotation\n> and scale? Then the input image could be rectified to a\n> canonical orientation and scale, without needing\n> equivariance. My intuition is that this simpler approach\n> would work better. It should at least be evaluated.\n\nYour suggestion seems to be what is done in the Spatial Transformer\nNetworks (STN) (Jaderberg et al). Our experiments show that\nregressing scale and rotation angle is a hard problem, requiring more\ndata and larger networks; on the other hand, learning the coordinates\nof a single point as a heatmap centroid is easier. We show direct\ncomparison of our method and the STN on tables 1 and 2. 
The advantage\nof our method is significant, specially with small number of samples\n(rotated MNIST, SIM2MNIST) and large perturbations (SIM2MNIST).\n", "Thank you for the review and insightful comments.\n\n> One fundamental limitation of the proposed approach is\n> that although it is invariant to global translations, it\n> does not have the built-in equivariance to local\n> translations that a ConvNet has. Although we do not have\n> data on this, I would guess that for more complex datasets\n> like imagenet / ms coco, where a lot of variation can be\n> reasonably well modelled by diffeomorphisms, this will\n> result in degraded performance.\n\nWe trade-off local translation equivariance for global roto-dilation\nequivariance. It is likely that for imagenet/coco local translation\nequivariance is more important, but that may not be the case when\nglobal rotations and large scale variance is present. We show that\nthe trade-off favors roto-dilation equivariance for the rotated MNIST,\nModelNet40, and rotated SVHN (included in the appendix during the\nreview period); a more interesting real life example where\nroto-dilation equivariance could be beneficial is object recognition\nin satellite images.\n\n> The use of the heatmap centroid as the prediction for the\n> focal point is potentially problematic as well. It would\n> not work if the heatmap is multimodal, e.g. when there are\n> multiple instances in the same image or when there is a\n> lot of clutter.\n\nOur method assumes that there is a single correct instance per image,\nso the multiple instance case could indeed be a problem. Note that in\nthe newly included SVHN results there are often multiple instances\npresent (see figure 6); however, the correct label is the one of the\ncentral digit and our origin predictor learns to use that. If we\nassume that multiple instances may be present in different positions,\nwith no central prior, we need to treat the problem as multiple object\ndetection. Since the origin predictor is fully convolutional, we\nbelieve we could train our model on single objects and test it with\nmultiple objects, with some sort of test-time non-maximum-suppression\non the final heatmap. It is likely that a soft argmax (perhaps with a\nloss term enforcing concentration) should be used instead of computing\nthe centroid. We could also pre-train the origin predictor with\nobject center supervision, and then fine-tune it end-to-end (since we\nhave shown that the object center is not necessarily the best origin).\nNote that we have not tried this approach yet, though we did\nexperiment with the soft argmax and concentration loss in the\nsingle-instance setting, and found no performance difference.\n\n> There is a minor conceptual confusion on page 4, where it\n> is written that \"Group-convolution requires integrability\n> over a group and identification of the appropriate measure\n> dg. We ignore this detail as implementation requires\n> application of the sum instead of integral.\" When\n> approximating an integral by a sum, one should generally\n> use quadrature weights that depend on the measure, so the\n> measure cannot be ignored.\n\nThis sentence is incorrect and was removed: \"We ignore this detail as\nimplementation requires application of the sum instead of integral.\",\nthanks for bringing it to our attention. 
Equation (8) is determined\nto be true by applying the definition of the Haar measure for the\ndilated-rotation group and a change of coordinates to log-polar,\nshowing that the quadrature weights are indeed one. We have updated\nthe text accordingly.\n\n\n> It does not make sense to say that \"The above convolution\n> requires computation of the orbit which is feasible with\n> respect to the finite rotation group, but not for general\n> rotation-dilations\", and then proceed to do exactly that\n> (in canonical coordinates). Since the rotation-dilation\n> group is 2D, just like the 2D translation group used in\n> ConvNets, this is entirely feasible. The use of canonical\n> coordinates is certainly a sensible choice (for the reason\n> given above), but it does not make an infeasible\n> computation feasible.\n\nTrue, implying that the computation is 'infeasible' is wrong. We have\nupdated the text.\n\n> The authors may want to consider citing (..)\n\nThanks for pointing these out, we have updated our citations. Note\nthat Weiler et al. only appeared on 11/21, almost a month after the\nsubmission deadline, and that we did cite Henriques & Vedaldi,\nalthough mistakenly not in \"Related Work\" (already fixed).\n\n> For reasons outlined above, I am not convinced that this\n> approach in its current form would work very well on more\n> complicated problems. If the authors can show that it does\n> (either in its current form or after improving it,\n> e.g. with multiple saccades, or other improvements) I\n> would recommend this paper for publication.\n\nPlease check Appendix C for the newly included results on the Street\nView House Numbers dataset (SVHN), which shows that our method is also\napplicable to real-world RGB images. We show superior performance\nthan the baselines when perturbations are present.", "Thank you for the review.\n\n> A weakness of the paper is that it does not attempt to\n> solve a real-world problem.\n\nPlease check Appendix C for the newly included results on the Street\nView House Numbers dataset (SVHN), which shows that our method is also\napplicable to real-world RGB images. We show superior performance\nthan the baselines when perturbations are present.\n\n> The clutter in SIM2MNIST is so small that predicting the\n> polar origin is essentially trivially solved by a low-pass\n> filter. Although this criticism also applies to most\n> previous work using ‘cluttered’ variants of MNIST, I still\n> think it needs to be considered. What happens if\n> predicting the polar origin is not trivial and prone to\n> errors? These presumably lead to catastrophic failure of\n> the post-transformer network, which is likely to be a\n> problem in any real-world scenario.\n\nWhile we agree that the amount of clutter could be overcome by\nhand-designed methods, we argue that learning the origin in an\nend-to-end fashion is advantageous since, in this case, the origin is\nlearned precisely for classification of the log-polar representation.\nThis is quantified in Table 1, which compares our method (PTN), with\nthe Polar CNN (PCNN), which fixes the origin at the image center. The\nresults show that even though the digits on the rotated MNIST are\ncentered, the learned origin results in significant improvements.\n\nIn more challenging scenarios, it is likely that a deeper origin\npredictor network or more sophisticated object detection model would\nbe necessary. For example, we could pre-train the origin predictor\nwith object center supervision, and then fine-tune it end-to-end. 
For\nthe newly included SVHN experiments we used a deeper residual origin\npredictor, but no pre-training was necessary.\n\n> I’m not sure if Section 5.5 strengthens the paper. Unlike\n> the rest of the paper, it feels very ‘quick & dirty’ and\n> not very principled. It doesn’t live up to the promise of\n> rotation and scale equivariance in 3D. If I understand it\n> correctly, it’s simply a polar transformer in (x,y) with z\n> maintained as a linear axis and assumed to be parallel to\n> the axis of rotation. This means that the promise of\n> rotation and scale equivariance holds up only along\n> (x,y). I guess it’s not possible to build full 3D\n> rotation/scale equivariance with the authors’ approach\n> (spherical coordinates probably don’t do the job), but at\n> least the scale equivariance could presumably have been\n> achieved by using log-spaced samples along z and\n> predicting the origin in 3D. So instead of showing a quick\n> ‘hack’, I would have preferred an honest discussion of the\n> limitations and maybe a sketch of a path forward even if\n> no implemented solution is provided.\n\nYour understanding is correct. The purpose of section 5.5 is to show\nthat our method is applicable to a more challenging problem, from a\ncompletely different domain. Even though our implementation may be\nconsidered a hack (applying channel-wise polar transforms), the\nconcept of using cylindrical coordinates for azimuthal rotation\nequivariance is solid, and so is learning axis of the transform. This\nis the direct extension of the PTN to 3D.\n\nAs you mentioned, it is not possible to achieve full SO(3)\nequivariance using cylindrical or spherical coordinates. Hence, we\naim for equivariance to rotations around axes parallel to z. We\nconsider this a reasonable assumption, which is equivalent to assuming\nsensors parallel to the ground, or known gravity direction in robotics\napplications, for example. Moreover, the vast majority of results on\nModelNet40 are with azimuthal rotation only.\n\nIt is indeed the case that scale equivariance could be achieved by\nlog-spaced samples along z. It could also be achieved by using\nspherical coordinates. We experimented with those but neither improve\nperformance on ModelNet40, since the scale variability is negligible\non it. We included these considerations in the text of section 5.5." ]
[ -1, 7, 7, 8, -1, -1, -1, -1 ]
[ -1, 4, 4, 3, -1, -1, -1, -1 ]
[ "BJzqfu4fM", "iclr_2018_HktRlUlAZ", "iclr_2018_HktRlUlAZ", "iclr_2018_HktRlUlAZ", "iclr_2018_HktRlUlAZ", "r1XT6wdeG", "rkMG9c_gf", "B1aLPb5eM" ]
iclr_2018_H1VGkIxRZ
Enhancing The Reliability of Out-of-distribution Image Detection in Neural Networks
We consider the problem of detecting out-of-distribution images in neural networks. We propose ODIN, a simple and effective method that does not require any change to a pre-trained neural network. Our method is based on the observation that using temperature scaling and adding small perturbations to the input can separate the softmax score distributions of in- and out-of-distribution images, allowing for more effective detection. We show in a series of experiments that ODIN is compatible with diverse network architectures and datasets. It consistently outperforms the baseline approach by a large margin, establishing a new state-of-the-art performance on this task. For example, ODIN reduces the false positive rate from the baseline 34.7% to 4.3% on the DenseNet (applied to CIFAR-10 and Tiny-ImageNet) when the true positive rate is 95%.
accepted-poster-papers
The reviewers agree that the method is simple, the results are quite good, and the paper is well written. The issues the reviewers brought up have been adequately addressed. There is a slight concern about novelty; however, the approach will likely be quite useful in practice.
train
[ "r1KVjuSlf", "By__JgYef", "By0tIonxf", "SJDGWFiQG", "ryYGAGoXf", "r1OWfeiXf", "ryfs-roff", "ryPuWHofM", "H1E4-rszf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "public", "author", "public", "author", "author", "author" ]
[ "\n-----UPDATE------\n\nThe authors addressed my concerns satisfactorily. Given this and the other reviews I have bumped up my score from a 5 to a 6.\n\n----------------------\n\n\nThis paper introduces two modifications that allow neural networks to be better at distinguishing between in- and out- of distribution examples: (i) adding a high temperature to the softmax, and (ii) adding adversarial perturbations to the inputs. This is a novel use of existing methods.\n\nSome roughly chronological comments follow:\n\nIn the abstract you don't mention that the result given is when CIFAR-10 is mixed with TinyImageNet.\n\nThe paper is quite well written aside from some grammatical issues. In particular, articles are frequently missing from nouns. Some sentences need rewriting (e.g. in 4.1 \"which is as well used by Hendrycks...\", in 5.2 \"performance becomes unchanged\").\n\n It is perhaps slightly unnecessary to give a name to your approach (ODIN) but in a world where there are hundreds of different kinds of GANs you could be forgiven.\n\nI'm not convinced that the performance of the network for in-distribution images is unchanged, as this would require you to be able to isolate 100% of the in-distribution images. I'm curious as to what would happen to the overall accuracy if you ignored the results for in-distribution images that appear to be out-of-distribution (e.g. by simply counting them as incorrect classifications). Would there be a correlation between difficult-to-classify images, and those that don't appear to be in distribution?\n\nWhen you describe the method it relies on a threshold delta which does not appear to be explicitly mentioned again.\n\nIn terms of experimentation it would be interesting to see the reciprocal of the results between two datasets. For instance, how would a network trained on TinyImageNet cope with out-of-distribution images from CIFAR 10?\n\nSection 4.5 felt out of place, as to me, the discussion section flowed more naturally from the experimental results. This may just be a matter of taste.\n\nI did like the observations in 5.1 about class deviation, although then, what would happen if the out-of-distribution dataset had a similar class distribution to the in-distribution one? (This is in part, addressed in the CIFAR80 20 experiments in the appendices).\n\nThis appears to be a borderline paper, as I am concerned that the method isn't sufficiently novel (although it is a novel use of existing methods).\n\nPros:\n- Baseline performance is exceeded by a large margin\n- Novel use of adversarial perturbation and temperature\n- Interesting analysis\n\nCons:\n- Doesn't introduce and novel methods of its own\n- Could do with additional experiments (as mentioned above)\n- Minor grammatical errors\n", "The paper proposes a new method for detecting out of distribution samples. The core idea is two fold: when passing a new image through the (already trained) classifier, first preprocess the image by adding a small perturbation to the image pushing it closer to the highest softmax output and second, add a temperature to the softmax. Then, a simple decision is made based on the output of the softmax of the perturbed image - if it is able some threshold then the image is considered in-distribution otherwise out-distribution.\n\nThis paper is well written, easy to understand and presents a simple and apparently effective method of detecting out of distribution samples. 
The authors evaluate on cifar-10/100 and several out of distribution datasets and this method outperforms the baseline by significant margins. They also examine the effects of the temperature and step size of the perturbation. \n \nMy only concern is that the parameter delta (threshold used to determine in/out distribution) is not discussed much. They seem to optimize over this parameter, but this requires access to the out of distribution set prior to the final evaluation. Could the authors comment on how sensitive the method is to this parameter? How much of the out of distribution dataset is used to determine this value, and what are the effects of this size during tuning? What happens if you set the threshold using one out of distribution dataset and then evaluate on a different one? This seems to be the central part missing to this paper and if the authors are able to address it satisfactorily I will increase my score. ", "Detecting out of distribution examples is important since it lets you know when neural network predictions might be garbage. The paper addresses this problem with a method inspired by adversarial training, and shows significant improvement over best known method, previously published in ICLR 2017.\n\nPrevious method used at the distribution of softmax scores as the measure. Highly peaked -> confidence, spread out -> out of distribution. The authors notice that in-distribution examples are also examples where it's easy to drive the confidence up with a small step. The small step is in the direction of gradient when top class activation is taken as the objective. This is also the gradient used to determine influence of predictors, and it's the gradient term used for adversarial training \"fast gradient sign\" method.\n\nTheir experiments show improvement across the board using DenseNet on collection of small size dataset (tiny imagenet, cifar, lsun). For instance at 95% threshold (detect 95% of out of distribution examples), their error rate goes down from 34.7% for the best known method, to 4.3% which is significant enough to prefer their method to the previous work.\n\n", "Thank you for your clarification, I had missed the discussion in 5.2 and 5.3.", "Thank you for your comments. The analysis of the effects of temperature scaling can be found in Section 5.2, where we provide some insight into why changing T and delta can improve the detection performance.\n\nHere we provide some additional explanation. Suppose we consider two images where the difference between the largest output and the average of the rest of the outputs (denoted by U_1 in Section 2) is only slightly larger for the in-distribution image than the out-of-distribution image. In that same section, we have denoted the variance in the output by U_2, and have used Taylor's series to suggest that the soft-max score of the largest output is then determined by U_1 divided by T and U_2 divided by T^2. We also argue and provide empirical evidence to show that U_2 is larger for in-distribution images than out of distribution images. Thus, the soft-max score of the largest output of the in-distribution image can be smaller than the soft-max score of the largest output of the out-of-distribution image. To remedy this situation, Taylor's series suggests that increasing T will reduce the impact of U_2 and U_1 will dominate. 
Now, as we change T, we have to correspondingly change delta (since a larger T pushes all the soft-max scores towards 1/N) to be able to detect an in-distribution and an out-of-distribution image.", "Detecting outliers by perturbing the input is intuitively clear (and very nice!). Near inliers, the model has relatively low curvature due to strong training signal. Thus, anti-adversarial perturbation of an inlier is likely to increase the probability of the dominant class. On the other hand, the model may change arbitrarily near outliers. Hence, anti-adversarial perturbations are likely to produce less dependable effects there.\n\nHowever, I do not understand the advantage of simultaneously fitting the softmax temperature (T) *and* the softmax output threshold \\delta. It appears as these two parameters should cancel out each other, and yet Fig.3 suggests that somehow T=1000 is better than T=1 when \\delta is set for TPR=95%. Clarifying the interaction between \\delta and T might improve the paper.\n", "We thank the reviewer for the useful feedback. We address each point raised in detail below.\n\nR1: In the abstract you don't mention that the result given is when CIFAR-10 is mixed with TinyImageNet.\n\nWe have revised the last sentence of the abstract: “For example,ODIN reduces the false positive rate from the baseline 34.7% to 4.3% on the DenseNet (applied to CIFAR-10 and Tiny-ImageNet) when the true positive rate is 95%.”\n\nR1: grammatical issues and some sentences need rewriting.\nWe have rewritten those sentences, and others, in the revised version. \n\nR1: I'm not convinced that the performance of the network for in-distribution images is unchanged. \n\nYes, the overall accuracy would have been changed if we ignored the results for in-distribution images that appear to be out-of-distribution.However, we meant to say that our method does not change the label predictions for in-distribution images, since one can always use the original image and pass it through the original neural network. We have replaced the word “performance” with “predictions” to avoid confusion.\n\nR1: Would there be a correlation between difficult-to-classify images, and those that don't appear to be in distribution?\n\nWe provide empirical results on the correlation between difficult-to-classify images and difficult-to-detect images (Figure 16). We can observe that the images that are difficult to detect tend to be the images that are difficult to classify (e.g., DenseNet can only achieve around 50% test accuracy on the images having softmax scores below the threshold corresponding to 99% TPR, while being able to achieve around 95.2% accuracy on the overall image set).\n\nR1: When you describe the method it relies on a threshold delta which does not appear to be explicitly mentioned again.\n\nWe have extensively studied the effect of $\\delta$ and have provided additional results in Appendix H. Also, as mentioned in the response to Reviewer 2, we no longer optimize over delta.\n\nR1: The reciprocal of the results between two datasets. \n\nWe provide the reciprocal of the results between CIFAR-10 and CIFAR-100 in Appendix I. DenseNet can achieve 47.2% FPR at TPR 95% when the CIFAR-10 dataset is the in-distribution dataset and CIFAR-100 dataset is the out-of-distribution dataset, while achieving 81.4% FPR at TPR 95% when the CIFAR-100 dataset is the in-distribution and CIFAR-10 dataset is the out-of-distribution dataset. 
\n\nR1: What would happen if the out-of-distribution dataset had a similar class distribution to the in-distribution one? \n\nWe provide additional results in Appendix I (Figure 17), where we show the outputs of DenseNet on thirty classes for an image of an apple from CIFAR-80 (in-distribution) and an image of a red pepper from CIFAR-20 (out-of-distribution). We can observe that when the out-of-distribution images share a few common features with the in-distribution images (e.g., the image of the apple is quite similar to the image of the red pepper), the output distribution of the neural networks for the out-of-distribution images is sometimes similar to the output distribution for the in-distribution images. \n\nR1: The method isn't sufficiently novel (although it is a novel use of existing methods).\n\nOur proposed method is inspired by the existing methods used in other tasks (temperature scaling used for distilling the knowledge in neural networks, Hinton et al., 2015, and adding small perturbations used for generating adversarial examples, Goodfellow et al. 2015). What is novel is the way in which we use perturbation: we do exactly the opposite of what Goodfellow et al. 2015 do; instead of adding, we actually subtract the perturbation suggested by them. The fact that this, along with the temperature scaling, improves the out-of-distribution detection performance is surprising and novel. Further, our work also has merits in providing extensive experimental analysis and theoretical insights, and justifying the novel use case of these techniques for out-of-distribution image detection.\n", "We thank Reviewer 2 for the constructive and encouraging feedback!\n\nTo address your concern about delta, we are no longer optimizing with respect to this parameter. We tune the temperature (T) and perturbation magnitude ($\\epsilon$) on an out-of-distribution image set (for a given in-distribution image set) and set $\\delta$ to the threshold corresponding to the 95% TPR. Our experiments in Appendix H appear to indicate that the choice of the out-of-distribution image set used to tune the parameters does not matter very much. Our method shows superior performance compared to the state-of-the-art whether we use Tiny-ImageNet (cropped) or Tiny-ImageNet (resized) or LSUN (resized) or iSUN (resized) or Gaussian noise or Uniform noise as the out-of-distribution dataset during the parameter tuning process. We also note that, while we may use one of these datasets during the tuning process, the testing is performed against other out-of-distribution datasets as well.\n\nFollowing the suggestions, we extensively studied the effect of $\\delta$ and thereafter summarize our findings below.\n(1) How sensitive is the method to the threshold?\nIn Figure 13, we show how the thresholds affect FPR and TPR, where we can observe that the threshold corresponding to 95% TPR can produce small FPRs on all out-of-distribution datasets.\n\n(2) How much of the out of distribution dataset is used to determine this value, and what are the effects of this size during tuning?\nIn the main results reported in Table 2, we held out 1,000 images to tune the parameters and evaluated on the remaining 9,000 images. To further understand the effect of the tuning set size, we show in Figure 15 the detection performance as we vary the tuning set size, ranging from 200 to 2000. We evaluate the detection performance on the remaining 8,000 images. In general, we found that the performance tends to stabilize as the tuning set size varies.
\n\n(3) How does delta generalize across datasets? \nIn addition to the observation in Figure 13 (a) and (b) that the effect of $delta$ is quite similar across datasets, we further conducted experiments as suggested by the reviewer. Specifically, we set the threshold using one out of distribution dataset and then evaluate on a different one. All the results can be found in Appendix H (Figure 14). We observe that the parameters tuned on different out-of-distribution natural image sets have quite similar detection performance.\n", "Thank you for the encouraging feedback on the paper." ]
[ 6, 6, 9, -1, -1, -1, -1, -1, -1 ]
[ 4, 3, 3, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_H1VGkIxRZ", "iclr_2018_H1VGkIxRZ", "iclr_2018_H1VGkIxRZ", "ryYGAGoXf", "r1OWfeiXf", "iclr_2018_H1VGkIxRZ", "r1KVjuSlf", "By__JgYef", "By0tIonxf" ]
iclr_2018_Skj8Kag0Z
Stabilizing Adversarial Nets with Prediction Methods
Adversarial neural networks solve many important problems in data science, but are notoriously difficult to train. These difficulties come from the fact that optimal weights for adversarial nets correspond to saddle points, and not minimizers, of the loss function. The alternating stochastic gradient methods typically used for such problems do not reliably converge to saddle points, and when convergence does happen it is often highly sensitive to learning rates. We propose a simple modification of stochastic gradient descent that stabilizes adversarial networks. We show, both in theory and practice, that the proposed method reliably converges to saddle points. This makes adversarial networks less likely to "collapse," and enables faster training with larger learning rates.
accepted-poster-papers
This paper provides a simple technique for stabilizing GAN training that works across a variety of GAN models. One of the reviewers expressed concerns with the value of the theory. I think that it would be worth emphasizing that similar arguments could be made for alternating gradient descent and simultaneous gradient descent. In this case, if possible, it would be good to highlight how the convergence of the prediction method approach differs from the alternating descent approach. Otherwise, highlight that this theory simply shows that the prediction method is not a completely crazy idea (in that it doesn't break existing theory). Practically, I think the experiments are sufficiently interesting to show that this approach has promise. I don't see the updated results for Stacked GAN for a fixed set of epochs (20 and 40 at different learning rates). Perhaps put this below Table 1.
train
[ "HyUTUNfHz", "SyaZlQnEG", "HJR6IerZf", "rJTFwew4M", "rk7Fq_IEM", "SkpeoDLEG", "rJvkqPIVz", "ry-q9ZOlf", "HJgHrDugM", "SyyOtPi7G", "rkou2HjXf", "BJMVz7qQG", "Bk1l-xfmf", "HkUSllGXM", "HyXpJeMQz", "HkOZkgzQf", "HJUBAbUbf", "SyMkklU-G" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "public", "official_reviewer", "official_reviewer", "author", "author", "public", "author", "author", "author", "author", "public", "public" ]
[ "Dear Reviewer,\n\nWe have tried addressing all your concerns in our latest response, please let us know if you still have any remaining concerns ? \n", "> Rather, I'm saying that other algorithms have similar theoretical guarantees\nCould the reviewer be more specific on which algorithms have similar theoretical guarantees with ours? We believe we have clearly distinguish our analysis from the references mentioned before. We would like to emphasize theoretical results (especially upper bound) only provides worst-case guarantee. The empirical performance may vary for different algorithms under same guarantees. Again, we agree GANs may not satisfy our assumptions that have been widely used in GAN optimization, but the analysis is not unnecessary.\n\n> the proof of Theorem 1 seems like it would apply to simultaneous gradient descent\nThe main purpose of theorem 1 is to show the prediction method converges in a convex-concave setting. It is correct that similar analysis can be applied to general alternating gradients and simultaneous gradients without the prediction step. However, we are unaware of previous convergence analysis for alternative gradients with prediction step except for the (non-stochastic) bilinear problems discussed in related work. \n\n> Why should I expect it to use more significantly more RAM?\nIn most current implementation frameworks, we need to store weights of generator, weights of discriminator, gradients of generator, and gradients of discriminator, and all the four variables need to be stored in RAM (GPU memory) for simultaneous gradient descent. However, only one of the gradients (either generator or discriminator) needs to be stored in GPU anytime for alternate updates. It can make a big difference for training large networks on GPU with limited memory.\n\n>> prediction makes really difficult problems really easy\nThe reviewer is simply nitpicking the part of our response. In our earlier response we have clearly quoted, \n\n“The purpose of the experiments is not to show that we can train things that are *impossible* to train via other methods (indeed, almost anything is possible if you tune the hyperparameters and network architecture enough), but rather that prediction makes really difficult problems really easy.” \n\nThis do suggest that the models considered in our work can be solved by various methods. However, each of these models requires different tricks to actually make them work (For more details please refer section 3.2). In our work, three different GAN models were considered. For each of these models, single method i.e., prediction has been shown to work equally well for the default setting and remains stable for wide range of hyper-parameters. Moreover, it is not clear whether the tricks mentioned in Improved WGAN paper works well when applied to other GAN models or loss functions.\n\n> Your measure of 'really difficult' is behind the GAN literature in an empirical sense.\nCould the reviewer be more specific ? Pointer to any references ?\n\nRegarding Imagenet:\nThe reported variance was the outcome of the inception score code. 
In our latest revision, the updated figure is now an average over five different instances.\n\nWe also thank reviewer for suggestions on improving our paper presentation.\n\n\n", "NOTE:\nI'm very willing to change my recommendation if I turn out to be wrong \nabout the issues I'm addressing and if certain parts of the experiments are fixed.\n\nHaving said that, I do (think I) have some serious issues: \nboth with the experimental evaluation and with the theoretical results.\nI'm pretty sure about the experimental evaluation and less sure about the theoretical results.\n\n\nTHEORETICAL CLAIMS:\n\nThese are the complaints I'm not as sure about:\n\nTheorem 1 assumes that L is convex/concave.\nThis is not generally true for GANs.\nThat's fine and it doesn't necessarily make the statement useless, but:\n\nIf we are willing to assume that L is convex/concave, \nthen there already exist other algorithms that will provably converge\nto a saddle point (I think). [1] contains an explanation of this.\nGiven that there are other algorithms with the same theoretical guarantees,\nand that those algorithms don't magically make GANs work better, \nI am much less convinced about the value of your theorem.\n\nIn [0] they show that GANs trained with simultaneous gradient descent are locally asymptotically stable, \neven when L is not convex/concave. \nThis seems like it makes your result a lot less interesting, though perhaps I'm wrong to think this?\n\nFinally, I'm not totally sure you can show that simultaneous gradient descent won't converge \nas well under the assumptions you made.\nIf you actually can't show that, then the therom *is* useless, \nbut it's also the thing I've said that I'm the least sure about.\n\n\nEXPERIMENTAL EVALUATION:\n\nRegarding the claims of being able to train with a higher learning rate:\nI would consider this a useful contribution if it were shown that (by some measure of GAN 'goodness')\na high goodness was achieved faster because a higher learning rate was used.\nYour experiments don't support this claim presently, because you evaluate all the models at the same step.\nIn fact, it seems like both evaluated Stacked GAN models get worse performance with the higher learning rate.\nThis calls into question the usefulness of training with a higher learning rate.\nThe performance is not a huge amount worse though (based on my understanding of Inception Scores),\nso if it turns out that you could get that performance\nin 1/10th the time then that wouldn't be so bad.\n\nRegarding the experiment with Stacked GANs, the scores you report are lower than what they report [2].\nTheir reported mean score for joint training is 8.59.\nAre the baseline scores you report from an independent reproduction?\nAlso, the model they have trained uses label information. \nDoes your model use label information?\nGiven that your reported improvements are small, it would be nice to know what the proposed mechanism is by \nwhich the score is improved. 
\nWith a score of 7.9 and a standard deviation of 0.08, presumably none of the baseline model runs\nhad 'stability issues', so it doesn't seem like 'more stable training' can be the answer.\n\nFinally, papers making claims about fixing GAN stability should support those claims by solving problems\nwith GANs that people previously had a hard time solving (due to instability).\nI don't believe this is true of CIFAR10 (especially if you're using the class information).\nSee [3] for an example of a paper that does this by generating 128x128 Imagenet samples with a single generator.\n\nI didn't pay as much attention to the non-GAN experiments because\na) I don't have as much context for evaluating them, because they are a bit non-standard.\nb) I had a lot of issues with the GAN experiments already and I don't think the paper should be accepted unless those are addressed.\n\n\n[0] https://arxiv.org/abs/1706.04156 (Gradient Descent GAN Optimization is Locally Stable)\n\n[1] https://arxiv.org/pdf/1705.07215.pdf (On Convergence and Stability of GANs)\n\n[2] https://arxiv.org/abs/1612.04357 (Stacked GAN)\n\n[3] https://openreview.net/forum?id=B1QRgziT (Spectral Regularization for GANs)\n\nEDIT: \nAs discussed below, I have slightly raised my score. \nI would raise it more if more of my suggestions were implemented (although I'm aware that the authors don't have much (any?) time for this - and that I am partially to blame for that, since I didn't respond that quickly).\nI have also slightly raised my confidence.\nThis is because now I've had more time to think about the paper, and because the authors didn't really address a lot of my criticisms (which to me seems like evidence that some of my criticisms were correct).", "REGARDING THEORY RESPONSE:\nI'm not specifically criticizing the lack of realism in the assumptions - I agree that such a criticism would be unreasonable.\nRather, I'm saying that other algorithms have similar theoretical guarantees (proved using similar assumptions) to your algorithm, yet those guarantees don't seem to correspond in general to serious improvements in empirical performance.\nThus, I estimate that the value of your particular guarantee is low.\nIf instead it were true that all GAN training procedures proven to have property P seem to do really well in practice, and you proved that your (admittedly simple and easy to implement) algorithm had property P, I would feel differently.\n\nYou also didn't address my complaint that the proof of Theorem 1 seems like it would apply to simultaneous gradient descent (another commenter has now also made this claim).\nThis seems like an easy complaint to address - either I'm right about this or I'm wrong.\nIf I'm right, I don't see what value Theorem 1 adds (except as a sanity check), if I'm wrong, I'm happy to increase the score.\n\nFinally, I'm confused by your statements about simultaneous gradient descent.\nWhy should I expect it to use more significantly more RAM?\nPerhaps we are talking about different things when we say Simultaneous Gradient Descent (maybe there is a usage in the saddle-point optimization literature I'm not familiar with)?\n\nREGARDING EXPERIMENTAL RESPONSE:\n\n> prediction makes really difficult problems really easy\nRight - I claim that you haven't showed this.\nAll the problems you solved (admittedly with the exception of the 100 Mixture Components - but I can think of other methods that could solve this problem even more easily :)) have already been 'solved' in my estimation.\nYour measure of 'really 
difficult' is behind the GAN literature in an empirical sense.\nI would also claim that the experiments where you've modified the hyperparameters in a variety of ways and shown that things are generally better behaved have already been done, e.g. in the Improved WGAN paper.\nThat doesn't necessarily mean that doing them again is useless, but certainly it decreases their value.\n\n> below we also report the performance score measured at the fewer number of epochs for higher learning rates.\nI think this is much better empirical support for your method than anything else in the paper.\nI will raise my score slightly for this reason, and I would raise it even more for a version that de-emphasized the claims to have SOTA inception score and more heavily explored this benefit (I'm aware you might not have time for that).\n\nI don't really know what to make of the Imagenet experiment.\nFor one thing, you have error bars, but surely you've conducted only one instance of the experiment (presumably all of your baseline runs didn't dip and then recover exactly at epoch 5)?\n\nMISC:\n\nHere are ways that I think this paper could be improved (which you are of course free to disregard):\n\n1. Move Thm 1 to an appendix - I don't think it really does anything.\n2. Get rid of the bit about the oscillator - I don't think it's wrong per se, but the relevance is questionable.\nThe pathologies you're claiming to get rid of correspond more to divergence than to well-behaved cycling about a fixed point.\n3. Emphasize the speed-up you can get from using higher learning rates. This is a good result!\n4. Do proper imagenet experiments. I know they're a pain, but the state of the art has moved there at this point.\n5. Give a more complete story about why your method should prevent certain pathologies, and maybe study more deeply the nature of those pathologies? (I don't have a great suggestion about how to do this because I don't know what the story is!)", "Dear ACs and Reviewers, \n\nDo you have any questions? \nAre there any remaining concerns?\n\nBest regards, \nThe Authors", "Thanks for trying it out, please let me know if you need any help implementing it ?", "I would like to just add that I am working on a lighting related project which uses Adversarial network architecture. Because my architecture is bit non-standard GAN architecture, I was finding very difficult to train my network. But after implementing this simple idea (it look hardly 2 hours for me to code), at-least I am able to train my network and get some reasonable results (although far from what I wanted).", "This paper proposes a simple modification to the standard alternating stochastic gradient method for GAN training, which stabilizes training, by adding a prediction step. \n\nThis is a clever and useful idea, and the paper is very well written. The proposed method is very clearly motivated, both intuitively and mathematically, and the authors also provide theoretical guarantees on its convergence behavior. I particularly liked the analogy with the damped harmonic oscillator. \n\nThe experiments are well designed and provide clear evidence in favor of the usefulness of the proposed technique. 
I believe that the method proposed in this paper will have a significant impact in the area of GAN training.\n\nI have only one minor question: in the prediction step, why not use a step size, say \n$\\bar{u}_{k+1} = u_{k+1} + \\gamma_k (u_{k+1} - u_k)$, such that the \"amount of prediction\" may be adjusted?\n", "This work proposes a framework for stabilizing adversarial nets using a prediction step. The prediction step is motivated by primal-dual algorithms in convex optimization, where the term involving both variables is bilinear. \n\nThe authors prove a convergence result when the function is convex in one variable and concave in the other. This problem is more general than the previous one in convex optimization. Then this prediction step is applied in many recent applications in training adversarial nets and compared with state-of-the-art solvers. The better performance of this simple step is shown in most of the numerical experiments. \n\nThough this work applies one step from convex optimization to solve a more complicated problem and obtains improved performance, there is more work to be done. Is there a better generalization of this prediction step? There are also other variants of primal-dual algorithms in convex optimization; can other modifications, including the accelerated variants, be applied?", "As per the suggestion, we experimented with the Imagenet dataset using the AC-GAN [1] model with and without prediction. Please find the result in the supplementary material. Note that unlike the model used in the spectral regularization [2] article, the AC-GAN model does not use conditional BN, resnet blocks, hinge loss, etc. Thus, compared to [2], the reported inception score is low. We stick to AC-GAN as it is the only publicly available model which works best on the Imagenet dataset.\n\n[1] https://arxiv.org/abs/1610.09585 (AC-GAN)\n[2] https://openreview.net/forum?id=B1QRgziT (Spectral Regularization for GANs)", "As per the suggestion from Reviewer4, we experimented with the Imagenet dataset using the AC-GAN [1] model with and without prediction. Please find the result in the supplementary material. Note that unlike the model used in the spectral regularization [2] article, the AC-GAN model does not use conditional BN, resnet blocks, hinge loss, etc. Thus, compared to [2], the reported inception score is low. We stick to AC-GAN as it is the only publicly available model which works best on the Imagenet dataset.\n\n[1] https://arxiv.org/abs/1610.09585 (AC-GAN)\n[2] https://openreview.net/forum?id=B1QRgziT (Spectral Regularization for GANs)", "Hi, just wanted to point out a very relevant paper from a different community.\n\nhttps://link.springer.com/article/10.1134%2FS0965542511120050\n\nI guess it deserves to be mentioned in your paper. ", "The purpose of the experiments is not to show that we can train things that are *impossible* to train via other methods (indeed, almost anything is possible if you tune the hyperparameters and network architecture enough), but rather that prediction makes really difficult problems really easy.
\n \n Below, we address reviewer’s comments that seem to pertain to specific dataset and architecture:\n \nRegarding DCGAN experiments (Without using label information):\nFigure 4 uses the finely tuned learning rate and momentum parameters that come with the pre-packaged DCGAN code distribution. This figure shows that DCGAN collapses frequently; even with these fine tuned parameters it still requires a carefully chosen stopping time/epoch to avoid collapse. With prediction it does not collapse at all. The purpose of increasing the learning rate is not to show that “better” results could be had, but rather to show that prediction methods don’t require finely tuned parameters. If you have a look at the additional experiments in the appendix (page 18), we train DCGAN with a litany of different learning rate and momentum parameters. The prediction method succeeds without any collapse events in all of these cases, while non-prediction is unstable as soon as we move away from the carefully tuned parameter choices.\n\nRegarding Stacked GAN (With using label information): \n We reproduced this experiment using the Stacked Gan author’s publicly available code, but were not able to get the same inception scores for Stacked GAN as the original authors. Note the release code did not come with code for computing inception scores, and we used a well-known Tensor Flow implementation that may differ from what the original author’s used. \n We ran all the scenarios for a fixed number of epochs (200 epochs, which is default in the Stacked GAN’s released code) to ensure a fair comparison. Indeed, prediction method was able to achieve the best inception score of 8.83 at lesser epoch than 200. Having said that, as per the suggestion, below we also report the performance score measured at the fewer number of epochs for higher learning rates. The quantitative comparison based on the inception score for learning rates of 0.0005 (200/5 = 40 epochs) and 0.001 (200/10 = 20 epochs) are as follows-\n\nLearning Rate\t\t\t 0.0005 (epochs=40)\t\t 0.001 (epochs=20)\nStacked GAN (joint) 5.80 +/- 0.15 1.42 +/- 0.01\nStacked GAN (joint) + Prediction 8.10 +/- 0.10 7.79 +/- 0.07\n\nRegarding the absence of problems that are “hard” without prediction: In Figure 8 of the appendix, we solve a toy problem that is famously hard: trying to recover all of the modes in a Gaussian mixture model. The prediction method does this easily, while the method without prediction fails to capture all the modes. We also “turn the dial up” on this problem by using 100 Gaussian components in Figure 9, and the non-prediction method produces highly irregular results unless a batch size of over 6000 (which is very much larger than the number of components) is used. In contrast, the prediction method represents the distribution well for a wide range of batch sizes and learning rates.", "We agree with the reviewer that theory in this area (and in deep learning in general) often requires assumptions that don’t hold for neural networks. Nonetheless, we think it is worth taking time to explore conditions under which algorithms are guaranteed to work, because this provides a theoretical proof-of-concept, and thinking through theoretical properties of a new algorithm makes it more than just another hack. The purpose of our result is to do just that for our proposed algorithm. 
We don’t disagree that analysis exists for other algorithms, but we don’t think the existence of other algorithms gets us “off the hook” from thinking about the theoretical implications of our approach. \n\nThat being said, we think the reviewer is overestimating the state of the art in theory for GANs. There is currently no theoretical result that does not make strong assumptions, and many results (including those referenced by the reviewer) are quite different from (and in many ways weaker than) our own. The result in [1] shares certain assumptions with our own (convex-concave assumptions, bounded problem domain, and an ergodic measure of convergence). However, the result in [1] does not prove convergence in the usual sense, but rather that the error will decay to within an o(1) constant. In contrast, our result shows that the error decays to zero. The result in [1] also requires simultaneous gradient descent, which is not commonly used in practice (because it requires more RAM to store [extremely large] iterates and it uses a stale iterate when updating the generator and discriminator one-at-a-time). In contrast, our result concerns the commonly used alternating direction approach.\n The result in [0] shows stability using a range of assumptions that are different from (but not necessarily stronger or weaker than) our own. They require the discriminator to be a linear classifier, and make a strict concavity assumption on the loss function. They also require an assumption (called Property I) that is analogous to the “strict saddle” assumption in the saddle-point literature (see, e.g. Lee 2016, “Gradient Descent Converges to Minimizers”), which is known not to hold for general neural nets. Also, note that the result in [0] is only a local stability result (it only holds once the iterates get close to a saddle satisfying the assumptions), whereas our result is a global convergence result that holds for any initialization.\n\tFinally, we emphasize that both [0] and [1] are great works that make numerous important contributions to this field and address a host of issues beyond just convergence proofs. Our purpose here is not to make any claims that our result is “better” than theirs, but rather to state what differentiates our result from the literature, and why we felt it was worth putting it in the paper. ", "We thank the reviewer for the thoughtful comments and suggestions for future work. We think the idea of pursuing accelerated methods is particularly interesting. We have actually already done some experiments with Nesterov-type acceleration (as described for saddle-point problems by Chambolle and Pock), however it seems that the benefits of acceleration vanish when we move from deterministic to stochastic updates. We’ve made similar observations for standard convex (non-saddle) problems. That being said, we’re still interested in this direction, and are keeping our eyes peeled for possible ways forward.", "Thanks for the thoughtful comments! To answer your question: it is indeed possible to generalize this method by adding an extra stepsize parameter for the prediction step, and this is something that we have experimented with extensively. It can be shown that your proposed “gamma” parameter method is stable (under convexity assumptions) whenever gamma is between 0 and 2. However, we have not been able to find any worthwhile advantages to choosing any gamma different from 1. 
Choosing a smaller gamma weakens the stability benefits of prediction, and choosing a larger gamma seems to slow down convergence a bit. The latter effect can be compensated for by choosing a larger learning rate, but even in this case the method doesn’t run noticeably faster than with gamma=1. For this reason, including this “gamma” seemed like unnecessarily added complexity, so we removed it and went with a cleaner presentation.", "I just pass by and see this question. This paper is actually very close to what I have been doing (not published yet), so I would like to share some of my understandings. \n \nThe convergence rate for simultaneous gradient descent is generally slow and with the prediction step the order of the convergence rate could be increased. Details can be found in the paper in the reference list: Chen et al, \"Optimal primal-dual methods for a class of saddle point problems\". \n\nActually I think this is a very good paper (probably because I am doing very similar work). Although one of the reviewers gave a very low score, these questions are very reasonable and are what I have in mind too. I believe the authors should be aware of them too. I hope the authors can justify the contributions of the paper. \n\nFor the convex-concave assumption, this paper is not the only one that assumes that. The f-GAN paper also derived a theorem based on this assumption. The recent paper \"Training GANs with Optimism\", which is submitted to this ICLR, also derives the main algorithm based on this assumption. They are still very excellent papers. ", "It seems to me that the proof of Theorem 1 would go through even without the prediction step, i.e. it would be equally valid for, say, simultaneous gradient descent (instead of Lemma 2, one only needs to use a similar argument as in Lemma 1). In that sense, the theorem provides no support for the proposed method." ]
[ -1, -1, 4, -1, -1, -1, -1, 9, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, 4, -1, -1, -1, -1, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "HJR6IerZf", "rJTFwew4M", "iclr_2018_Skj8Kag0Z", "HkUSllGXM", "iclr_2018_Skj8Kag0Z", "rJvkqPIVz", "iclr_2018_Skj8Kag0Z", "iclr_2018_Skj8Kag0Z", "iclr_2018_Skj8Kag0Z", "Bk1l-xfmf", "iclr_2018_Skj8Kag0Z", "iclr_2018_Skj8Kag0Z", "HJR6IerZf", "HJR6IerZf", "HJgHrDugM", "ry-q9ZOlf", "SyMkklU-G", "iclr_2018_Skj8Kag0Z" ]
iclr_2018_rJXMpikCZ
Graph Attention Networks
We present graph attention networks (GATs), novel neural network architectures that operate on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations. By stacking layers in which nodes are able to attend over their neighborhoods' features, we enable (implicitly) specifying different weights to different nodes in a neighborhood, without requiring any kind of computationally intensive matrix operation (such as inversion) or depending on knowing the graph structure upfront. In this way, we address several key challenges of spectral-based graph neural networks simultaneously, and make our model readily applicable to inductive as well as transductive problems. Our GAT models have achieved or matched state-of-the-art results across four established transductive and inductive graph benchmarks: the Cora, Citeseer and Pubmed citation network datasets, as well as a protein-protein interaction dataset (wherein test graphs remain unseen during training).
accepted-poster-papers
The authors appear to have largely addressed the concerns of the reviewers and commenters regarding related work and experiments. The results are strong, and this will likely be a useful contribution to the graph neural network literature.
train
[ "S1vzCb-bz", "BJth1UKlf", "ryFW0bhlM", "r1nI2A8fG", "HkQRsR8Mz", "Sya9jCIzG", "BJE_oR8Gf", "SynboRIMz", "Hyresurlz", "ryMGLHXxz", "Sy2YY-MlM", "SkMm8gaJM", "HyL2UVDJG", "BJg4NXZyz", "HJMkC2xyz", "H1Cechgkz", "HJk72Fekz", "S12tIMACW", "r1KPYM0Cb", "ryGdFlCCb", "B16obCO0W" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public", "author", "public", "author", "public", "author", "author", "author", "public", "author", "public", "public", "public" ]
[ "This paper has proposed a new method for classifying nodes of a graph. Their method can be used in both semi-supervised scenarios where the label of some of the nodes of the same graph as the graph in training is missing (Transductive) and in the scenario that the test is on a completely new graph (Inductive).\nEach layer of the network consists of feature representations for all of the nodes in the Graph. A linear transformation is applied to all the features in one layer and the output of the layer is the weighted sum of the transformed neighbours (including the node). The attention logit between node i and its neighbour k is calculated by a one layer fully connected network on top of the concatenation of the transformed representation of node i and transformed representation of the neighbour k. They also can incorporate the multi-head attention mechanism and average/concatenate the output of each head.\n\nOriginality:\nAuthors improve upon GraphSAGE by replacing the aggregate and sampling function at each layer with an attention mechanism. However, the significance of the attention mechanism has not been studied in the experiments. For example by reporting the results when attention is turned off (1/|N_i| for every node) and only a 0-1 mask for neighbours is used. They have compared with GraphSAGE only on PPI dataset. I would change my rating if they show that the 33% gain is mainly due to the attention in compare to other hyper-parameters. [The experiments are now more informative. Thanks]\nAlso, in page 4 authors claim that GraphSAGE is limited because it samples a neighbourhood of each node and doesn't aggregate over all the neighbours in order to keep its computational footprint consistent. However, the current implementation of the proposed method is computationally equal to using all the vertices in GraphSAGE.\n\nPros:\n- Interesting combination of attention and local graph representation learning. \n- Well written paper. It conveys the idea clearly.\n- State-of-the-art results on three datasets.\n\nCons:\n- When comparing with spectral methods it would be better to mention that the depth of embedding propagation in this method is upper-bounded by the depth of the network. Therefore, limiting its adaptability to broader class of graph datasets. \n- Explaining how attention relates to previous body of work in embedding propagation and when it would be more powerful.", "The paper introduces a neural network architecture to operate on graph-structured\ndata named Graph Attention Networks.\nKey components are an attention layer and the possibility to learn how to\nweight different nodes in the neighborhood without requiring spectral decompositions\nwhich are costly to be computed.\n\nI found the paper clearly written and very well presented. I want to thank\nthe author for actively participating in the discussions and in clarifying already\nmany of the details that I was missing.\n\nAs also reported in the comments by T. 
Kipf I found the lack of comparison to previous\nworks on attention and on constructions of NN for graph data are missing.\nIn particular MoNet seems a more general framework, using features to compute node\nsimilarity is another way to specify the \"coordinate system\" for convolution.\nI would argue that in many cases the graph is given and that one would have\nto exploit its structure rather than the simple first order neighbors structure.\n\nI feel, in fact, that the paper deals mainly with \"localized metric-learning\" rather than\nusing the information in the graph itself. There is no\nexplicit usage of the graph beyond the selection of the local neighborhood.\nIn many ways when I first read it I though it would be a modified version of\nmemory networks (which have not been cited). Sec. 2.1 is basically describing\na way to learn a matrix W so that the attention layer produces the weights to be\nused for convolution, or the relative coordinate system, which is to me a\nmemory network like construction, where the memory is given by the neighborhood.\n\nI find the idea to use the multi-head attention very interesting, but one should\nconsider the increase in number of parameters in the experimental section.\n\nI agree that the proposed method is computationally efficient but the authors\nshould keep in mind that parallelizing across all edges involves lot of redundant\ncopies (e.g. in a distributed system) as the neighborhoods highly overlap, at\nleast for interesting graphs.\n\nThe advantage with respect to methods that try to use LSTM in this domain\nin a naive manner is clear, however the similarity function (attention) in this\nwork could be interpreted as the variable dictating the visit ordering.\n\nThe authors seem to emphasize the use of GPU as the best way to scale their work\nbut I tend to think that when nodes have varying degrees they would be highly\nunused. Main reason why they are widely used now is due to the structure in the\nrepresentation of convolutional operations.\nAlso in case of sparse data GPUs are not the best alternative.\n\nExperiments are very well described and performed, however as explained earlier\nsome comparisons are needed.\nAn interesting experiment could be to use the attention weights as adjacency\nmatrix for GCN.\n\nOverall I liked the paper and the presentation, I think it is a simple yet\neffective way of dealing with graph structure data. However, I think that in\nmany interesting cases the graph structure is relevant and cannot be used\njust to get the neighboring nodes (e.g. in social network analysis).", "This is a paper about learning vector representations for the nodes of a graph. These embeddings can be used in downstream tasks the most common of which is node classification.\n\nSeveral existing approaches have been proposed in recent years. The authors provide a fair and almost comprehensive discussion of state of the art approaches. There are a couple of exception that have already been mentioned in a comment from Thomas Kipf and Michael Bronstein. A more precise discussion of the differences between existing approaches (especially MoNets) should be a crucial addition to the paper. You provide such a comparison in your answer to Michael's comment. To me, the comparison makes sense but it also shows that the ideas presented here are less novel than they might initially seem. The proposed method introduces two forms of (simple) attention. Nothing groundbreaking here but still interesting enough and well explained. 
It might also be a good idea to compare your method to something like LLE (locally linear embedding). LLE also learns a weight for each of neighbors of a node and computes the embedding as a weighted average of the neighbor embeddings according to these weights. Your approach is different since it is learned end-to-end (not in two separate steps) and because it is applicable to arbitrary graphs (not just graphs where every node has exactly k neighbors as in LLE). Still, something to relate to. \n\nPlease take a look at the comment by Fabian Jansen. I think he is on to something. It seems that the attention weight (from i to j) in the end is only a normalization operation that doesn't take the embedding of node i into account. \n\nThere are two issues with the experiments.\n\nFirst, you don't report results on Pubmed because your method didn't scale. Considering that Pubmed has less than 20,000 nodes this shows a clear weakness of your approach. You write (in an answer to a comment) that it *should* be parallelizable but somehow you didn't make it work. We have to, however, evaluate the approach on what it is able to do at the moment. Having a complexity that is quadratic in the number of nodes is terrible and one of the major reasons learning with graphs has moved from kernels to neural approaches. While it is great that you acknowledge this openly as a weakness, it is currently not possible to claim that your method scales to even moderately sized graphs. \n\nSecond, the experimental set-up on the Cora and Citeseer data sets should be properly randomized. As Thomas pointed out, for graph data the variance can be quite high. For some split the method might perform really well and less well for others. In your answer titled \"Requested clarifications\" to a different comment you provide numbers randomized over 10 runs. Did you randomize the parameter initialization only or also the the train/val/test splits? If you did the latter, this seems reasonable. In Kipf et al.'s GCN paper this is what was done (not over 100 splits as some other commenter claimed. The average over 100 runs pertained to the ICA method only.) ", "We hope that the revisions we have made to the paper have properly addressed the comments of the reviewers as well as other researchers on our work - and that its overall contribution, quality and clarity is now significantly improved! We would like to thank everyone once again for their thoughtful comments on our paper.\n\nWe provide a summary of the changes made to the paper:\n\n* We have been able to implement a sparse version of the GAT layer, allowing us to execute the model on the Pubmed benchmark. We make this clear across the document, wherever we enumerated the datasets under study.\n\n* In Section 1, we have added appropriate references to relevant related work: MoNet, VAIN, neighbourhood attention, locally linear embedding (LLE) and memory networks.\n\n* In response to Fabian Jansen’s comment (and as reiterated by one of the reviewers), we have now inserted a LeakyReLU nonlinearity to our attention mechanism---representing a minimal change from the previous mechanism’s properties, while no longer having spurious weights. Section 2.1 details this change (within Equation 3 and text immediately preceding it, Figure 1, and its caption).\n\n* In Section 2.2, we no longer mention the storage limitation of our model, as we have been successful in addressing it (by implementing a sparse GAT layer). 
Instead, we mention the limitation of the current sparse matrix multiplication operation with respect to batching. \n\n* In Section 2.2, we incorporate many of the useful comments (from reviewers and other researchers) about the characteristics of our model: time complexity (especially, comparing it to our primary spectral baselines), the effects of multi-head attention on the parameter count, clarifying the computational/performance tradeoffs compared to GraphSAGE, detailing the relationship between GAT and MoNet, an informal assessment of the suitability of GPUs for such computations, and comments about the model’s effective “receptive field” size around each node and the computational redundancies of the model.\n\n* In Section 3.2., we state that we now compare our model with the results reported by the MoNet paper as well. \n\n* In Section 3.3., we corrected two typos (dropout p = 0.6, rather than 0.5; also, our early stopping strategy took into account both the loss and the accuracy), and noted the slight differences in architecture we used for the Pubmed dataset. We also make explicit the inductive learning experiment of a GAT model with attention turned off (as recommended by one of the reviewers).\n\n* In Section 3.4., we now report the new results of all models considered, after 100 runs for transductive tasks (for fairly comparing against the work of Kipf and Welling), and 10 runs for inductive tasks. We also provide the best 100-run transductive results we were able to obtain with a GCN model computing 64 features (with ReLU or ELU activation), and the best inductive result we were able to obtain with GraphSAGE (by only changing its architecture, and not the sampling strategy), as well as the 10-run result of the aforementioned inductive GAT model with attention turned off (as a comparison to a GCN-like model computing the same number of features). These results are now all enumerated in Tables 2 and 3, and are discussed appropriately in the main text body. The tables’ captions have been expanded to make the result presentation more clear as well.\n", "We would like to thank you for the comprehensive review! Please refer to our global comment above for a list of all revisions we have applied to the paper---we are hopeful that they have addressed your comments appropriately.\n\nPrimarily, thank you for suggesting the constant-attention experiment (with 1/|Ni| coefficients)! This not only directly evaluates the significance of the attention mechanism on the inductive task, but allows for a comparison with a GCN-like inductive structure. We have successfully shown a benefit of using attention:\n\nThe Const-GAT model achieved 0.934 +- 0.006 micro-F1;\nThe GAT model achieved 0.973 +- 0.002 micro-F1.\n\nWhich demonstrates a clear positive effect of using an attention mechanism (given that all other architectural and training properties are kept fixed across the two models). These results are clearly communicated in our revised paper now (Section 3.3 introduces the experiment in the “Inductive learning” paragraph, while the results are outlined in Table 3 and discussed in Section 3.4, paragraph 4).\n\nOur intention was not to imply that our method is computationally more efficient than GraphSAGE---only that GraphSAGE’s design decisions (sampling subsets of neighbourhoods) have potentially limiting effects on its predictive power. 
We have rewrote bullet point 4 in Section 2.2, to hopefully communicate this better.\n\nLastly, we make explicit that the depth of our propagation is upper-bounded by network depth in Section 2.2, paragraph 2. We remark that GCN-like models suffer from the same issue, and that skip connections (or similar constructs) may be readily used to effectively increase the depth to desirable levels. The primary benefit of leveraging attention, as opposed to prior approaches to graph-structured feature aggregation, is being able to (implicitly) assign different importances to different neighbours, while simultaneously generalising to a wide range of degree distributions---these differences are stated in our paper in various locations (e.g. Section 1, paragraph 8; Section 2.2, bullet point 2).\n\nWe thank you once again for your review, which has definitely helped make our paper’s contributions stronger!\n", "Thank you very much for your detailed review! Please refer to our global comment above for a list of all revisions we have applied to the paper---we are hopeful that they have addressed your comments appropriately.\n\nFabian has indeed correctly identified that half of our attention weights were spurious. We have now rectified this by applying a simple nonlinearity (the LeakyReLU) prior to normalising, and anticipated that its application would provide better performance to the model on the PPI dataset (which has a large number of training nodes). Indeed, we noticed no discernible change on Cora and Citeseer, but an increase in F1-score on PPI (now at 0.973 +- 0.002 after 10 runs; previously, as given in our reply to one of the comments below, it was 0.952 +- 0.006). The new results may be found in Tables 2 and 3.\n\nIn the meantime, we have been successful at leveraging TensorFlow’s sparse_softmax operation, and produced a sparsified version of the GAT layer. We are happy to provide results on Pubmed, and they are now given in the revised version of the paper (see Table 2 for a summary). We were able to match state-of-the-art level performance of MoNet and GCN (at 79.0 +- 0.3% after 100 runs). Similarly to the MoNet paper authors, we had to revise the GAT architecture slightly to accommodate Pubmed’s extremely small training set size (of 60 examples), and this is clearly remarked in our experimental setup (Section 3.3).\n\nFinally, quoting directly from the work of Kipf and Welling:\n\n“We trained and tested our model on the same dataset splits as in Yang et al. (2016) and report mean accuracy of 100 runs with random weight initializations.”\n\nThis implies that the splits were not randomised in the result reported by the GCN paper (specifically, the one used to compare with other baseline approaches), but only the model initialisation---and this is exactly what we do as well. We, in fact, use exactly the code provided by Thomas Kipf at https://github.com/tkipf/gcn/blob/master/gcn/utils.py#L24 to load the dataset splits.\n\nWe have added all the required references to MoNet and LLE (and many other pieces of related work) in the revised version (Section 1, paragraphs 6 and 9; also Section 2.2, bullet point 5) - thank you for pointing out LLE to us, which is an interesting and relevant piece of related work!\n\nWe thank you once again for your review, which has definitely helped make our paper’s contributions stronger!\n", "First of all, thank you very much for your thorough review, and for the variety of useful pointers within it! 
Please refer to our global comment above for a list of all revisions we have applied to the paper---we are hopeful that they have addressed your comments appropriately.\n\nWe have now added all the references to attention-like constructions (such as MoNet and neighbourhood attention) to our related work, as well as memory networks (see Section 1, paragraphs 6 and 9; also Section 2.2, bullet point 5). We fully agree with your comments about the increase in parameter count with multi-head attention, computational redundancy, and comparative advantages of GPUs in this domain, and have explicitly added them as remarks to our model’s analysis (in Section 2.2, bullet point 1 and paragraph 2). \n\nWhile we agree that the graph structure is given in many interesting cases, in our approach we specifically sought to produce an operator explicitly capable of solving inductive problems (which appear often, e.g., in the biomedical domain, where the method needs to be able to generalise to new structures). A potential way of reconciling this when a graph structure is provided is to combine GAT-like and spectral layers in the same architecture.\n\nFurther experiments (as discussed by us in all previous comments) have also been performed and are now explicitly listed in the paper’s Results section (please see Tables 2 and 3 for a summary). We have also attempted to use the GAT coefficients as the aggregation matrix for GCNs (both in an averaged and multi-head manner)---but found that there were no clear performance changes compared to using the Laplacian.\n\nWe thank you once again for your review, which has definitely helped make our paper’s contributions stronger!", "Thank you very much for spotting this! We have now updated our method to make advantage of all the weights (by applying a simple nonlinearity to the output before normalisation), and will be sure to acknowledge you in the final version of our paper!", "In equation 3 the coefficients are calculated as a softmax. However, it appears that the first half of the weight vector \"a\" beloning to the node \"i\" under consideration drops out of the equation and is thus not used nor trained.\n\nFrom equation 3:\nalpha(i,j) = exp(a * [W*h(i)||W*h(j)]) / Sum(k) exp(a * [W*h(i)||W*h(j)])\n\nWriting vector a explicitly in two parts as a = [a(1)||a(2)]:\n\nalpha(i,j) = exp([a(1)||a(2)] * [W*h(i)||W*h(j)]) / Sum(k) exp([a(1)||a(2)] * [W*h(i)||W*h(j)])\n = exp(a(1)*W*h(i) + a(2)*W*h(j) ) / Sum(k) exp(a(1)*W*h(i) + a(2)*W*h(k) ) \n = exp(a(1)*W*h(i) ) exp(a(2)*W*h(j) ) / Sum(k) exp(a(1)*W*h(i)) exp(a(2)*W*h(k) ) \n = exp(a(2)*W*h(j) ) / Sum(k) exp(a(2)*W*h(k) ) \n\nThe a(1) part drops out", "Thank you for your comments and queries on the complexity and experimental setup!\n\nWe fully agree that 100-run performance would give the fairest comparison to the baselines. Accordingly, results after 100 runs of our model largely follow the trend of the 10-run result:\nCora: 83.0 +- 0.7 (maximum 84.3%)\nCiteseer: 72.6 +- 0.8 (maximum 74.2%)\n\nTo highlight: we have used a 1-run result (without any additional runs), rather than the best result, in the original writeup as submitted. Our N-run results already showed it is possible to achieve better single-run results than 83.3% and 74.0%, respectively. \n\nOur particular choice of attentional mechanism, a, is explicitly written out in Equation (3), clarified by the text immediately preceding this Equation, and illustrated by Figure 1 (left). 
It may be expressed as:\n\na(x, y) = a^T[x||y]\nwhere a is a learnable weight vector, || is concatenation, and ^T is transposition.\n\nThat is, it corresponds to a simple, linear, single-layer MLP with a single output neuron, acting on the concatenated features of the two nodes to compute the attention coefficient---largely similar to the original attention mechanism of Bahdanau et al.\n\nTheoretically, our model needs to compute the attentional coefficients e_{i, j} only across the edges of the graph, i.e. O(|E|) computations of a single-layer MLP overall, which are independent, and thus can be parallelised. This is on par with other baseline techniques (such as GCNs or Chebyshev Nets). Taking into account that we need to perform a matrix multiplication on each node's features (to transform the feature space from F to F' features), we may express the overall computational complexity of a single attention head's computations as O(|V| x F x F' + |E| x F'), where F is the input feature count, and F' the output feature count---keeping in mind that many of these computations are trivially parallelisable on a GPU.\n\nThe P(N, 2) or C(N, 2) values mentioned in your comment would correspond to a dense graph (E ~ V^2), where O(V^2) complexity is unavoidable regardless of which graph technique is selected.\n\nUnfortunately, even though the softmax computation on every node should be trivially parallelisable, we were unable to make advantage of our tensor manipulation framework to achieve this parallelisation, while retaining a favourable storage complexity (as its softmax function is optimised for same-sized vectors). This implied that we had to reuse the technique from the self-attention paper of Vaswani et al., wherein attention is computed over all pairs of nodes, with a bias value of -inf inserted into non-connected pairs of nodes. This required a storage complexity of O(V^2), and caused OOM errors on our GPU when the Pubmed dataset was provided---which is the reason for the lack of results on Pubmed. \n\nWe were, however, able to run our model on the PPI dataset, which has 3x the number of nodes and 18x the number of edges of Pubmed. We were able to do this as PPI is split into 24 disjoint graphs (with test graphs entirely unseen), allowing us to effectively batch the softmax operation. This should still demonstrate evidence of the fact our model is capable of retaining competitive performance when scaling up to larger graphs (and, perhaps more critically, that it is capable of inductive, as well as transductive, generalisation).\n\nWe thank you once again for your comments, and will be sure to include some aspects of the above discussion in a revised version of the paper!", "The proposed GAT needs to compute e_{i,j} for arbitrary i, j in the graph. Thus the total number of e_{i,j} is P(N,2) for directed graph and C(N,2) for undirected graph. When the graph size (N) increases, the computational complexity increases quickly. Can the author show the computational complexity and compare it with existing methods? Also, what is the definition of attentional mechanism \"a\" in eqn (1)?\n\nIn Kipf's GCN paper, the performance is claimed being the results of an average of 100 random initializations. However, this paper using the best performance to compare with others' average performance is not reasonable. 
From the last comment we are informed that the average performance for 10 runs of the same model with different random seeds are (with highlighted standard deviations): Cora: 83.0 +- 0.6 (with a maximum of 83.9%) Citeseer: 72.7 +- 0.7 (with a maximum of 74.2%). For a fair comparison with the baseline, the authors may provide the average performance for 100 runs.\n\nAlso, I am curious of the reason why the author did not show the Pubmed dataset results, which is used with Cora and Citeseer togethor in existing graph CNN works. Pubmed's graph size is much larger than the other two, so it is an important dataset to test the proposed method and compare with the baselines.", "First of all, thank you very much for your thorough comment and thoughts on the experimental setup! \n\nWe directly quoted back the baseline results originally reported, under the assumption that appropriate hyperparameter optimisation had already been performed on them. However, we have now performed further experiments on the baseline techniques, in line with some of your recommendations, and the results of this study still point to an outperformance by GAT models. We focused on the experiments that were easily runnable without significantly modifying the codebases at https://github.com/tkipf/gcn and https://github.com/williamleif/GraphSAGE. Our findings can be summarised as follows, and will be highlighted in an updated version of the paper:\n\nCora/Citeseer: We have trained the GCN and Chebyshev (K = 2 and K = 3) models, with a hidden size of 64, with ReLU and ELU activations, for 10 runs each. Note that we did not need to add an additional input linear layer (as suggested by the comment), given that the code at https://github.com/tkipf/gcn/blob/master/gcn/layers.py#L176 already does this.\n\nThe best-performing models achieved the following mean +- std results:\n\nCora: 81.5 +- 0.7 (Cheby2 ReLU)\nCiteseer: 71.0 +- 0.3 (Cheby3 ELU and GCN ReLU)\n\nThese results are still outperformed by both our model's single-run performance (as in our paper) and 10-run performance (as in our reply to a previous comment below).\n\nPPI: Firstly, we would like to note that our model actually considers three-hop neighbourhoods (rather than four), and that the GraphSAGE models feature skip connections---in fact, our model only has one skip connection in total whereas GraphSAGE has a skip connection for every aggregation layer (the concat operation in Line 5 of Algorithm 1 in https://arxiv.org/abs/1706.02216). The authors of GraphSAGE have, in fact, highlighted that this skip connection was critical to their performance gains.\n\nIn line with this, we have tested a wide range of larger GraphSAGE models with three aggregation layers, with both ReLU and ELU activations, spanning feature counts up to 1024. Specially, for the third layer we focused on feature counts of 121 and 726, as our GAT model’s final aggregation layer also acts as a classification layer, computing 6 * 121 features which are then pointwise-averaged. Some of these combinations resulted in OOM errors, with the best performing one being a GraphSAGE-LSTM model computing [512, 512, 726] features, with 128 features being used for aggregating neighbourhoods, using the ELU activation. This approach achieved a micro-F1 score of 0.648 on PPI. 
We have found it beneficial to let the model train for more epochs compared to the original authors' work, and were able to reach a maximal test micro-F1 score of 0.768 after doing so.\n\nThis is still outperformed by a significant margin by both our single-model result (reported in the paper) and our 10-run result (reported in a reply to a previous comment below).\n\nFinally, as pointed out by the comment, we report that, for a pre-trained GAT model on Cora, the mean attentional coefficients in the hidden layer (across all eight attention heads) are 0.275 for the self-edge and 0.185 for the neighbourhood edges.", "The experiment section is not clearly indicative of what attributed to the improvement in results. \n\nFor experiments on Cora and Citeseer datasets, the authors have used the same train/test/val split as used in Kipf&Welling, ICLR'17. Though the authors have used the same split, it is not sufficient to compare them on the original reported results from the baseline papers to indicate that the attention mechanism alone is providing improved results without analysing the \ndifferences in architecture and experiment results. The proposed model besides the proposed attention mechanism has additional learning capacity as the model introduces an additional linear layer for input projection and has more number of hidden units per layer. Kipf's GCN has reported results with hidden units set to 16 whereas the size of the attention feature size (8*8) is 64. It is not clear how much improvement does the increased hidden size provides. It would be clear if the authors report results for the GCN and Chebyshev model with and additional input linear layer and hidden size set to 64. And also report the effect of use of elu activation functions instead of RELU as previously mentioned by Thomas Kipf in his comment.\n\nSimilarly in the inductive learning task the attention feature size is 1024 whereas the max feature size for the GraphSage models are 256. The GraphSage results are reported with partial neighborhood information from 2 hop neighbors. Whereas, in this paper the authors have used skip connections (also used in GCN) and the complete neighborhood information from 4-hop neighbors. No analysis of the effect of these two components are mentioned. It is not clear how the proposed model would perform under the same setting as in GraphSage (2-hop and partial neighborhood) or how much improvement would GraphSage obtain with skip connections and 4-hop information. On a side note, as GraphSage is not efficient to work with complete neighborhood, the authors can use Kipf's implementation of GCN and Chebyshev to report results on PPI with 4-hop complete neighborhood information and skip-connection. With these additional experiments, it would be clear how much improvement does the proposed attention mechanism exactly provides.\n\nFurther, I'm surprised that attention mechanism can provide improved results especially with Cora and Citeseer where the average degree is less than 2. These two datasets are highly homophilous. It would be useful if the authors report the mean attention score for the self edge and its neighbors.\n", "Thank you for the kind feedback, the plethora of useful related work, and the queries!\n\nWe have already noted the relationship of our work to MoNets and VAIN (as given in our replies to the authors below). The work on Neighbourhood attention is also relevant, and will also be cited appropriately alongside the related work by Santoro et al. 
(which we already cited in the original version). Also, the improved neighbourhood attention might hold interesting future work avenues (such as introducing an edge-wise 'message passing' network whose outputs one can attend over).\n\nWe have utilised exactly the same training/validation/testing splits for Cora and Citeseer as the ones used in Kipf & Welling. This information should be already highlighted in the description of our experimental setup. In fact, for extracting the dataset we use exactly the code provided at: https://github.com/tkipf/gcn/blob/master/gcn/utils.py\n\nWe have found early on in our experiments that the properties of the ELU function are convenient for simplifying the optimisation process of our method - reducing the amount of effort invested in our hyperparameter search.", "Thank you very much for your comment, and pointing us to this work! MoNets are definitely a highly relevant piece of related work to ours, and therefore they will receive appropriate treatment and a citation in the subsequent revision of our paper.\n\nWe find that our work can indeed be reformulated as a particular case of the MoNet framework. Namely, setting the pseudo-coordinate function to be \n\nu(x, y) = f(x) || f(y)\n(where f(x) represent (potentially MLP-transformed) features of node x, and || is concatenation) \n\nand the weight function to be \n\nw_j(u) = softmax(MLP(u))\n(with the softmax performed over the entire neighbourhood of a node)\n\nwould make the patch operator similar to ours. \n\nThis could be interpreted as a way of integrating the ideas of self-attentional interfaces (such as the work of Vaswani et al.: https://arxiv.org/abs/1706.03762 ) into the patch-operator framework presented by MoNet. Specially, and in comparison to the previously specified MoNet frameworks, our model uses node features for similarity computations, rather than the node's structural properties (such as their degrees in the graph). This, in combination with using a multilayer perceptron for computing the attention coefficients, allows the network more freedom in the way it chooses to express similarities between different nodes in the graph, irrespective of the local topological properties. The addition of the softmax function ensures that these coefficients will be well-behaved (and potentially probabilistically interpretable).\n\nLastly, our work also features a few stabilising additions to the attention model (to better cope with the smaller training set sizes), such as applying dropout on the computed attention coefficients, exposing the network to a stochastically sampled neighbourhood on every iteration. Such regularisation techniques might be harder to interpret or justify when structural properties are used as pseudo-coordinates, as stochastically dropping neighbours changes e.g. the node degrees.\n\nTo avoid any potential confusion for other readers of this discussion, we would like to also highlight that the arXiv link for MoNets that we referred to is: https://arxiv.org/pdf/1611.08402.pdf ", "Thank you for the positive feedback, as well as bringing your paper to our attention! We have found it to be very interesting related work, and will be sure to cite it in a subsequent version of our paper (most likely alongside our existing citation of the work of Santoro et al.: https://arxiv.org/abs/1706.01427 ). 
We highlight a few comparisons between our approaches that are worth mentioning below.\n\nWe compute attention coefficients using an edge-wise mechanism, rather than a node-wise mechanism followed by an edge-wise distance metric. This is suitable for a graph setting (with neighbourhoods specified by the graph structure), because we can only evaluate this mechanism across the edges that are in the graph (easing the computational load). In a multi-agent setting (as the one explored by your paper), there may not be an immediately-obvious such structure, and this is why one has to resort to specifying interactions across all pairs of agents (at least initially, before the kind of pruning by way of k-NN could be performed). As we focus on making per-node predictions in graphs, we also found it useful for a node to attend over its own features, which your proposed model explicitly disallows. Our work also features a few stabilising additions to the attention model (to better cope with the smaller training set sizes), such as multi-head attention and dropout on the computed attention coefficients.", "Very nicely presented work.\n\nI was wondering how much influence the ELU activation function had on your results? It looks like all baseline models make use of ReLU for easier comparison. \n\nIn terms of datasets: did you use the same splits for Cora and Citeseer as in previous work (e.g. Kipf&Welling, ICLR2017), or did you merely use the same size of split and resample? In my experience, the choice of train/val/test splits can have a very significant impact on test performance (it is possible to get up to 84% accuracy on Cora using a lucky train/val/test split with earlier models as well).\n\nAs mentioned by others, you might want to refer to earlier work on attention mechanisms for graph neural networks or using multiple basis functions (\"attention heads\"), such as in the MoNet paper. Here are some references:\n\nhttps://arxiv.org/abs/1611.08402 - MoNets: looks like your model is a special case of theirs, they also compare on the same kinds of tasks but avoid scalability issues by not having the softmax attention formalism\nhttps://arxiv.org/abs/1703.07326 - Introduces \"Neighborhood attention\"\nhttps://arxiv.org/abs/1706.06383 - Improved version of \"Neighborhood attention\"\nhttps://arxiv.org/abs/1706.06122 - Attention mechanism in a graph neural net model for multi-agent reinforcement learning", "Thank you very much for your comment - we acknowledge that this detail about our experimental setup was not sufficiently clear in the submitted version and are more than happy to address it appropriately in a subsequent revision.\n\nWe have picked the best hyperparameter configuration considering the validation score on both Cora and PPI, and then reused the Cora architectural hyperparameters on Citeseer. Once the hyperparameters were in place, the early-stopped models were then evaluated on the test set once, and the obtained results are the ones reported in the paper.\n\nWe agree that reporting the averaged model performance would be useful, and we will do this in an updated version of the paper. 
The results after 10 runs of the same model with different random seeds are (with highlighted standard deviations):\n\nCora: 83.0 +- 0.6 (with a maximum of 83.9%)\nCiteseer: 72.7 +- 0.7 (with a maximum of 74.2%)\nPPI: 0.952 +- 0.006 (with a maximum of 0.966) \n\nThese correspond to state-of-the-art results across all three datasets.", "The model you propose looks very similar to mixture model networks (MoNet):\n\nhttp://arxiv.org/pdf/1611.0840.pdf (appeared as oral at CVPR 2017)\n\nwhich you did not cite. \n\nMoNet model performed better than GCN and Chebyshev net (both of which can be considered as a particular instance thereof). What is the difference/similarity of your approach compared to MoNet? ", "Interesting work! \n\nI've done some related work, that will be presented at NIPS: https://arxiv.org/abs/1706.06122\nI wonder how the two works compare?", "In the main results in the accuracies in Table 2 and F1 scores on Table 3, are those numbers averaged over multiple training instances of the model with random initializations or are they the numbers corresponding to the best performing model? In the former case, how many random instances is it averaged over? " ]
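Much of the discussion in the record above turns on Equation (3) of the GAT paper: the attention coefficient computed from a^T[Wh_i || Wh_j], Fabian Jansen's observation that, without a nonlinearity, the half of the weight vector acting on node i cancels inside the softmax, and the authors' fix of inserting a LeakyReLU before normalising. The small NumPy sketch below reproduces that computation on toy data so the cancellation and the fix are easy to check numerically. It is not the authors' code; the toy graph, the feature sizes, and the 0.2 LeakyReLU slope are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: 4 nodes, F = 3 input features, F' = 2 transformed features.
H = rng.normal(size=(4, 3))        # node features h_i
W = rng.normal(size=(3, 2))        # shared linear transformation
a1 = rng.normal(size=2)            # half of the attention vector acting on W h_i
a2 = rng.normal(size=2)            # half acting on W h_j
neighbors = {0: [0, 1, 2], 1: [0, 1, 3], 2: [0, 2], 3: [1, 3]}  # self-loops included

def attention(i, use_leaky_relu=True):
    z = H @ W                                    # transformed features W h
    logits = np.array([a1 @ z[i] + a2 @ z[j] for j in neighbors[i]])
    if use_leaky_relu:                           # nonlinearity applied before the softmax
        logits = np.where(logits > 0, logits, 0.2 * logits)
    logits = logits - logits.max()               # numerical stability
    weights = np.exp(logits)
    return weights / weights.sum()               # softmax over the neighborhood of node i

# Without the nonlinearity, the a1 @ z[i] term is the same constant for every
# neighbor j, so it cancels in the softmax and the coefficients ignore node i's
# own features (the cancellation pointed out in the discussion above).
print(attention(0, use_leaky_relu=False))
print(attention(0, use_leaky_relu=True))
```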
[ 6, 7, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 5, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_rJXMpikCZ", "iclr_2018_rJXMpikCZ", "iclr_2018_rJXMpikCZ", "iclr_2018_rJXMpikCZ", "S1vzCb-bz", "ryFW0bhlM", "BJth1UKlf", "Hyresurlz", "iclr_2018_rJXMpikCZ", "Sy2YY-MlM", "iclr_2018_rJXMpikCZ", "HyL2UVDJG", "iclr_2018_rJXMpikCZ", "HJk72Fekz", "r1KPYM0Cb", "ryGdFlCCb", "iclr_2018_rJXMpikCZ", "B16obCO0W", "iclr_2018_rJXMpikCZ", "iclr_2018_rJXMpikCZ", "iclr_2018_rJXMpikCZ" ]
iclr_2018_BywyFQlAW
Minimax Curriculum Learning: Machine Teaching with Desirable Difficulties and Scheduled Diversity
We introduce and study minimax curriculum learning (MCL), a new method for adaptively selecting a sequence of training subsets for a succession of stages in machine learning. The subsets are encouraged to be small and diverse early on, and then larger, harder, and allowably more homogeneous in later stages. At each stage, model weights and training sets are chosen by solving a joint continuous-discrete minimax optimization, whose objective is composed of a continuous loss (reflecting training set hardness) and a discrete submodular promoter of diversity for the chosen subset. MCL repeatedly solves a sequence of such optimizations with a schedule of increasing training set size and decreasing pressure on diversity encouragement. We reduce MCL to the minimization of a surrogate function handled by submodular maximization and continuous gradient methods. We show that MCL achieves better performance and, with a clustering trick, uses fewer labeled samples for both shallow and deep models while achieving the same performance. Our method involves repeatedly solving constrained submodular maximization of an only slowly varying function on the same ground set. Therefore, we develop a heuristic method that utilizes the previous submodular maximization solution as a warm start for the current submodular maximization process to reduce computation while still yielding a guarantee.
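The abstract describes MCL only at a high level: an outer loop that alternates greedy submodular maximization (to pick a hard but diverse subset) with gradient steps on the model weights, while the diversity weight is annealed down and the subset size grows. The toy sketch below illustrates that alternating scheme on a 1-D regression problem. It is a schematic reading of the abstract and of the reviews further down, not the authors' algorithm or code; the facility-location diversity term, the step sizes, and the schedules are assumptions chosen purely for illustration, and the warm-start trick for the greedy step mentioned in the abstract is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 1-D regression, per-sample loss_i(w) = (w * x_i - y_i)^2.
x = rng.normal(size=30)
y = 2.0 * x + 0.1 * rng.normal(size=30)
sim = np.exp(-np.abs(x[:, None] - x[None, :]))   # similarities used by the diversity term

def losses(w):
    return (w * x - y) ** 2

def facility_location(subset):
    # Submodular "diversity" score: how well the chosen subset covers all samples.
    if not subset:
        return 0.0
    return sim[:, sorted(subset)].max(axis=1).sum()

def greedy_subset(w, k, lam):
    # Greedily pick k samples maximizing per-sample loss + lam * diversity gain,
    # i.e. the "hard but diverse" inner maximization of one minimax stage.
    chosen, per_sample_loss = set(), losses(w)
    for _ in range(k):
        gains = [(per_sample_loss[j]
                  + lam * (facility_location(chosen | {j}) - facility_location(chosen)), j)
                 for j in range(len(x)) if j not in chosen]
        chosen.add(max(gains)[1])
    return sorted(chosen)

w, lam, k = 0.0, 1.0, 3
for stage in range(6):
    subset = greedy_subset(w, k, lam)
    for _ in range(20):                          # a few gradient steps on the chosen subset
        grad = np.mean(2.0 * (w * x[subset] - y[subset]) * x[subset])
        w -= 0.1 * grad
    lam *= 0.5                                   # decrease the pressure on diversity
    k = min(k + 2, len(x))                       # grow the training subset
    print(f"stage {stage}: k={k}, lambda={lam:.3f}, w={w:.3f}")
```

The printed lines only track how the subset grows and the diversity pressure shrinks across stages while the weight estimate settles near the true slope.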
accepted-poster-papers
The submission formulates self-paced learning as a specific iterative minimax optimization, which incorporates both a risk minimization step and a submodular maximization step for selecting the next training examples. The strengths of the paper lie primarily in the theoretical analysis, while the experiments are somewhat limited to simple datasets: News20, MNIST, and CIFAR-10. Additionally, the main paper is probably too long in its current form, and could benefit from some of the proof details being moved to the appendix.
train
[ "BJcnVd6mG", "BkbPVPzgG", "H1-u-QCef", "HkO3F9EbM", "BkJNS_6Xf", "H1xfB_TXM", "SyPKEdpQM" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "Thanks for your positive comments about the theoretical analysis and helpful suggestions to extend Theorem 1! In the new revision, we've added a 4.5-page analysis. This does not only complete the analysis of Theorem 1, but also shows the convergence speed for both the outer-loop and the whole algorithm, and show bounds as functions of hyperparameters. The results support our scheduling strategy for $\\lambda$ and $k$. A summary of the newly added analysis can be found in our new uploaded comments above your review comments.\n", "Overview:\nThis paper proposes an approach to curriculum learning, where subsets of examples to train on are chosen during the training process. The proposed method is based on a submodular set function over the examples, which is intended to capture diversity of the included examples and is added to the training objective (eq. 2). The set is optimized to be as hard as possible (maximize loss), which results in a min-max problem. This is in turn optimized (approximately) by alternating between gradient-based loss minimization and submodular maximization. The theoretical analysis shows that if the loss is strongly convex, then the algorithm returns a solution which is close to the optimal solution. Empirical results are presented for several benchmarks.\nThe paper is mostly clear and the idea seems nice. On the downside, there are some limitations to the theoretical analysis and optimization scheme (see comments below).\n\nComments:\n- The theoretical result (thm. 1) studies the case of full optimization, which is different than the proposed algorithm (running a fixed number of weight updates). It would be interesting to show results on sensitivity to the number of updates (p).\n- The algorithm requires tuning of quite a few hyperparameters (sec. 3).\n- Approximating a cluster with a single sample (sec. 2.3) seems rather crude. There should be some theoretical and/or empirical study of its effect on quality of the solution.\n\nMinor/typos:\n- what is G(j|G\\j) in eq. (9)?\n- why cite Anonymous (2018) instead of Appendix...?\n- define V in Thm. 1.\n- in eq. (4) it may be clearer to denote g_k(w). Likewise in eq. (6) \\hat{g}_\\hat{A}(w), and in eq. (14) \\tilde{g}_{\\cal{A}}(w).\n- figures readability can be improved.", "This paper introduces MiniMax Curriculum learning, as an approach for adaptively train models by providing it different subsets of data. The authors formulate the learning problem as a minimax problem which tries to choose diverse example and \"hard\" examples, where the diversity is captured via a Submodular Loss function and the hardness is captured via the Loss function. The authors formulate the problem as an iterative technique which involves solving a minimax objective at every iteration. The authors argue the convergence results on the minimax objective subproblem, but do not seem to give results on the general problem. The ideas for this paper are built on existing work in Curriculum learning, which attempts to provide the learner easy examples followed by harder examples later on. 
The belief is that this learning style mimics human learners.\n\nPros:\n- The analysis of the minimax objective is novel and the proof technique introduces several interesting ideas.\n- This is a very interesting application of joint convex and submodular optimization, and uses properties of both to show the final convergence results\n- Even through the submodular objective is only approximately solvable, it still translates into a convergence result\n- The experimental results seem to be complete for the most part. They argue how the submodular optimization does not really affect the performance and diversity seems to empirically bring improvement on the datasets tried.\n\nCons:\n- The main algorithm MCL is only a hueristic. Though the MiniMax subproblem can converge, the authors use this in somewhat of a hueristic manner.\n- It seems somewhat hand wavy in the way the authors describe the hyper parameters of MCL, and it seems unclear when the algorithm converge and how to increase/decrease it over iterations\n- The objective function also seems somewhat non-intuitive. Though the experimental results seem to indicate that the idea works, I think the paper does not motivate the loss function and the algorithm well.\n- It seems to me the authors have experimented with smaller datasets (CIFAR, MNIST, 20NewsGroups). This being mainly an empirical paper, I would have expected results on a few larger datasets (e.g. ImageNet, CelebFaces etc.), particularly to see if the idea also scales to these more real world larger datasets.\n\nOverall, I would like to see if the paper could have been stronger empirically. Nevertheless, I do think there are some interesting ideas theoretically and algorithmically. For this reason, I vote for a borderline accept. ", "The main strength of this paper, I think, is the theoretical result in Theorem 1. This result is quite nice. I wish the authors actually concluded with the following minor improvement to the proof that actually strengthens the result further.\n\nThe authors ended the discussion on thm 1 on page 7 (just above Sec 2.3) by saying what is sufficiently close to w*. If one goes back to (10), it is easy to see that what converges to w* when one of three things happen (assuming beta is fixed once loss L is selected).\n\n1) k goes to infinity\n2) alpha goes to 1\n3) g(w*) goes to 0\n\nThe authors discussed how alpha is close to 1 by virtue of submodular optimization lower bounds there for what is close to w*. In fact this proof shows the situation is much better than that. \n\nIf we are really concerned about making what converge to w*, and if we are willing to tolerate the increasing computational complexity associated solving submodular problems with larger k, we can schedule k to increase over time which guarantees that both alpha goes to 1 and g(w*) goes to zero. \n\nThere is also a remark that G(A) tends to be modular when lambda is small which is useful.\nFrom the algorithm, it seems clear that the authors recognized these two useful aspects of the objective and scheduled lambda to decrease exponentially and k to increase linearly.\n\nIt would be really nice to complete the analysis of Thm1 with a formal analysis of convergence speed for ||what-w*|| as lambda and k are scheduled in this fashion. Such an analysis would help practitioners make better choices for the hyper parameters gamma and Delta.", "In the new revision, we've added a 4.5-page analysis to show the convergence speed of both outer-loop and the whole algorithm. 
A summary of the newly added analysis can be found in our new uploaded comments.\n\nReply to Comments:\n\nTheorem 3 analyzes the convergence rate of the whole algorithm presented in Algorithm 1 with a fixed number of weight updates $p$ in each inner-loop. The first term in the bound exponentially decreases with power $p$. \n\nThe convergence bounds in both Theorem 2 and Theorem 3 are functions of all hyperparameters $\\lambda$, $\\Delta$ and $p$. They show that exponentially decreasing $\\lambda$ is sufficient to guarantee a linear rate of convergence, while choosing small $\\Delta$ and $p$ make the algorithm efficient in computation. These theoretical analysis allows us to tune the hyperparameters in relatively small ranges.\n\nInstead of representing the whole cluster by the centroid everywhere, we only represent the hardness of a cluster by the loss on its centroid. By setting the number of clusters to be a large value, e.g., 1000 clusters for 50000 samples in our experiments, this hardness representation is accurate enough. It not only saves computation spent on submodular maximization in practice, but also makes the algorithm more robust to outliers, because it avoids selecting a single (or a few number of) outliers with extremely large loss.\n\nReply to Minor/typos:\n\nG(j|G\\j) contains a typo, it should be G(j|V\\j)=G(V)-G(V\\j), the marginal gain of element j conditioned on all the other elements in ground set V except j. Thanks for pointing this out! \n\nIn the revision, 1) we changed all citations to Anonymous (2018) to specific sections in Appendix; 2) we define V in Theorem 1 and all other Theorems; 3) for simplicity of representation, we use g() without subscript when it causes no confusion. For example, Theorem 1 and Lemma 2 holds for any iteration in outer-loop, so we ignore the subscript of g(). When discussing relationship between different iterations of outer-loop, we add subscript to w in g(w) (e.g., in proof of Theorem 2) or add subscipt to g() (e.g., in proof of Theorem 3). ", "In the new revision, we add 4.5-page analysis to show the convergence speed for both the outer-loop and the whole algorithm. A summary of the newly added analysis can be found in our new uploaded comments.\n\nReply to Cons:\n\nTheorem 3 in the new revision gives the convergence analysis for the whole algorithm, each of whose inner-loop uses fixed number of updates to approximately solve a minimax problem. It does not only show convergence, but also shows convergence rate for both the inner-loop and outer-loop. \n\nIn Theorem 2 and Theorem 3, we show convergence bounds as functions of all hyperparameters. These results give strong intuition for how to choose the hyperparameters. They show that exponentially decreasing $\\lambda$ is sufficient to guarantee a linear rate of convergence, while choosing small $\\Delta$ and $p$ make the algorithm efficient computationally. In practice, we use grid search with small ranges to achieve the hyperparameters used in experiments.\n\nThe intuitions behind the objetive function can be found in the two paragraphs above Section 1.1, the last two paragraphs of Section 1.1, and the first paragraph of Section 2. In these places, we provide evidence based on the nature of machine learning model/algorithms, the similarity to the human teaching/learning process, and the comparison to previous works. In addition, the objective function has nice theoretical properties. 
Our newly added theoretical analysis supports that decreasing diversity weight $\\lambda$ and increasing hardness $k$ can improve the convergence bound. This provides further theoretical support.\n\nOur experiments verify several advantages of the proposed minimax curriculum learning across three different models and datasets. Our basic goal is to prove the idea of decreasing diversity and increasing hardness for general machine learning problems. This idea has never been studied before, either theoretically or empirically, as far as we know. We are working on experiments for much larger datasets such as ImageNet and COCO, and will make the results available as soon as we can.", "We note that both Reviewer2 and Reviewer3 wish to see an analysis of the whole algorithm, and more details on hyperparameter tuning issues. Reviewer1 also provides helpful suggestions on how to strengthen Theorem 1's result. In fact, more complete theoretical analysis is the main concern of all reviewers. In the new revision, we've added a 4.5-page mathematical analysis giving a convergence rate of the whole algorithm with the scheduling of $k$ and $\\lambda$. The result also shows how to set hyperparameters to change the convergence. Here is a summary.\n\n1) Theorem 2 shows that either decreasing $\\lambda$ exponentially or increasing $k$ exponentially results in a linear convergence rate for the outer-loop of our algorithm. It also shows that using a scheduling with decreasing $\\lambda$ or/and increasing $k$ can gradually improve the bound. This supports our intuition of decreasing diversity and increasing hardness.\n\n2) Theorem 3 gives the convergence rate of the whole algorithm (each inner-loop runs only $p$ iterations). It shows linear convergence rate for both the inner-loop and outer-loop. The bound has two terms, one decreases exponentially with power $p$ (#iterations for inner-loop) and the other decreases exponentially with power $T$ (#iterations for outer-loop). \n\n3) Convergence bounds in both Theorem 2 and Theorem 3 contains all the hyperparameters $\\gamma$, $\\Delta$ and $p$. They show how the bounds change with these hyperparameters, and can help to choose hyperparameters in practice. For example, they show that exponentially decreasing $\\lambda$ is sufficient to guarantee a linear rate of convergence, while choosing small $\\Delta$ (the additive $k$ increment) and $p$ make the algorithm efficient in computation.\n\n4) Potentially interesting to future analysis of more general continuous-combinatorial optimization: The constant factors in Theorem 2 implies that $\\kappa_F/\\beta$ (ratio between the curvature of submodular term and the strongly-convex constant of loss term) and $c_1$ (the minimal ratio between loss and singular gain over all samples) are two important quantities in analyzing convex-submodular hybrid optimization. The constant factor $c$ in Theorem 3 is a weighted sum of the optimal objective value of the minimax problem without the submodular term, and the maximal value for the submodular term only. It relates the convergence bound to the solutions of the two extreme cases of Eq.(2)." ]
[ -1, 5, 6, 6, -1, -1, -1 ]
[ -1, 3, 4, 3, -1, -1, -1 ]
[ "HkO3F9EbM", "iclr_2018_BywyFQlAW", "iclr_2018_BywyFQlAW", "iclr_2018_BywyFQlAW", "BkbPVPzgG", "H1-u-QCef", "iclr_2018_BywyFQlAW" ]
iclr_2018_B1n8LexRZ
Generalizing Hamiltonian Monte Carlo with Neural Networks
We present a general-purpose method to train Markov chain Monte Carlo kernels, parameterized by deep neural networks, that converge and mix quickly to their target distribution. Our method generalizes Hamiltonian Monte Carlo and is trained to maximize expected squared jumped distance, a proxy for mixing speed. We demonstrate large empirical gains on a collection of simple but challenging distributions, for instance achieving a 106x improvement in effective sample size in one case, and mixing when standard HMC makes no measurable progress in a second. Finally, we show quantitative and qualitative gains on a real-world task: latent-variable generative modeling. Python source code will be open-sourced with the camera-ready paper.
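The training criterion named in the abstract, expected squared jumped distance (ESJD), is easiest to see on a toy sampler. The sketch below estimates ESJD for a plain random-walk Metropolis proposal on a one-dimensional Gaussian target, averaging over the accept/reject decision rather than sampling it, as the author responses further down describe. It only illustrates the objective: L2HMC's actual proposals are learned leapfrog-style updates with a Jacobian correction, which are not reproduced here, and all settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def log_target(z):
    return -0.5 * z ** 2                     # standard normal target, up to a constant

def expected_squared_jump(scale, n=20000):
    # Monte Carlo estimate of E[ A(z -> z') * (z' - z)^2 ] for a random-walk
    # proposal z' = z + scale * eps.  The acceptance probability A enters the
    # loss directly (marginalized) instead of being sampled, following the
    # expected squared jumped distance objective of Pasarica and Gelman that
    # the author responses cite.
    z = rng.normal(size=n)                   # samples from the target
    zp = z + scale * rng.normal(size=n)      # symmetric proposal
    accept = np.minimum(1.0, np.exp(log_target(zp) - log_target(z)))
    return np.mean(accept * (zp - z) ** 2)

for scale in [0.5, 1.0, 2.5, 5.0, 10.0]:
    print(f"proposal scale {scale:>4}: ESJD ~ {expected_squared_jump(scale):.3f}")
```

Running it shows ESJD peaking at an intermediate proposal scale, which is the sense in which the criterion acts as a proxy for mixing speed.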
accepted-poster-papers
This paper presents a learned inference architecture that generalizes HMC. It defines a parameterized family of MCMC transition operators that, like HMC updates, remain invertible with a tractable Jacobian, which allows the acceptance ratio to be computed efficiently. Experiments show that the learned operators mix significantly faster on some simple toy examples, and evidence is presented that they can improve posterior inference for a deep latent variable model. The paper has not quite demonstrated the practical usefulness of the method, but it is still a good proof of concept for adaptive extensions of HMC.
val
[ "ryUPYorHz", "B1_so3fSM", "SJGzn0pNz", "ryeffj94z", "rJ0a6k9Nf", "Hksh6uugz", "HJdCshKgf", "rkZzfMqef", "By4O2iZVM", "r1-mLY6XG", "BkkI4roQM", "HyMf9EiXz", "ryQkGp_XG", "Bylo8owGM", "Syp2SsvzG", "rkeGIjvMf" ]
[ "public", "author", "public", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public", "author", "author", "public", "author", "author", "author" ]
[ "Yes, that is an interesting point that ReLU networks are routinely used together with stochastic gradient VI. My concern would then apply to these methods as well, even though the discontinuity in the MH method is inherent to the problem where for NNs it can be resolved by replacing ReLUs with a continuously differentiable alternative.\n\nIt is true that sub-gradients exists, but I believe the interaction between sub-gradients and integration requires some careful consideration.\n\nI think the paper is good work and my comments are mostly me being curious if you had considered this problem (and whether it is a problem) and had any ideas how to resolve it. Thanks for the discussion!", "While it is true that for some value of \\xi, the integrand is not differentiable, it does admit a sub-gradient everywhere. This is sufficient for optimization ( https://en.wikipedia.org/wiki/Subgradient_method ). Also note that ReLU networks demonstrate this same characteristic (continuous function, with discontinuities in the first derivative), but are routinely trained in deep learning.\n\nThank you again for your interest and thoughtful reading of our work!", "I think the key issue here is establishing whether the integrand in Eq. (8) is an absolutely continuous function of \\theta for almost all \\xi. Then you can use e.g. Theorem 3 here http://planetmath.org/differentiationundertheintegralsign to validate the interchange. The easier to validate Theorem 2, which is sufficient for most cases in stochastic gradient-based VI, does not hold for your integrand because assumption 2 is not valid for the function A(|). This because the derivative of A(|) does not exist whenever the ratio of densities is exactly one in Eq. (3). But perhaps it is easy to show the absolute continuous property of A(|) wrt \\theta for almost all \\xi?\n\nI do agree that this is not an issue of any discrete random variables nor the function described by the numerical integration of an Hamiltonian flow. My concern is merely if the discontinuity in the gradient of A(|) wrt \\theta will be an issue.\n\nAlso, thanks for a very interesting read!", "Thank you for your question, we believe our derivations are correct and the gradients unbiased.\n\nIn Eq. (8), p(\\xi) and q(\\xi) are not functions of the parameters \\theta and the loss inside the expectation is an (almost everywhere) differentiable function of \\xi. This allows us, when differentiating w.r.t \\theta to easily exchange (under mild assumptions) derivative and integration.\n\nIt is important to note that when optimizing, we are NOT sampling through the accept/reject step. Given a state \\xi, we move it forward using our (differentiable) generalized Hamiltonian dynamics and use that new proposed state for the loss; explicitly marginalizing over the accept/reject decision. We thus do not need to back-propagate through a discrete decision variable, making our gradients unbiased. This is additionally detailed in Pasarica and Gelman 2010.\n\nWe hope this answers your question!", "The accept-reject step of the MCMC kernel introduces a discontinuity in the function A( ) that depends on both the random variable xi AND the parameter being optimized with respect to. This means that interchanging the order of expectation (integration) and differentiation in eq. (8) is invalid in general. Have the authors considered this in their derivations? Can you prove that the gradients used for learning in Alg. 
1 are still unbiased, and thus will lead to convergence by standard stochastic approximation results? If the gradients are not unbiased (which I suspect is the case), have you studied the impact this has on the learning procedure?", "In this work, the authors propose a procedure for tuning the parameters of an HMC algorithm (I guess, if I have understood correctly).\n\nI think this paper has a good and strong point: this work points out the difficulties in choosing properly the parameters in a HMC method (such as the step and the number of iterations in the leapfrog integrator, for instance). In the literature, specially in machine learning, there is ``fever’’ about HMC, in my opinion, partially unjustified.\n\nIf I have understood, your method is an adaptive HMC algorithm where the parameters are updated online; or is the training done in advance? Please, remark and clarify this point.\n\nHowever, I have other additional comments:\n\n- Eqs. (4) and (5) are quite complicated; I think a running toy example can help the interested reader.\n\n- I suggest to compare the proposed method to other efficient methods that do not use the gradient information (in some cases as multimodal posteriors, the use of the gradient information can be counter-productive for sampling purposes), such as Multiple Try Metropolis (MTM) schemes\n\nL. Martino, J. Read, On the flexibility of the design of Multiple Try Metropolis schemes, Computational Statistics, Volume 28, Issue 6, Pages: 2797-2823, 2013, \n\nadaptive techniques, \n\nH. Haario, E. Saksman, and J. Tamminen. An adaptive Metropolis algorithm. Bernoulli, 7(2):223–242, April 2001,\n\nand component-wise strategies as Gibbs Sampling, \n\nW. R. Gilks and P. Wild, Adaptive rejection sampling for Gibbs sampling, Appl. Statist., vol. 41, no. 2, pp. 337–348, 199.
\n\nAt least, add a brief paragraph in the introduction citing and discussing this possible alternatives.", "The paper proposed a generalized HMC by modifying the leapfrog integrator using neural networks to make the sampler to converge and mix quickly. Mixing is one of the most challenge problems for a MCMC sampler, particularly when there are many modes in a distribution. The derivations look correct to me. In the experiments, the proposed algorithm was compared to other methods, e.g., A-NICE-MC and HMC. It showed that the proposed method could mix between the modes in the posterior. Although the method could mix well when applied to those particular experiments, it lacks theoretical justifications why the method could mix well. ", "The paper introduces a non-volume-preserving generalization of HMC whose transitions are determined by a set of neural network functions. These functions are trained to maximize expected squared jump distance.\nThis works because each variable (of the state space) is modified in turn, so that the resulting update is invertible, with a tractable transformation inspired by Dinh et al 2016.\n\nOverall, I believe this paper is of good quality, clearly and carefully written, and potentially accelerates mixing in a state-of-the-art MCMC method, HMC, in many practical cases. A few downsides are commented on below.\n\nThe experimental section proves the usefulness of the method on a range of relevant test cases; in addition, an application to a latent variable model is provided sec5.2. \nFig 1a presents results in terms of numbers of gradient evaluations, but I couldn't find much in the way of computational cost of L2HMC in the paper. I can't see where the number \"124x\" in sec 5.1 stems from. As a user, I would be interested in the typical computational cost of both \"MCMC sampler training\" and MCMC sampler usage (inference?), compared to competing methods. This is admittedly hard to quantify objectively, but just an order of magnitude would be helpful for orientation. \nWould it be relevant, in sec5.1, to compare to other methods than just HMC, eg LAHMC?\n\nI am missing an intuition for several things: eq7, the time encoding defined in Appendix C\n\nAppendix Fig5, I cannot quite see how the caption claim is supported by the figure (just hardly for VAE, but not for HMC).\n\nThe number \"124x ESS\" in sec5.1 seems at odds with the number in the abstract, \"50x\".\n\n# Minor errors\n- sec1: \"The sampler is trained to minimize a variation\": should be maximize\n\"as well as on a the real-world\"\n- sec3.2 \"and 1/2 v^T v the kinetic\": \"energy\" missing\n- sec4: the acronym L2HMC is not expanded anywhere in the paper\nThe sentence \"We will denote the complete augmented...p(d)\" might be moved to after \"from a uniform distribution\" in the same paragraph. 
\nIn paragraph starting \"We now update x\":\n - specify for clarity: \"the first update, which yields x' \"/ \"the second update, which yields x'' \"\n - \"only affects $x_{\\bar{m}^t}$\": should be $x'_{\\bar{m}^t}$ (prime missing)\n - the syntax using subscript m^t is confusing to read; wouldn't it be clearer to write this as a function, eg \"mask(x',m^t)\"?\n - inside zeta_2 and zeta_3, do you not mean $m^t\" and $\\bar{m}^t$ ?\n- sec5: add reference for first mention of \"A NICE MC\"\n- Appendix A: \n - \"Let's\" -> \"Let\"\n - eq12 should be x''=...\n- Appendix C: space missing after \"Section 5.1\"\n- Appendix D1: \"In this section is presented\" : sounds odd\n- Appendix D3: presumably this should consist of the figure 5 ? Maybe specify.", "We believe we are the most flexible parameterization of a Markov kernel to date. However, there has been previous work that proposes general purpose kernels. Most relevant is Song et al, which trains a flexible class of volume-constrained Markov proposals using an adversarial objective. (We discuss and experimentally compare against this approach in our paper.) \n\nThanks for the question!", "Thanks for the response! As I understand it then, your method is the first in literature to be able to train expressive MCMC kernels? (as if I recall correctly, in the past, the focus has been more on tuning a very limited number of parameters associated with the proposal distribution, like the variance of a gaussian proposal for ex.)", "We thank the reviewers for their valuable time and comments.\n\nWe updated the paper with the following modifications:\n- Clarified some points and fixed typos pointed out by the reviewers.\n- Added a ``Future Work section as well as additional relevant references.\n- Added a comparison with Look-Ahead HMC (LAHMC; Sohl-Dickstein et al. 2014) in the Appendix.\n\nAdditionally, in the process of revisiting our experiments to compare against LAHMC, we empirically found that weighting the second term of our loss (the ‘burn-in’ term) could lead to even more improved auto-correlation and ESS on the diagnostic distributions. We therefore updated the paper and report the results obtained with slightly tuning that parameter (setting it to 0 or 1).", "Thank you for your question.\n\nYou are correct that in general our method’s proposals cannot be interpreted as (approximately) integrating the dynamics of any Hamiltonian. Ultimately our goal (and that of HMC) is to produce a proposal that mixes efficiently, not to simulate Hamiltonian dynamics accurately.\n\nThere are many other trainable proposals for which we could compute the Jacobian, but not all will mix efficiently. By choosing a parameterized family of proposals that can mimic the behavior of HMC (and initializing it to do so), we ensure that our learned proposal performs at least as well as HMC.\n\nThe momentum-resampling step is essential, since it is the only source of randomness in the proposal. Using gradient information (d_x U(x)) is essential for giving the proposal information about the local geometry of the target distribution.", "I had one question -- in Equations 4-6, you have functions Q, T to rescale and translate the momentum and position. However it seems that Q, T are vectors and thus you are learning arbitrary transformations to d_x U(x)? \n\nIf that is the case, then I'm unclear on how your leapfrog operator guarantees (approximate) integration of the Hamiltonian. 
And if it does not and your goal is simply to learn proposals for which you can compute the Jacobian, then what's the purpose of the momentum resampling step and/or having the d_x U(x) term in the update at all?\n\nIf you could shed some light on that, that would be great!", "We first and foremost want to thank you for your time and extremely valuable comments. We have uploaded a new version of the paper based on the feedback, and have addressed specific points below.\n\nClarification about 50x vs 124x:\nWe decided against advertising the 124x number as it is misleading considering that HMC completely failed on this task; the correct ratio was too large for us to experimentally measure. As such, we reported the one for the Strongly-Correlated Gaussian. We clarified this in the text and detail that L2HMC can succeed when HMC fails.\n\nIntuition on Eq 7.:\nWe define this reciprocal loss to encourage mixing across the entire state space. The second term corresponds exactly to Expected Square Jump Distance, which we want to maximize as a proxy for mixing. The first term discourages a particle from not-moving at all in a region of state space -- if d(x, x’) = 0, the first term would be infinite. We clarified that part in the text.\n\nTime encoding:\nOur operator L_\\theta consists of the composition of M augmented leapfrog steps. For each of those leapfrog, the timestep t is provided as input to the networks Q, S and T. Instead of providing it as a single scalar value, we provide it as a 2-d vector [cos(2 * pi * t / M), sin(2 * pi * t / M)].\n\nRegarding samples in Fig5:\nSample quality and sharpness are inherently hard things to evaluate. Our observation was that many digits generated by L2HMC-DGLM look very sharp (Line 1 Column 2, Line 2 Column 8, Line 5 Column 2, Line 7 Columns 3 and 7…). However, we will weaken the claim in the caption.\n\nComparison with LAHMC:\nWe compared our method to LAHMC on the evaluated energy functions. L2HMC significantly outperforms LAHMC on all tasks, for the same number of gradient evaluations. LAHMC is also unable to mix between modes in the MoG case. Results are reported in Appendix C.1. \n\nWe also note that L2HMC could be easily combined with LAHMC, by replacing the leapfrog integrator of LAHMC with the learned one of L2HMC.\n\nIn the process of revisiting our experiments to compare against LAHMC, we empirically found that weighting the second term of our loss (the ‘burn-in’ term) could lead to even more improved auto-correlation and ESS on the diagnostic distributions. We therefore updated the paper and report the results obtained with slightly tuning that parameter (setting it to 0 or 1).\n\nQuestion about computation:\nFor the 2d-SCG case, on CPU, the training of the sampler took 160 seconds. The L2HMC overhead for sampling, with a batch-size of 200, was about 36%. This is negligible compared to an 106x improved ESS. We also should note that for the latent generative model case, we train the sampler online with the same computations used to train everything else; in that case L2HMC and HMC perform the exact same number of gradient evaluation of the energy and thus requires no training budget.\n\nThank you once again for your valuable feedback, we hope this helps answer your questions!", "Thank you very much for your review and comments. Guaranteeing mixing between modes is a fundamental (#P-Hard) problem. As such, we do not hope to solve it in the general case. 
Rather, we propose a method to greatly increase the flexibility and adaptability of a class of samplers which is already state of the art in many contexts. The relation between mixing time and expected square jump distance is thoroughly treated in [Pasarica & Gelman, 2010], and is the theoretical inspiration for our choice of training loss.\n\nWe further emphasize that, barring optimization issues, our method should always fare at least as well as HMC in terms of mixing.\n\nThank you once again, we have updated the text to more clearly discuss why our approach might be expected to lead to better mixing.\n\nAdditionally, in the process of revisiting our experiments to compare against LAHMC, we empirically found that weighting the second term of our loss (the ‘burn-in’ term) could lead to even more improved auto-correlation and ESS on the diagnostic distributions. We therefore updated the paper and report the results obtained with slightly tuning that parameter (setting it to 0 or 1).", "Thank you for your review and the pointer to references.\n\nWe wish to emphasize that our method is able, but not limited to, automatically tuning HMC parameters (which systems like Stan already have well-tested heuristics for). Our approach generalizes HMC, and is capable of learning proposal distributions that do not correspond to any tuned HMC proposal (but which can still be plugged into the Metropolis-Hastings algorithm to generate a valid MCMC algorithm). Indeed, in our experiments, we find that our approach significantly outperforms well-tuned HMC kernels.\n\nThe training is done during the burn-in phase, and the trained sampler is then frozen. This is a common approach to adapting transition-kernel hyperparameters in the MCMC literature. \n\nRegarding the references, we added those in the text. We also want to emphasize that all of these are complementary to and could be combined with our method. For example, we could incorporate the intuition behind MTM by having several parametric operators and training each one when used. \n\nAdditionally, in the process of revisiting our experiments to compare against LAHMC, we empirically found that weighting the second term of our loss (the ‘burn-in’ term) could lead to even more improved auto-correlation and ESS on the diagnostic distributions. We therefore updated the paper and report the results obtained with slightly tuning that parameter (setting it to 0 or 1)." ]
[ -1, -1, -1, -1, -1, 7, 6, 8, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, -1, -1, 4, 3, 2, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "B1_so3fSM", "SJGzn0pNz", "ryeffj94z", "rJ0a6k9Nf", "iclr_2018_B1n8LexRZ", "iclr_2018_B1n8LexRZ", "iclr_2018_B1n8LexRZ", "iclr_2018_B1n8LexRZ", "r1-mLY6XG", "HyMf9EiXz", "iclr_2018_B1n8LexRZ", "ryQkGp_XG", "iclr_2018_B1n8LexRZ", "rkZzfMqef", "HJdCshKgf", "Hksh6uugz" ]
iclr_2018_H1Yp-j1Cb
An Online Learning Approach to Generative Adversarial Networks
We consider the problem of training generative models with a Generative Adversarial Network (GAN). Although GANs can accurately model complex distributions, they are known to be difficult to train due to instabilities caused by a difficult minimax optimization problem. In this paper, we view the problem of training GANs as finding a mixed strategy in a zero-sum game. Building on ideas from online learning we propose a novel training method named Chekhov GAN. On the theory side, we show that our method provably converges to an equilibrium for semi-shallow GAN architectures, i.e. architectures where the discriminator is a one-layer network and the generator is arbitrary. On the practical side, we develop an efficient heuristic guided by our theoretical results, which we apply to commonly used deep GAN architectures. On several real-world tasks our approach exhibits improved stability and performance compared to standard GAN training.
accepted-poster-papers
This paper presents a GAN training algorithm motivated by online learning. The method is shown to converge to a mixed Nash equilibrium in the case of a shallow discriminator. In the initial version of the paper, reviewers had concerns about weak baselines in the experiments, but the updated version includes comparisons against a variety of modern GAN architectures which have been claimed to fix mode dropping. This seems to address the main criticism of the reviewers. Overall, this paper seems like a worthwhile addition to the GAN literature.
train
[ "HJ66g8RgM", "HkQupw5gf", "Bko7pxAWz", "H1ldL6q7f", "B1TYdDcXG", "BJgv_w9QG", "Bylluw97M", "S19bDv9mG", "Sy6RLv9Qf", "BkKnrjN-z", "rycH3LVbz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "public", "author", "author", "author", "author", "author", "public", "public" ]
[ "It is well known that the original GAN (Goodfellow et al.) suffers from instability and mode collapsing. Indeed, existing work has pointed out that the standard GAN training process may not converge if we insist on obtaining pure strategies (for the minmax game). The present paper proposes to obtain mixed strategy through an online learning approach. Online learning (no regret) algorithms have been used in finding an equilibrium for zero sum game. However, most theoretical convergence results are known for convex-concave loss. One interesting theoretical contribution of the paper is to show that convergence result can be proved if one player is a shallow network (and concave in M).In particular, the concave player plays the FTRL algorithm with standard L2 regularization term. The regret of concave player can be bounded using existing result for FTRL. The regret for the other player is more interesting: it uses the fact the adversary's strategy doesn't change too drastically. Then a lemma by Kalai and Vempala can be used. The theory part of the paper is reasonable and quite well written. \n\nBased on the theory developed, the paper presents a practical algorithm. Compared to the standard GAN training, the new algorithm returns mixed strategy and examine several previous models (instead of the latest) in each iteration. The paper claims that this may help to prevent model collapsing.\n\nHowever, the experimental part is less satisfying. From figure 2, I don't see much advantage of Checkhov GAN. In other experiments, I don't see much improvement neither (CIFAR10 and CELEBA).The paper didn't really compare other popular GAN models, especially WGAN and its improved version, which is already quite popular by now and should be compared with.\n\nOverall, I think it is a borderline paper.\n\n-------------------------\nI read the response and the new experimental results regarding WGAN.\nThe experimental results make more sense now.\nIt would be interesting to see whether the idea can be applied to more recent GAN models and still perform better.\nI raised my score to 7.\n\n", "This is an interesting paper, exploring GAN dynamics using ideas from online learning, in particular the pioneering \"sparring\" follow-the-regularized leader analysis of Freund and Schapire (using what is listed here as Lemma 4). By restricting the discriminator to be a single layer, the maximum player plays over a concave (parameter) space which stabilizes the full sequence of losses so that Lemma 3 can be proved, allowing proof of the dynamics' convergence to a Nash equilibrium. The analysis suggests a practical (heuristic) algorithm incorporating two features which emerge from the theory: L2 regularization and keeping a history of past models. A very simple queue for the latter is shown to do quite competitively in practice.\n\nThis paper merits acceptance on theoretical merits alone, because the FTRL analysis for convex-concave games is a very robust tool from theory (see also the more recent sequel [Syrgkanis et al. 2016 \"Fast convergence of regularized learning in games\"]) that is natural to employ to gain insight on the much more brittle GAN case. The practical aspects are also interesting, because the incorporation of added randomness into the mixed generation strategy is an area where theoretical justifications do motivate practical performance gains; these ideas could clearly be developed in future work.", "The paper applies tools from online learning to GANs. 
In the case of a shallow discriminator, the authors proved some results on the convergence of their proposed algorithm (an adaptation of FTRL) in GAN games, by leveraging the fact that when D update is small, the problem setup meets the ideal conditions for no-regret algorithms. The paper then takes the intuition from the semi-shallow case and propose a heuristic training procedure for deep GAN game. \n\nOverall the paper is very well written. The theory is significant to the GAN literature, probably less so to the online learning community. In practice, with deep D, trained by single gradient update steps for G and D, instead of the \"argmin\" in Algo 1., the assumptions of the theory break. This is OK as long as sufficient experiment results verify that the intuitions suggested by the theory still qualitatively hold true. However, this is where I have issues with the work:\n\n1) In all quantitative results, Chekhov GAN do not significantly beat unrolled GAN. Unrolled GAN looks at historical D's through unrolled optimization, but not the history of G. So this lack of significant difference in results raise the question of whether any improvement of Chekhov GAN is coming from the online learning perspective for D and G, or simply due to the fact that it considers historical D models (which could be motivated by sth other than the online learning theory).\n\n2) The mixture GAN approach suggested in Arora et al. (2017) is very related to this work, as acknowledged in Sec. 2.1, but no in-depth analysis is carried out. I suggest the authors to either discuss why Chekhov GAN is obviously superior and hence no experiments are needed, or compare them experimentally. \n\n3) In the current state, it is hard to place the quantitative results in context with other common methods in the recent literature such as WGAN with gradient penalty. I suggest the authors to either report some results in terms of inception scores on cifar10 with similar architectures used in other methods for comparison. Alternatively please show WGAN-GP and/or other method results in at least one or two experiments using the evaluation methods in the paper. \n\nIn summary, almost all the experiments in the paper are trying to establish improvement over basic GAN, which would be OK if the gap between theory and practice is small. But in this case, it is not. So it is not entirely convincing that the practical Algo 2 works better for the reason suggested by the theory, nor it drastically improves practical results that it could become the standard technique in the literature. ", " Dear author,\n Thanks a lot for your explanation. I understand that FTRL approach works in theory. Actually my toy example was meant to point out that the practical Chekhov GAN may not work because it is using gradients, instead of best-response action. Overall, I think your paper is very good, especially that many experiments have been done to demonstrate the performance. I would say the application of FTRL to GAN training itself is already a very good idea and is worth publishing. We came up a similar idea as yours, so that is probably why I think highly of your work. We actually prove that our scheme can converge to the optimal point without assuming semi-shallow architecture (under the same other conditions of your theorem). \n\nI would also point out that the gradient update and best-response action will result in completely different training dynamics. 
The best-response action would oscillate and averaging over the past is needed to make sure it converges. Actually the gradient update can guarantee the convergence of training as well without an extreme oscillation between the actions. Details can be found in \"Gradient descent GAN optimization is locally stable\". That is why I raised the next question of \"gap between theory and practice\".\n\nBTW, we also found a recent paper that applies gradient descents but manages to avoid mode collapse in the above-mentioned toy example as well. Actually I read many GAN papers that aim to resolve the mode collapse issue, but obviously does not work even for this simple example. I agree with the authors that the technique of injection of noise is possible to resolve this issue. In this sense, it is hard to distinguish whether the proposed method contributes to the good performance or the noise injection technique. Probably a more systematic approach to evaluate the GAN performance is needed. Thanks!", "Dear Reviewer,\n\nThank you for your detailed and supportive review!\n", "Dear Reviewer,\n\nWe thank you for your feedback and for your positive review regarding the theoretical part of the paper. In the following we answer your concerns and requests. \n\nYour review indicates that you believe the experimental part does not demonstrate the benefits of our approach. We respectfully disagree and we kindly ask you to look again at the experiments. Moreover, 1) as you requested, we have added new results further highlighting the benefits of our approach compared to the baselines, and 2) we improved the visibility of Figure 2 which we believe might have confused the reviewer in the original submission since some colors might not have been easily readable.\n\nTo be specific:\n-In Figure 2 (toy example), you can see that after 50K steps our method captures all of the modes while the standard GAN is missing one mode.\n-In Table 1 (MNIST), you can see that our method (10 states) generates 26% more classes compared to standard GAN training, and stabilizes the training which can be observed through the reduced variance. Please note that our approach also does better with respect to the reverse KL measure. \n-In Table 2 (CIFAR 10), you can see that our methods consistently outperform the standard GAN training with respect to the MSE measure. Concretely, the MSE of our method (25 states) is lower by 20% than the MSE of the standard training method. In terms of number of images from the test/training set with the lowest reconstruction, GAN achieves 0%, whereas our best variant achieves 82.43% and 80.33%, respectively.\n-In Table 4 (CelebA), you can see that our method outperforms the standard GAN training with respect to the number of modes that are classified by the auxiliary discriminator as not real.\nConcretely, our method improves by a factor of 50%.\nThis means that using an auxiliary discriminator based on our method, only 1400 images of the true test set were recognized by the auxiliary discriminator as fake.\nConversely, using an auxiliary discriminator on the standard method, there were 3000 images recognized by the auxiliary discriminator as fake.\n- In addition, we have now added a new section with experiments showcasing Inception Score and Frechet Dirichlet Distance (FID) for our method and show improvement over several strong baselines (Section 5.3). \n\n“From figure 2, I don't see much advantage of Checkhov GAN”\nSee our answer above, light colors indicate a mode with less mass. 
We therefore clearly observe that the normal GAN suffers from mode collapse, while our approach finds all the modes. We hope that by going again over the experimental part, you will find that our method does improve over the standard GAN training.\n\nRegarding comparison to other methods: WGAN is a different loss function rather than a different GAN training method. Our method can be applied to other GAN loss functions, including WGAN. Thus, WGAN is not a competitor to our approach, but rather another setup where our approach could be applied. We do agree that a comparison to WGAN is valuable and we have now added the requested results to the paper (please see Table 5, Figure 3, Figure 4 and Figure 11).\n*Applying our method on top of WGAN (denoted Chekhov WGAN in the paper), outperforms WGAN across all epochs consistently in terms of both metrics (inception score and FID). \n", "Dear Reviewer,\n\nThank you for your valuable feedback and the positive review regarding the theoretical part of the paper. We have added a significant number of experimental results to address your concerns and suggestions about the experimental part of the paper. We detail the changes made to the paper below.\n\n1) Unrolled GAN: We first would like to point out that, unlike what you mentioned in your review, unrolled GAN does not look at historical D’s but rather it looks at **future** discriminator’s updates by unrolling a few optimization steps, thus making the two algorithms very different. The unrolling procedure comes with significant drawbacks as one needs to compute future steps which will not be used to update the model parameters. In contrast, our approach makes use of past steps and is therefore less wasteful in terms of computation (note that this could be further sped-up using parallel computations, which cannot be done with Unrolled GAN). Experiments on toy datasets show the benefits of using the history (Chekhov GAN) in comparison to Unrolled GAN (Figure 7). We have now added more experimental results comparing to Unrolled GAN in terms of Inception Score and FID, where we can clearly see that our approach achieves significantly better scores (Table 5, Figure 4 and Figure 11).\n\n2) The mixture GAN approach suggested in Arora et al: We agree with the reviewer that a comparison to this work is valuable and we have now added the requested results to the updated version of the paper (Table 5). These results demonstrate that our approach achieves better results than Arora et al. when both methods are applied on top of WGAN, despite having 5 times less trainable parameters. Applying both methods on top of GAN yields comparable scores, but with notably less variance for Chekhov GAN.\n\n\n3) WGAN & WGAN-GP: As requested by the reviewer, we have added a comparison to WGAN and WGAN GP in Table 5 in the paper where we clearly see better scores for Chekhov GAN. Note that although we originally developed our approach for the vanilla GAN, a similar algorithm can be applied to the Wasserstein objective which is also a min-max objective. As a proof of concept, we also provide empirical results for our approach with the Wasserstein objective and show that it also consistently improves upon the baseline.\n\nIn summary, Chekhov GAN outperforms GAN over various metrics (MSE, number of missing modes, reverse KL divergence, number of generated classes, missing modes due to catastrophic forgetting) and achieves comparable performance in terms of Inception Score and FID, with reduced variance. 
At the same time, we successfully apply our algorithm on top of WGAN and achieve consistent improvement. We further show improvement across several other baselines as well. \n\nAs you have mentioned, our algorithm has theoretical guarantees which we believe are of significant interest for the GAN literature. The practical algorithm is strongly inspired by the theory (using the history of generators and discriminators and updating the parameters by taking a gradient step guided by the FTRL objective) and outperforms the baselines across several metrics and datasets. Closing the gap between the theory and practice would be an interesting direction for future work as there are clear theoretical benefits.\n", "Dear Leon,\n\nThank you for your comment!\n\nThere are two reasons for that: \n-First, we wanted to make a fair comparison to the standard training method, which uses one update rather than several upades per round.\n-Second, we have noticed that for our method, using several updates vs. a single update did not make a big difference. We therefore decided to present the experiments with a single update which is more practical.\n", "Dear Leon,\n\nThank you for your interest!\n\nIn the example that you have raised the (theoretic) FTRL approach will not lead to mode collapse. \nIn order to see why this is the case, let’s formalize your example:\n-Assume that the true data is uniformly distributed between [10,12].\n-Also assume that the generator can choose a single parameter \\mu which induces a uniform distribution between [\\mu-1, \\mu+1].\n-Assume that the discriminator may choose two parameters W and b; this induces the following classification rule: “+1” if Wx+b>0, and “0” otherwise.\n\nIn your example, the initial parameters are, \\mu = -11, and W=1, b=0. \nNow, at the second round, the generator will choose a new \\mu which minimizes its loss. In our case this would lead to some \\mu>=1 (note that FTRL does not use gradient step, but rather full minimization).\nIn the next round, the discriminator will change W and b in order to separate between the generated and true data (if possible). In the following round, the generator will again update its parameters and so on… This process will persist until \\mu converges to \\mu=11, which is the true data.\n\nThe reason that the FTRL approach converges is that it does not rely only on gradient information but rather on more global information.\n\nNote that the example you raised is indeed a hurdle for the standard GAN optimization method, and all other approaches that rely on gradients, including our practical algorithm (unless we inject additive noise to the samples that we feed into the discriminator, which is a common practice for training GANs).\n\nThe question you raised highlights the potential benefits of implementing the full FTRL approach, which we hope would inspire/be addressed in future work.", "Dear authors, \n I have one question on how the proposed approach is able to avoid mode collapse. \n Consider this simple example: the data samples are all greater than 10, the initial generated samples are all all less than -10, the initial D(x) = 0 for x<=0 and D(x) = 1 for all x>0. By gradient descent, the generator will get stuck from the very beginning, i.e., the generator will never be updated. Then even if you use the follow-the-regularizer approach, the generator still cannot be updated. How do you resolve this issue?\n \n Thanks!", " Dear authors,\n The paper proposes to train GANs using FTRL. 
However, in FTRL each step is a best-response action (see Eq. (3)). In practical training, the network is trained using gradient descent. My question is: why does it not train the discriminator and the generator using multiple steps until convergence, which would correspond more closely to the theory?\n\nThanks!" ]
[ 7, 8, 5, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 5, 4, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_H1Yp-j1Cb", "iclr_2018_H1Yp-j1Cb", "iclr_2018_H1Yp-j1Cb", "Sy6RLv9Qf", "HkQupw5gf", "HJ66g8RgM", "Bko7pxAWz", "rycH3LVbz", "BkKnrjN-z", "iclr_2018_H1Yp-j1Cb", "iclr_2018_H1Yp-j1Cb" ]
iclr_2018_rkQkBnJAb
Improving GANs Using Optimal Transport
We present Optimal Transport GAN (OT-GAN), a variant of generative adversarial nets minimizing a new metric measuring the distance between the generator distribution and the data distribution. This metric, which we call mini-batch energy distance, combines optimal transport in primal form with an energy distance defined in an adversarially learned feature space, resulting in a highly discriminative distance function with unbiased mini-batch gradients. Experimentally we show OT-GAN to be highly stable when trained with large mini-batches, and we present state-of-the-art results on several popular benchmark problems for image generation.
accepted-poster-papers
This is another paper, similar in spirit to the Wasserstein GAN and Cramer GAN, which uses ideas from optimal transport theory to define a more stable GAN architecture. It combines both a primal representation (with Sinkhorn loss) with a minibatch-based energy distance between distributions. The experiments show that the OT-GAN produces sharper samples than a regular GAN on various datasets. While more could probably be done to distinguish the model from WGANs and Cramer GANs, this paper seems like a worthwhile contribution to the GAN literature and merits publication.
train
[ "SJyW7vDgz", "SyGyMzqez", "SyLFVA3ez", "S1V_zUt7G", "ryc4f8YQf", "rJIAWLtXz", "Syr1_P3bz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "The paper introduces a new algorithm for training GANs based on the Earth Mover’s distance. In order to avoid biased gradients, the authors use the dual form of the distance on mini-batches, to make it more robust. To compute the distance between mini batches, they use the Sinkhorn distance. Unlike the original Sinkhorn distance paper, they use the dual form of the distance and do not have biased gradients. Unlike the Cramer GAN formulation, they use a mini-batch distance allowing for a better leverage of the two distributions, and potentially decrease variance in gradients.\n\nEvaluation: the paper shows good results a battery of tasks, including a standard toy example, CIFAR-10 and conditional image generation, where they obtain better results than StackGAN. \n\nThe paper is honest about its shortcomings, in the current set up the model requires a lot of computation, with best results obtained using a high batch size.\n\nWould like to see: \n * a numerical comparison with Cramer GAN, to see whether the additional computational cost is worth the gains. \n * Cramer GAN shows an increase in diversity, would like to see an analog experiment for conditional generation, like figure 3 in the Cramer GAN paper.\n", "The paper presents a variant of GANs in which the distance measure between the generator's distribution and data distribution is a combination of two recently proposed metrics. In particular, a regularized Sinkhorn loss over a mini-batch is combined with Cramer distance \"between\" mini-batches. The transport cost (used by the Sinkhorn) is learned in an adversarial fashion. Experimental results on CIFAR dataset supports the usefulness of the method.\n\nThe paper is well-written and experimental results are supportive (state-of-the-art ?)\n\nA major practical concern with the proposed method is the size of mini-batch. In the experiment, the size is increased to 8000 instances for stable training. To what extent is this a problem with large models? The paper does not investigate the effect of small batch-size on the stability of the method. Could you please comment on this?\n\nAnother issue is the adversarial training of the transport cost. Could you please explain why this design choice cannot lead instability? \n", "There have recently been a set of interesting papers on adapting optimal transport to GANs. This makes a lot of sense. The paper makes some very good connections to the state of the art and those competing approaches. The proposal makes sense from the generative standpoint and it is clear from the paper that the key contribution is the design of the transport cost. I have two main remarks and questions.\n\n* Regarding the transport cost, the authors say that the Euclidean distance does not work well. Did they try to use normalised vectors with the squared Euclidean distance ? I am asking this question because solving the OT problem with cost defined as in c_eta is equivalent to using a *normalized squared* Euclidean distance in the feature space defined by v_eta. If the answer is yes and it did not work, then there is indeed a real contribution to using the DNN. Otherwise, the contribution has to be balanced. In either case, I would have been happy to see numbers for comparison.\n\n* The square mini batch energy distance looks very much like a maximum mean discrepancy criterion (see the work of A. Gretton), up to the sign, and also to regularised approached to MMD optimisation (see the paper of Kim, NIPS'16 and references therein). 
The MMD is the solution of an optimisation problem which, I suppose, has lots of connections with the dual Wasserstein GAN. The authors should elaborate on the relationships, and eventually discuss regularisation in this context.\n\n", "Regarding your two requests:\n\n* The Cramer GAN paper did not report inception scores on the usual data sets, and they do not provide code, so numerical comparison to their work is a bit difficult. I searched for public implementations of Cramer GAN, and the best one I could find is this one: https://github.com/pfnet-research/chainer-gan-lib According to the inception scores reported here, our method performs much better than Cramer GAN.\n\n* We are in correspondence with the authors of Cramer GAN about the details of this conditional generation experiment. So far we have not yet been able to replicate their exact setup. We hope to be able to include this experiment in the paper soon. Related to what you’re asking for, we have added an additional experiment to the paper investigating sample diversity and mode collapse in OT-GAN as compared to DCGAN (appendix D). What we find here is that DCGAN (as well as other variants of GAN) still shows mode collapse when training for longer periods of time. For OT-GAN we see no mode collapse, even when we keep training for a very long time. If we stop training DCGAN when the Inception score is maximal, we see no mode collapse, but the sample diversity is still lower than for OT-GAN (see figure 6).\n", "Regarding the two issues your raise:\n\n* In section 5.2 we present an experiment on CIFAR-10 where we vary the batch size used in training OT-GAN. As shown in Figure 4, small batch sizes are less stable during training, although the results are still on par with previous work on GANs. For large models we can reach the large batch sizes required for optimal results by using more GPUs and/or GPUs with more memory. Many of the other recent SOTA works in GANs use even more time and compute than we do, but it is indeed a limitation of the method as clearly indicated in the paper. In the updated paper we have expanded our discussion of this experiment, the causes behind its result, and its practical importance.\n\n* Our transport cost depends on a critic neural network v, which embeds the images (generated and real) into a latent space. As long as this embedding is one-to-one / non-degenerate, the statistical consistency guarantees associated with minimizing energy distance carry over to our setting. When the critic v is learned adversarially, the embedding could potentially degenerate and we could lose these properties. In practice, we manage to avoid this by updating the generator more often than the critic. If the critic maps two distinct inputs to similar embeddings, the generator will take advantage of this, thereby putting pressure on the critic to adapt the embedding. An alternative solution we tried is to parameterize the critic using a RevNet (Gomez et al. 2017): This way the mapping is always one-to-one by construction. Although this also works, we found it to be unnecessary when updating the generator often enough. The updated paper includes additional discussion on this point, and it also includes a new experiment (appendix C) further investigating the importance of adversarially learning the transport cost.\n", " ", "Thanks for your review! 
We're currently working on updating the paper, but I wanted to send you a quick reply regarding the two points you have raised:\n\n* We have now run exactly the experiment you suggest, using cosine distance in the pixel space (or equivalently squared Euclidean distance with normalized vectors) instead of in the critic-space defined by v_eta. The maximum inception score we were able to achieve using this setup on CIFAR-10 was 4.93 (compared to 8.47 achieved using the DNN critic v_eta). You can download the corresponding samples here: https://www.dropbox.com/s/e27uqj6ah7j9avq/sample_pixel_space.png?dl=1 We will include the results of this experiment in the upcoming update of the paper.\n\n* There is indeed a close connection between energy distance and MMD. When the energy distance is generalized to other distances between individual samples it becomes equivalent to MMD. This is explained in the following work, among other places: Sejdinovic, Dino, Bharath Sriperumbudur, Arthur Gretton, and Kenji Fukumizu. \"Equivalence of distance-based and RKHS-based statistics in hypothesis testing.\" The Annals of Statistics (2013): 2263-2291.\nThe novel part of our proposed minibatch energy distance is that it further generalizes the energy distance from individual samples to minibatches. This makes it conceptually different from the existing literature, including Kim, Been, Rajiv Khanna, and Oluwasanmi O. Koyejo. \"Examples are not enough, learn to criticize! Criticism for interpretability.\" In Advances in Neural Information Processing Systems, pp. 2280-2288. 2016. (please let us know if you were referring to a different paper) Equivalently we can say our minibatch energy distance generalizes MMD from individual samples to minibatches, but we choose to take the energy distance perspective as it more closely connects to other work in this area (e.g. Cramer-GAN). We will include this discussion + the references in the upcoming update." ]
[ 8, 6, 6, -1, -1, -1, -1 ]
[ 4, 2, 3, -1, -1, -1, -1 ]
[ "iclr_2018_rkQkBnJAb", "iclr_2018_rkQkBnJAb", "iclr_2018_rkQkBnJAb", "SJyW7vDgz", "SyGyMzqez", "Syr1_P3bz", "SyLFVA3ez" ]
iclr_2018_S1HlA-ZAZ
The Kanerva Machine: A Generative Distributed Memory
We present an end-to-end trained memory system that quickly adapts to new data and generates samples like them. Inspired by Kanerva's sparse distributed memory, it has a robust distributed reading and writing mechanism. The memory is analytically tractable, which enables optimal on-line compression via a Bayesian update-rule. We formulate it as a hierarchical conditional generative model, where memory provides a rich data-dependent prior distribution. Consequently, the top-down memory and bottom-up perception are combined to produce the code representing an observation. Empirically, we demonstrate that the adaptive memory significantly improves generative models trained on both the Omniglot and CIFAR datasets. Compared with the Differentiable Neural Computer (DNC) and its variants, our memory model has greater capacity and is significantly easier to train.
accepted-poster-papers
This paper presents a distributed memory architecture based on a generative model with a VAE-like training criterion. The claim is that this approach is easier to train than other memory-based architectures. The model seems sound, and it is described clearly. The experimental validation seems a bit limited: most of the comparisons are against plain VAEs, which aren't a memory-based architecture. The discussion of "one-shot generalization" is confusing, since the task is modified without justification to have many categories and samples per category. The experiment of Section 4.4 seems promising, but this needs to be expanded to more tasks and baselines since it's the only experiment that really tests the Kanerva Machine as a memory architecture. Despite these concerns, I think the idea is promising and this paper contributes usefully to the discussion, so I recommend acceptance.
test
[ "rJlzr-5lM", "HJ7qJh9eM", "r14ew-W-G", "rybuz4-zf", "S1JTjXWfz", "HJxcoX-GM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The generative model comprises a real-valued matrix M (with a multivariate normal prior) that serves\nas the memory for an episode (an unordered set of datapoints). For each datapoint a marginally independent\nlatent variable y_t is used to index into M and realize a conditional density\nof another latent variable z. z_t is used to generate the data.\n\nThe proposal of learning with a probabilistic memory is interesting and the framework proposed is elegant and cleanly explained. The model is evaluated on the following tasks:\n* Qualitative results on denoising and one-shot generation using the Omniglot dataset.\n* Qualitative results on sampling from the model using the CIFAR dataset.\n* Likelihood estimation on the Omniglot dataset\n\nQuestions and concerns: \n\nThe model appears novel and is interesting, the experiments, however, are lacking in that they\ndo not compare against other any recently proposed memory augmented deep generative models [Bornschein et al] and [Li et. al] (https://arxiv.org/pdf/1602.07416.pdf). At the very minimum, the paper should include a discussion and a comparison with the latter. Doing so will help better understand what is gained from using retaining a probabilistic form of memory versus a determinstic memory indexed with attention as in [Li et. al].\n\nHow does the model perform as a function of varying T (size of episodes) during training? It would be interesting to see how well the model performs in the limiting case of T=1.\n\nWhat is the task being solved in Section 4.4 by the DNC and the Kanerva machine? Please state this in the main paper.\n\nTraining and Evaluation: There is a mismatch in the training and evaluation procedure the implications of which I don't\nfully understand yet. The text states that the model was trained where each observation in an episode comprised randomly sampled datapoints. This corresponds to a generative process where (1) a memory is randomly drawn, (2) each observation in the episode is an independent draws from the memory conditioned decoder. During training,\npoints in an episode are randomly selected. At test time, (if I understand correctly, please correct me if I haven't), the model is evaluated by having multiple copies of the same test point within an episode. Is that correct? If so, doesn't that correspond to evaluating the model under a different generative assumption? Why is this OK?\n\nLikelihood evaluation: Could you expand on how the ELBO of 68.3 is computed under the model for a single test image in the Omniglot dataset? The text says that the likelihood of each data-point was divided by T (the length of the episode considered). This seems at odds with models, such as DRAW, evaluate the likelihood -- once at the end of the generative drawing process. What is the per-pixel likelihood obtained on the CIFAR dataset and what is the likelihood on a model where T=1 (for omniglot/cifar)?\n\nUsing Labels: Following up on the previous point, what happens if labelled information from Omniglot or CIFAR is used to define points within an episode during the training procedure? Does this help or hurt performance?\n\nFor the denoising comparison, how do the results compare to those obtained if you simulate a Markov Chain (sample latent state conditioned on noisy image, sample latent state, sample denoised observation, repeat using denoised observation) using a VAE?", "The paper presents the Kanerva Machine, extending an interesting older conceptual memory model to modern usage. 
The review of Kanerva’s sparse distributed memory in the appendix was appreciated. While the analyses and bounds of the original work were only proven when restricted to uniform and binary data, the extensions proposed bring it to modern domain of non-uniform and floating point data.\n\nThe iterative reading mechanism which provides denoising and reconstruction when within tolerable error bounds, whilst no longer analytically provable, is well shown experimentally.\nThe experiments and results on Omniglot and CIFAR provide an interesting insight to the model's behaviour with the comparisons to VAE and DNC also seem well constructed.\n\nThe discussions regarding efficiency and potential optimizations of writing inference model were also interesting and indeed the low rank approximation of U seems an interesting future direction.\n\nOverall I found the paper well written and reintroduced + reframed a relatively underutilized but well theoretically founded model for modern use.", "This paper generalizes the sparse distributed memory model of Kanerva to the Kanerva Machine by formulating a variational generative model of episodes with memory as the prior. \n\nPlease discuss the difference from other papers that implement memory as a generative model, i.e. (Bornschein, Mnih, Zoran, Rezende 2017)\n\nA probabilistic interpretation of Kanerva’s model was given before (Anderson, 1989 http://ieeexplore.ieee.org/document/118597/ ) and (Abbott, Hamrick, Griffiths, 2013). Please discuss.\n\nI found the relation to Kanerva’s original model interesting and well explained. The original model was motivated by human long term memory and neuroscience. It would be nice if the authors can provide what neuroscience implications their work has, and comment on its biological plausibility.", "The model appears novel and is interesting, the experiments, however, are lacking in that they\ndo not compare against other any recently proposed memory augmented deep generative models...the paper should include a discussion and a comparison with the latter...\n\n-- We agree that these works should be better highlighted in our manuscript. We have added a new paragraph (3rd in the Discussion section) to describe the relations with these 2 papers.\nWhile both of these models share some commonalities with our model, they also have key differences which make direct experimental comparisons problematic. As we describe in more detail in the Discussion, our paper addresses a different, although related, problem --- updating memory optimally. As you mentioned, such update is not possible with the memory in Li et al., which is fixed over the course of episodes. Similarly, the likelihoods from our model have a very different meaning from Bornschein et al., since the only ambiguity in retrieving stored patterns in their model was in the categorical addressing variable; their model stores images in memory in the form of raw-pixels. We instead store compressed embeddings in a distributed fashion. As a result, the objective function we use (eq. 2) becomes a constant 0 for the model in Bornschein et al., since the mutual information between the memory and an episode of images I(X; M) is simply the entropy of these images H(X), when all these images are directly stored in the memory.\n\nHow does the model perform as a function of varying T (size of episodes) during training? ... the limiting case of T=1.\n\n-- The performance under varying T is shown in figure 6 (right). 
There is a smooth rise of test loss with increasing T, and T=1 does not seem to be very different.\n\nWhat is the task being solved in Section 4.4 by the DNC and the Kanerva machine? Please state this in the main paper.\n\n-- We now clarify this in the main paper. It is the same episode storage and retrieval task as earlier in the paper, only we now look at the test regime where episode lengths are longer.\n\nTraining and Evaluation: There is a mismatch in the training and evaluation procedure the implications of which I don't fully understand yet …\n\n-- With one exception there is no mismatch in training and evaluation, though we can now see where confusion may have crept in. We have revised the description in the paper to clarify this. In general, training and testing follow the same sampling procedure in constructing the episodes. The single exception is in the omniglot one-shot generation and comparison with DNC, where we control the number of Omniglot classes in an episode during testing for illustrative purpose only [i.e. to illustrate what happens with different levels of redundancy in the input data]. For all other comparisons the train and test losses and visualisation are identical.\n\nLikelihood evaluation: ... how the ELBO of 68.3 is computed … This seems at odds with models, such as DRAW … What is the per-pixel likelihood obtained on the CIFAR dataset...?\n\n-- “T” has different meanings in our model and in DRAW. DRAW is an autoregressive model that uses T steps to construct *one* image; in our model, T is the number of images, so we divide the total log-likelihood of T (conditionally) independent images by T for comparison. The likelihood of 5329 can be converted to the per-pixel bits 4.4973.\n\n--To get the number of 68.3, we first compute the ELBO for the conditional log-likelihood of an episode with 32 images, which is log P(x_1, x_2, … x_32 | M) = 2185.6. Since log P(x_1, x_2, … x_32 | M) = log P(x_1|M) + log P(x_1|M) + ..+ log P(x_32|M) (conditional independence / exchangeability), we can compute the average ELBO for each image by dividing 2185.6 / 32 = 68.3.\n\nUsing Labels: ... what happens if labelled information from Omniglot or CIFAR is used to define points within an episode during the training procedure? \n\n-- This is an interesting point. We think it will help performance, since the additional label information may help the model further reduce redundancy. We only trained on the worst case scenario without such information.\n\nHow do the results compare to those obtained if you simulate a Markov Chain using a VAE?\n\n-- We tried iterative sampling using a VAE as well. However, iterative sampling did not improve performance with a VAE --- which is why, to the best of our knowledge, it has not been used in previous literature. In our model iterative sampling works because of the structure of the generative model (section 3.5). We now discuss this in the revision and illustrated it in a new figure (Figure 8 in the Appendix).\n\nMany thanks for the comments which have helped us improve the manuscript. If you still feel that there are issues with the manuscript that would prevent you from raising your score, please point these out so that we can address them.", "We appreciate your comments. 
Please let us know if you have any additional suggestions for the text or experiments that would further improve our paper, and potentially lead you to increase your score.", "Thank you for your comments.\n\nWe agree these papers should be discussed and have added new text (paragraphs 2 and 3 in the Discussion) to describe this work.\n\nWith regard to biological plausibility, we believe our main contribution is at the computational (rather than implementation) level: i.e. by providing a model that can be used in the context of complex memory tasks. As we focused on developing a functional and useful machine learning model, we don’t make any claims about biological plausibility beyond the relationship with Kanerva’s model, whose distributed structure was motivated by understanding of the brain.\n\nPlease let us know if you have any additional suggestions for the text or experiments that would further improve our paper, and potentially lead you to increase your score.\n" ]
[ 6, 7, 7, -1, -1, -1 ]
[ 4, 3, 2, -1, -1, -1 ]
[ "iclr_2018_S1HlA-ZAZ", "iclr_2018_S1HlA-ZAZ", "iclr_2018_S1HlA-ZAZ", "rJlzr-5lM", "HJ7qJh9eM", "r14ew-W-G" ]
iclr_2018_r1gs9JgRZ
Mixed Precision Training
Increasing the size of a neural network typically improves accuracy but also increases the memory and compute requirements for training the model. We introduce methodology for training deep neural networks using half-precision floating point numbers, without losing model accuracy or having to modify hyper-parameters. This nearly halves memory requirements and, on recent GPUs, speeds up arithmetic. Weights, activations, and gradients are stored in IEEE half-precision format. Since this format has a narrower range than single-precision we propose three techniques for preventing the loss of critical information. Firstly, we recommend maintaining a single-precision copy of weights that accumulates the gradients after each optimizer step (this copy is rounded to half-precision for the forward- and back-propagation). Secondly, we propose loss-scaling to preserve gradient values with small magnitudes. Thirdly, we use half-precision arithmetic that accumulates into single-precision outputs, which are converted to half-precision before storing to memory. We demonstrate that the proposed methodology works across a wide variety of tasks and modern large scale (exceeding 100 million parameters) model architectures, trained on large datasets.
accepted-poster-papers
meta score: 8\n\nThe paper explores mixing 16- and 32-bit floating point arithmetic for NN training, with CNN and LSTM experiments on a variety of tasks.\n\nPros:\n- addresses an important practical problem\n- very wide range of experimentation, reported in depth\n\nCons:\n- one might say the novelty was minor, but the novelty comes from the extensive analysis and experiments
train
[ "rJwXkeOgM", "SkSMlWcgG", "SJQ3bonlG", "rktIavaQz", "SyoJNu2Xf", "SyCvNOnQf", "ryjSm_hQz", "S1nTzd2Qf", "SyetJ_Pbz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public" ]
[ "The paper considers the problem of training neural networks in mixed precision (MP), using both 16-bit floating point (FP16) and 32-bit floating point (FP32). The paper proposes three techniques for training networks in mixed precision: first, keep a master copy of network parameters in FP32; second, use loss scaling to ensure that gradients are representable using the limited range of FP16; third, compute dot products and reductions with FP32 accumulation. \n\nUsing these techniques allows the authors to match the results of traditional FP32 training on a wide variety of tasks without modifying any training hyperparameters. The authors show results on ImageNet classification (with AlexNet, VGG, GoogLeNet, Inception-v1, Inception-v3, and ResNet-50), VOC object detection (with Faster R-CNN and Multibox SSD), speech recognition in English and Mandarin (with CNN+GRU), English to French machine translation (with multilayer LSTMs), language modeling on the 1 Billion Words dataset (with a bigLSTM), and generative adversarial networks on CelebFaces (with DCGAN).\n\nPros:\n- Three simple techniques to use for mixed-precision training\n- Matches performance of traditional FP32 training without modifying any hyperparameters\n- Very extensive experiments on a wide variety of tasks\n\nCons:\n- Experiments do not validate the necessity of FP32 accumulation\n- No comparison of training time speedup from mixed precision\n\nWith new hardware (such as NVIDIA’s Volta architecture) providing large computational speedups for MP computation, I expect that MP training will become standard practice in deep learning in the near future. Naively porting FP32 training recipes can fail due to the reduced numeric range of FP16 arithmetic; however by adopting the techniques of this paper, practitioners will be able to migrate their existing FP32 training pipelines to MP without modifying any hyperparameters. I expect these techniques to be hugely impactful as more people begin migrating to new MP hardware.\n\nThe experiments in this paper are very exhaustive, covering nearly every major application of deep learning. Matching state-of-the-art results on so many tasks increases my confidence that I will be able to apply these techniques to my own tasks and architectures to achieve stable MP training.\n\nMy first concern with the paper is that there are no experiments to demonstrate the necessity of FP32 accumulation. With an FP32 master copy of the weights and loss scaling, can all arithmetic be performed solely in FP16, or are there some tasks where training will still diverge?\n\nMy second concern is that there is no comparison of training-time speedup using MP. The main reason that MP is interesting is because new hardware promises to accelerate it. If people are willing to endure the extra engineering overhead of implementing the techniques from this paper, what kind of practical speedups can they expect to see from their workloads? NVIDIA’s marketing material claims that the Tensor Cores in the V100 offer an 8x speedup over its general-purpose CUDA cores (https://www.nvidia.com/en-us/data-center/tesla-v100/). Since in this paper some operations are performed in FP32 (weight updates, batch normalization) and other operations are bound by memory and not compute bandwidth, what kinds of speedups do you see in practice when moving from FP32 to MP on V100?\n\nMy other concerns are minor. Mandarin speech recognition results are reported on “our internal test set”. 
Is there any previously published work on this dataset, or any publicly available test set for this task?\n\nThe notation around the Inception architectures should be clarified. According to [3] and [4], “Inception-v1” and “GoogLeNet” both refer to the architecture used in [1]. The architecture used in [2] is referred to as “BN-Inception” by [3] and “Inception-v2” by [4]. “Inception-v3” is the architecture from [3], which is not currently cited. To improve clarity in Table 1, I suggest renaming “GoogLeNet” to “Inception-v1”, changing “Inception-v1” to “Inception-v2”, and adding explicit citations to all rows of the table.\n\nIn Section 4.3 the authors note that “half-precision storage format may act as a regularizer during training”. Though the effect is most obvious from the speech recognition experiments in Section 4.3, MP also achieves slightly higher performance than baseline for all ImageNet models but Inception-v1 and for both object detection models; these results add support to the idea of FP16 as a regularizer.\n\nMinor typos:\nSection 3.3, Paragraph 3: “either FP16 or FP16 math” -> “either FP16 or FP32 math”\nSection 4.1, Paragraph 4: “ pre-ativation” -> “pre-activation”\n\nOverall this is a strong paper, and I believe that it will be impactful as MP hardware becomes more widely used.\n\n\nReferences\n\n[1] Szegedy et al, “Going Deeper with Convolutions”, CVPR 2015\n[2] Ioffe and Szegedy, “Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift”, ICML 2015\n[3] Szegedy et al, “Rethinking the Inception Architecture for Computer Vision”, CVPR 2016\n[4] Szegedy et al, “Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning”, ICLR 2016 Workshop", "The paper provides methods for training deep networks using half-precision floating point numbers without losing model accuracy or changing the model hyper-parameters. The main ideas are to use a master copy of weights when updating the weights, scaling the loss before back-prop and using full precision variables to store products. Experiments are performed on a large number of state-of-art deep networks, tasks and datasets which show that the proposed mixed precision training does provide the same accuracy at half the memory.\n\nPositives\n- The experimental evaluation is fairly exhaustive on a large number of deep networks, tasks and datasets and the proposed training preserves the accuracy of all the tested networks at half the memory cost.\n\nNegatives\n- The overall technical contribution is fairly small and are ideas that are regularly implemented when optimizing systems.\n- The overall advantage is only a 2x reduction in memory which can be gained by using smaller batches at the cost of extra compute. ", "The paper presents three techniques to train and test neural networks using half precision format (FP16) while not losing accuracy. This allows to train and compute networks faster, and potentially create larger models that use less computation and energy.\n\nThe proposed techniques are rigorously evaluated in several tasks, including CNNs for classification and object detection, RNNs for machine translation, language generation and speech recognition, and generative adversarial networks. 
The paper consistently shows that the accuracy of training and validation matches the baseline using single precision (FP32), which is the common practice.\n\nThe paper is missing results comparing training and testing speeds in all these models, to illustrate the benefits of using the proposed techniques. It would be very valuable to add the baseline wall-time to the tables, together with the obtained wall-time for training and testing using the proposed techniques. ", "We've added a new revision of the paper that addresses the following points:\n\n- Corrected the typos pointed out by reviewer #1\n- Added a reference to Inception v3, modified the classification CNN table to include references and both names for googlenet. - Added improved accuracy numbers for ResNet-50\n- Added a paragraph to the conclusion on operation speedups, etc.\n\n\nWe thank the reviewers for their helpful comments and feedback. \n", "Hello Stephen,\n\nThanks for pointing out that the advantage of this technique is not limited to reduction in memory only. We will add some more statements in the paper highlighting the potential speedup with hardware that supports mixed precision training. ", "We thank the reviewer for the feedback. As newer hardware such as Nvidia’s V100 and TitanV become more widely available, we should be able to see a speedup in training time. Performance results for GEMM, RNN, and CNN layers are available at DeepBench. Depending on the layer size and batch size, MP hardware can achieve 2~6x speedup for a given layer, we will add this information to the paper. We are working on measuring the improvement in end-to-end model training using MP hardware and more optimized libraries and frameworks - the studies in this paper used either older hardware, or Volta GPUs but very early libraries and frameworks with MP support. These measurements are targeted for a subsequent publication as they couldn’t make the ICLR deadline. \n \nWhen it comes to the need for FP32 accumulation, while some networks did not need it others lost a few percentage points of accuracy when accumulating in fp16. We will add this mention to the paper, but to maximize the success of initial training in MP we recommend employing all three proposed techniques.\n \nThank you for pointing out typos, we will address them in this paper.\n", "Thank you for your review and valuable feedback. We are working on obtaining speedup numbers for mixed precision training with libraries and training frameworks that have been more extensively optimized for mixed precision (experiments in this study that were run on Volta GPUs used libraries and frameworks that had preliminary optimization for mixed precision). \n\nInitial performance numbers are available in DeepBench which indicate a 2~6x speedup for an operation depending on layer size and batch size, as long as the layer is not limited by latency (as stated in the paper, mixed-precision improves performance for 2 out of 3 potential performance limiters - memory or arithmetic throughput, with latency being the third one). For layers limited by memory bandwidth, as you point out, upper bound on speedup is 2x. The upper bound on speedups on Volta GPUs is 8x, if the operation is limited by floating point arithmetic. Full network speedups will be somewhat lower, depending on how many layers are limited by memory bandwidth or latency.\n", "Thank you for the review and comments. 
The focus of the studies in the paper, as you point out, was to describe and validate the procedure for training with mixed precision without losing accuracy. Experiments were run with libraries and frameworks that had preliminary support for mixed precision. As shown in DeepBench (https://github.com/baidu-research/DeepBench), depending on the layer size and batch size, MP hardware can achieve 2~6x speedup for a layer that’s not latency-limited. We will add this mention and pointer to DeepBench results to the paper. Measuring end to end speedups with more optimized frameworks is the focus for future work.\n", "You note in negatives that \"the overall advantage is only a 2x reduction in memory\". The paper notes (though only in the introduction section) that \"Performance (speed) ... is limited by one of three factors: arithmetic bandwidth, memory bandwidth, or latency\", with reduced precision helping two. Specifically, FP16 improves memory bandwidth by only requiring half the data to be shuffled about and that on modern GPUs the FP16 throughput can be 2 to 8 times faster than FP32. Hence, the potential benefit is actually far more than just reducing memory, though the methods and techniques noted in the paper are required in order to have models that can sanely train using FP16." ]
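Several of the responses above concern why sums of FP16 products are accumulated into FP32. The snippet below is an independent numerical illustration of that point (not an experiment from the paper): the same FP16 products summed into an FP16 accumulator drift visibly, while an FP32 accumulator stays close to a double-precision reference.

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(20_000).astype(np.float16)
b = rng.standard_normal(20_000).astype(np.float16)

acc16, acc32 = np.float16(0.0), np.float32(0.0)
for ai, bi in zip(a, b):
    p = ai * bi                                  # product computed in FP16 in both cases
    acc16 = np.float16(acc16 + p)                # running sum kept in FP16
    acc32 = np.float32(acc32 + np.float32(p))    # running sum kept in FP32

ref = float(a.astype(np.float64) @ b.astype(np.float64))
print("FP16 accumulation error:", abs(float(acc16) - ref))   # typically several units
print("FP32 accumulation error:", abs(float(acc32) - ref))   # orders of magnitude smaller
```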
[ 8, 5, 7, -1, -1, -1, -1, -1, -1 ]
[ 4, 3, 4, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_r1gs9JgRZ", "iclr_2018_r1gs9JgRZ", "iclr_2018_r1gs9JgRZ", "iclr_2018_r1gs9JgRZ", "SyetJ_Pbz", "rJwXkeOgM", "SkSMlWcgG", "SJQ3bonlG", "SkSMlWcgG" ]
iclr_2018_Sy8XvGb0-
Latent Constraints: Learning to Generate Conditionally from Unconditional Generative Models
Deep generative neural networks have proven effective at both conditional and unconditional modeling of complex data distributions. Conditional generation enables interactive control, but creating new controls often requires expensive retraining. In this paper, we develop a method to condition generation without retraining the model. By post-hoc learning latent constraints, value functions that identify regions in latent space that generate outputs with desired attributes, we can conditionally sample from these regions with gradient-based optimization or amortized actor functions. Combining attribute constraints with a universal “realism” constraint, which enforces similarity to the data distribution, we generate realistic conditional images from an unconditional variational autoencoder. Further, using gradient-based optimization, we demonstrate identity-preserving transformations that make the minimal adjustment in latent space to modify the attributes of an image. Finally, with discrete sequences of musical notes, we demonstrate zero-shot conditional generation, learning latent constraints in the absence of labeled data or a differentiable reward function.
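A toy sketch of the gradient-based conditional sampling recipe described in the abstract. The two "critic" functions below are hand-made differentiable stand-ins (not the paper's trained realism and attribute networks), and the step size, penalty weight, and latent dimensionality are arbitrary; the point is only the mechanics of nudging a latent code to satisfy constraints while staying close to its starting point.

```python
import numpy as np

def realism_score(z):                 # stand-in critic: prefers latents near the unit shell
    return -(np.linalg.norm(z) - 1.0) ** 2

def attribute_score(z):               # stand-in critic: prefers a large first coordinate
    return -(z[0] - 2.0) ** 2

def objective(z, z0, lam=0.1):        # constraints plus a heavy-tailed distance penalty
    return realism_score(z) + attribute_score(z) - lam * np.log1p(np.sum((z - z0) ** 2))

def num_grad(f, z, eps=1e-5):         # finite differences keep the sketch dependency-free
    g = np.zeros_like(z)
    for i in range(z.size):
        d = np.zeros_like(z); d[i] = eps
        g[i] = (f(z + d) - f(z - d)) / (2 * eps)
    return g

rng = np.random.default_rng(0)
z0 = rng.standard_normal(8)           # starting latent code (e.g. a prior sample)
z = z0.copy()
for _ in range(200):                  # gradient ascent in latent space
    z += 0.05 * num_grad(lambda v: objective(v, z0), z)
print("attribute coordinate before/after:", round(float(z0[0]), 3), round(float(z[0]), 3))
```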
accepted-poster-papers
This paper clearly surveys a set of methods related to using generative models to produce samples with desired characteristics. It explores several approaches and extensions to the standard recipe to try to address some weaknesses. It also demonstrates a wide variety of tasks. The exposition and figures are well-done.
train
[ "S1A-vIcgf", "HyRPZlYeG", "HkyYzWcxf", "Byf9aN6Mf", "rJLvjqOzM", "Hk-S95OzM", "Sy5Yu9_Mz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "UPDATE: I think the authors' rebuttal and updated draft address my points sufficiently well for me to update my score and align myself with the other reviewers.\n\n-----\n\nORIGINAL REVIEW: The paper proposes a method for learning post-hoc to condition a decoder-based generative model which was trained unconditionally. Starting from a VAE trained with an emphasis on good reconstructions (and at the expense of sample quality, via a small hard-coded standard deviation on the conditional p(x | z)), the authors propose to train two \"critic\" networks on the latent representation:\n\n1. The \"realism\" critic receives either a sample z ~ q(z) (which is implicitly defined as the marginal of q(z | x) over all empirical samples) or a sample z ~ p(z) and must tell them apart.\n2. The \"attribute\" critic receives either a (latent code, attribute) pair from the dataset or a synthetic (latent code, attribute) pair (obtained by passing both the attribute and a prior sample z ~ p(z) through a generator) and must tell them apart.\n\nThe goal is to find a latent code which satisfies both the realism and the attribute-exhibiting criteria, subject to a regularization penalty that encourages it to stay close to its starting point.\n\nIt seems to me that the proposed realism constraint hinges exclusively on the ability to implictly capture the marginal distribution q(z) via a trained discriminator. Because of that, any autoencoder could be used in conjunction with the realism constraint to obtain good-looking samples, including the identity encoder-decoder pair (in which case the problem reduces to generative adversarial training). I fail to see why this observation is VAE-specific. The authors do mention that the VAE semantics allow to provide some weak form of regularization on q(z) during training, but the way in which the choice of decoder standard deviation alters the shape of q(z) is not explained, and there is no justification for choosing one standard deviation value in particular.\n\nWith that in mind, the fact that the generator mapping prior samples to \"realistic\" latent codes works is expected: if the VAE is trained in a way that encourages it to focus almost exclusively on reconstruction, then its prior p(z) and its marginal q(z) have almost nothing to do with each other, and it is more convenient to view the proposed method as a two-step procedure in which an autoencoder is first trained, and an appropriate prior on latent codes is then learned. In other words, the generator represents the true prior by definition.\n\nThe paper is also rather sparse in terms of comparison with existing work. Table 1 does compare with Perarnau et al., but as the caption mentions, the two methods are not directly comparable due to differences in attribute labels.\n\nSome additional comments:\n\n- BiGAN [1] should be cited as concurrent work when citing (Dumoulin et al., 2016).\n- [2] and [3] should be cited as concurrent work when citing (Ulyanov et al., 2016).\n\nOverall, the relative lack of novelty and comparison with previous work make me hesitant to recommend the acceptance of this paper.\n\nReferences:\n\n[1] Donahue, J., Krähenbühl, P., and Darrell, T. (2017). Adversarial feature learning. In Proceedings of the International Conference on Learning Representations.\n[2] Li, C., and Wand, M. (2016). Precomputed real-time texture synthesis with markovian generative adversarial networks. In European Conference on Computer Vision.\n[3] Johnson, J., Alahi, A., and Fei-Fei, L. (2016). 
Perceptual losses for real-time style transfer and super-resolution. In European Conference on Computer Vision.", "This paper considers the problem of generating conditional samples from unconditional models, such that one can query the learned model with a particular set of attributes to receive conditional samples. Key to achieving this is the introduction of a realism constraint that encourages samples to be more realistic without degrading their reconstruction and a critic which identifies regions of the latent space with targeted attributes. Generating conditional samples then involves finding points in the latent space which satisfy both the realism constraint and the critic. This is carried out either used gradient-based optimization or using an actor function which tries to amortize this process.\n\nThis paper is clearly on a timely topic and addresses an important problem. The low-level writing is good and the paper uses figures effectively to explain its points. The qualitative results presented are compelling and the approaches taken seem reasonable. On the downside, the quantitative evaluation of method does not seem very thorough and the approach seems quite heuristical at times. Overall though, the paper seems like a solid step in a good direction with some clearly novel ideas.\n\nMy two main criticisms are as follows\n1. The evaluation of the method is generally subjective without clear use of baselines or demonstration of what would do in the absence of this work - it seems like it works, but I feel like I have a very poor grasp of relative gains. There is little in the way of quantitative results and no indication of timing is given at any point. Given that the much of the aim of the work is to avoid retraining, I think it is clear to show that the approach can be run sufficiently quickly to justify its approach over naive alternatives.\n\n2. I found the paper rather hard to follow at times, even though the low-level writing is good. I think a large part of this is my own unfamiliarity with the literature, but I also think that space has been prioritized to showing off the qualitative results at the expense of more careful description of the approach and the evaluation methods. This is a hard trade-off to juggle but I feel that the balance is not quite right at the moment. I think this is a paper where it would be reasonable to go over the soft page limit by a page or so to provide more precise descriptions. Relatedly, I think the authors could do a better job of linking the different components of the paper together as they come across a little disjointed at the moment.", "# Paper overview:\nThis paper presents an analysis of a basket of approaches which together enable one to sample conditionally from a class of \ngenerative models which have been trained to match a joint distribution. Latent space constraints (framed as critics) are learned which confine the generating distribution to lie in a conditional subspace, which when combined with what is termed a 'realism' constraint enables the generation of realistic conditional images from a more-or-less standard VAE trained to match the joint data-distribution.\n\n'Identity preserving' transformations are then introduced within the latent space, which allow the retrospective minimal modification of sample points such that they lie in the conditional set of interest (or not). 
Finally, a brief foray into unsupervised techniques for learning these conditional constraints is made, a straightforward extension which I think clouds rather than enlightens the overall exposition.\n\n# Paper discussion:\nI think this is a nicely written paper, which gives a good explanation of the problem and their proposed innovations, however I am curious to see that the more recent \"Plug & Play Generative Networks: Conditional Iterative Generation of Images in Latent Space\" by Nguyen et al. was not cited. This is an empirically very successful approach for conditional generation at 'test-time'. \n\nOther minor criticisms include:\n* I find the 'realism' constraint a bit weak, but perhaps it is simply a naming issue. Did you experiment with alternative approaches for encouraging marginal probability mass?\n\n* The regularisation term L_dist, why this and not log(1 + exp(z' - z)) (or many arbitrary others)? \n\n* The claim of identity preservation is (to me) a strong one: it would truly be hard to minimise the trajectory distance wrt. the actual 'identity' of the subject.\n\n* For Figure 6 I would prefer a different colourscheme: the red does not show up well on screen.\n\n* \"Furthermore, CGANs and CVAEs suffer from the same problems of mode-collapse and blurriness as their unconditional cousins\" -> this is debateable, there are many papers which employ various methods to (attempt to) alleviate this issue.\n\n\n# Conclusion:\nI think this is a nice piece of work, if the authors can confirm why \"Plug & Play Generative Networks: Conditional Iterative Generation of Images in Latent Space\" is not placed relative to this work in the paper, I would be happy to see it published. If stuck for space, I would personally recommend moving the one-shot generation section to the appendix as I do not think it adds a huge amount to the overall exposition.", "Thanks for your response and the revision. I like the updates, particularly Figure 1 which I think is now much better. Just in case it wasn't clear, I had a somewhat silly typo in my original review where it should have read \"I think it is key\" rather than \"I think it is clear\", sorry about that. I'm happy with your updates to that comment anyway though.\n\nI still feel the paper is relatively weak in the level of quantitative comparison to previous methods. However, I appreciate that such comparison is difficult to make for this work and I don't have any reasonable concrete suggestions to improve this.", "Thank you for your time and insight in your review. We've done our best to address your concerns with paper revisions and in the comments below:\n\n\n> ”the way in which the choice of decoder standard deviation alters the shape of q(z) is not explained, and there is no justification for choosing one standard deviation value in particular.”\n\nThis was not made sufficiently clear in the original version. We chose a standard deviation parameter of 0.1 because it maximizes the ELBO. Using ELBO maximization as a hyperparameter selection scheme is a very natural and well-established practice (cf. Bishop's 2006 Pattern Recognition and Machine Learning textbook, for example). 
We have updated the text to highlight this and added Table 4 to the appendix, which shows the very significant improvement in ELBO from sigma=1.0 to sigma=0.1.\n\n\n> “any autoencoder could be used in conjunction with the realism constraint to obtain good-looking samples…”\n\nThis is true enough, and worth emphasizing—our contributions are not specific to VAEs, but can be used to generate good-looking conditional samples from pretrained classical autoencoders. We have added Figure 15 to the supplement, which explores what happens when the VAE’s sigma parameter goes to 0 (equivalent to a classical autoencoder). We obtain reasonably good conditional samples with high-frequency spatial artifacts.\n\nWe focused on VAEs rather than classical AEs both because they have natural sampling semantics and because they produced slightly better results. We believe this is because the KL divergence term encourages q(z) to fill up as much of the latent space as possible (without sacrificing reconstruction quality). This penalty encourages more of the latent space to map to reasonable-looking images.\n\n\n> “...including the identity encoder-decoder pair (in which case the problem reduces to generative adversarial training).”\n\nThis is an interesting observation, and may be true for the simplest version of our approach (although the identity mapping would stretch the definition of “latent” space). But it breaks down when we regularize the GAN to not move too far from the input z vector, which we found was essential to combat mode collapse and find identity-preserving transformations. In that case, it is essential that Euclidean distance in latent space be more meaningful than distance in pixel space, making the identity “autoencoder” a poor choice.\n\n\n> “the way in which the choice of decoder standard deviation alters the shape of q(z) is not explained”\n\nSmaller standard deviations will lead to lower-variance posteriors, and therefore a more concentrated q(z). This may not be obvious to all readers, so we updated the text to emphasize it, and added Supplemental Figure 16, which demonstrates the effect experimentally. \n\n\n> “The paper is also rather sparse in terms of comparison with existing work. Table 1 does compare with Perarnau et al., but as the caption mentions, the two methods are not directly comparable due to differences in attribute labels.”\n\nWe do our best to find work with which to compare, and match experimental conditions, however, there are not well established benchmarks for this type of task. Unfortunately, Perarnau et al. do not list the specific attributes that they selected as most salient, so an exact comparison is not possible. We do our best to match conditions, and provide a list our 10 salient features in supplemental Table 3 for future comparison.\n\n\n> “- BiGAN [1] should be cited as concurrent work when citing (Dumoulin et al., 2016). [2] and [3] should be cited as concurrent work when citing (Ulyanov et al., 2016).”\n\nThank you for bringing these citations to our attention. They are indeed concurrent work with Dumoulin et al.’s and Ulyanov et al.’s work, and we have cited them as such.\n\n", "Thank you for your review. We've incorporated changes to the paper and respond to your main points below:\n\n\n> “I am curious to see that the more recent \"Plug & Play Generative Networks: Conditional Iterative Generation of Images in Latent Space\" by Nguyen et al. was not cited. 
This is an empirically very successful approach for conditional generation at 'test-time'.” \n\n* Thank you for highlighting the paper by Nguyen et al. It is indeed relevant and we have added a citation to the main text. For more context, we highlight several key differences between the papers here below.\n* PPGNs require a high-quality pretrained image-space classifier. This makes them less applicable to domains where very large labeled datasets are unavailable.\n* To generate samples, PPGNs need to apply iterative gradient-based optimization in image space, each step of which requires expensive backpropagation through both a powerful CNN classifier and the generator network. By contrast, our iterative optimization procedure is done entirely in the latent space, which allows our critic networks to be much smaller and dramatically reduces the cost per iteration. Furthermore, our amortized GAN-based sampling approach can generate samples with no iterative optimization at all.\n* PPGNs must backprop through the full generative process, which limits their ability to use non-differentiable generator networks such as the autoregressive VAE we discuss in section 6. Our approach easily handles this non-differentiability, because it operates entirely in the continuous latent space.\n* Finally, we feel that our approach is simpler than PPGNs, in which three DAEs are trained to minimize a stochastic four-term loss.\n\n\n> “I find the 'realism' constraint a bit weak, but perhaps it is simply a naming issue. Did you experiment with alternative approaches for encouraging marginal probability mass?”\n\nWe considered the name “marginal posterior constraint” which was more specific, but less concise. Since the marginal posterior, q(z), corresponds to real datapoints, we consider “realism” to be a fair name for an implicit constraint that makes samples more similar to q(z).\n\nAs we note in the future work section, there are other ways to constrain sampling to the marginal posterior, such as learning an explicit autoregressive density model (indeed, van den Oord et al. proposed just such an approach in their very recent paper “Neural Discrete Representation Learning”, although they argued that using discrete latent variables was essential to their success). We feel that combining these approaches is an interesting avenue to consider, but for simplicity we focused on implicit constraints.\n\n\n> “The regularisation term L_dist, why this and not log(1 + exp(z' - z)) (or many arbitrary others)?” \n\nThe form of L_dist was inspired by the log-density of a student-t distribution, which we chose because it penalizes outliers less than the more obvious MSE regularizer (we found MSE regularization to be quite sensitive to the hyperparameter lambda_dist). There are indeed a number of other similarly heavy-tailed functions that could work; we did not experiment extensively with these, but a better choice may well exist.\n\n\n> “The claim of identity preservation is (to me) a strong one: it would truly be hard to minimise the trajectory distance wrt. the actual 'identity' of the subject.”\n\nIndeed, without conditioning on explicit labels it is hard to rigorously define identity preservation, let alone enforce it. We use the phrase “identity preserving” to emphasize that we are trying to match not only attributes, but also whatever other latent structure was discovered by the unconditional generative model. 
Empirically, we feel this approach produces results that, while not perfect, match intuitive notions of identity preservation much better than models that only attempt to match attributes.\n\n\n> “For Figure 6 I would prefer a different colourscheme: the red does not show up well on screen.”\n\nPoint well taken. We’ve changed to grey so that it looks like an extension of the black keys on the piano and contrasts more with the red notes which are out of the key of C.", "Thank you for your time and expertise in your review, we've addressed the key points below:\n\n> “Given that the much of the aim of the work is to avoid retraining, I think it is clear to show that the approach can be run sufficiently quickly to justify its approach over naive alternatives.”\n\nThank you for highlighting that computational efficiency is indeed one of the strengths of this approach. Retraining the whole VAE can be shortcut by training a much smaller and less expensive actor-critic pair on user preferences. While our initial experiments did not focus on computational efficiency, we have since repeated the experiments and found similar results (Supplemental Figure 14 and Table 1), are achievable with a much smaller model (~85x fewer parameters than original generator / discriminator, ~2884x fewer FLOPS/iter than training the VAE). We have updated Table 1 and the main text to emphasize this. \n\n\n> 2. “...I think the authors could do a better job of linking the different components of the paper together as they come across a little disjointed at the moment”\n\nWe agree that there are several interwoven elements to the story. To better summarize and clarify the experimental design we have overhauled and streamlined the visual depiction in Figure 1.\n\n" ]
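The responses above explain that the decoder standard deviation was selected by maximizing the ELBO and that a smaller sigma weights reconstruction more heavily. The snippet below is only the generic Gaussian-decoder algebra behind that claim (no numbers from the paper): for a fixed reconstruction error, the per-dimension log-likelihood scales the error by 1/sigma^2, so small errors are rewarded and large errors are punished far more strongly at sigma = 0.1 than at sigma = 1.0.

```python
import numpy as np

def gaussian_logpdf(err, sigma):
    # log N(err; 0, sigma^2): the reconstruction term of a Gaussian decoder
    return -0.5 * (err / sigma) ** 2 - np.log(sigma) - 0.5 * np.log(2 * np.pi)

for err in (0.05, 0.5):
    for sigma in (1.0, 0.1):
        print(f"err={err}, sigma={sigma}: log-likelihood {gaussian_logpdf(err, sigma):.2f}")
```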
[ 7, 7, 7, -1, -1, -1, -1 ]
[ 4, 3, 3, -1, -1, -1, -1 ]
[ "iclr_2018_Sy8XvGb0-", "iclr_2018_Sy8XvGb0-", "iclr_2018_Sy8XvGb0-", "Sy5Yu9_Mz", "S1A-vIcgf", "HkyYzWcxf", "HyRPZlYeG" ]
iclr_2018_ByOExmWAb
MaskGAN: Better Text Generation via Filling in the _______
Neural text generation models are often autoregressive language models or seq2seq models. Neural autoregressive and seq2seq models that generate text by sampling words sequentially, with each word conditioned on the previous word, are state-of-the-art for several machine translation and summarization benchmarks. These benchmarks are often defined by validation perplexity even though this is not a direct measure of sample quality. Language models are typically trained via maximum likelihood and most often with teacher forcing. Teacher forcing is well-suited to optimizing perplexity but can result in poor sample quality because generating text requires conditioning on sequences of words that were never observed at training time. We propose to improve sample quality using Generative Adversarial Networks (GANs), which explicitly train the generator to produce high quality samples and have shown a lot of success in image generation. GANs were originally designed to output differentiable values, so discrete language generation is challenging for them. We introduce an actor-critic conditional GAN that fills in missing text conditioned on the surrounding context. We show, qualitatively and quantitatively, evidence that this produces more realistic text samples compared to a maximum likelihood trained model.
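A minimal sketch of the fill-in-the-blank setup described in the abstract, using the contiguous masking strategy discussed in the author responses below. The mask token, mask rate, and example sentence are illustrative choices, not taken from the paper.

```python
import random

MASK = "<m>"

def contiguous_mask(tokens, mask_rate=0.5, seed=0):
    """Replace one contiguous span of tokens with MASK symbols."""
    rng = random.Random(seed)
    span = max(1, int(len(tokens) * mask_rate))
    start = rng.randrange(0, len(tokens) - span + 1)
    masked = list(tokens)
    masked[start:start + span] = [MASK] * span
    return masked, (start, start + span)

tokens = "the film was an utter delight from start to finish".split()
masked, (lo, hi) = contiguous_mask(tokens)
print(" ".join(masked))
# The generator in-fills positions lo..hi-1 left to right, conditioned on the unmasked
# context; the discriminator then scores each filled-in token, which is what provides
# the per-time-step training signal.
```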
accepted-poster-papers
This paper makes progress on the open problem of text generation with GANs, by a sensible combination of novel approaches. The method was described clearly, and is somewhat original. The only problem is the hand-engineering of the masking setup.
train
[ "rkgAfEoeG", "S1K1k4wVM", "HJrrYeDNf", "SytHGsLVG", "Sy4HaTtlz", "HyHlSKjlG", "HJtlpRN7f", "ByoC3CNXf", "H1tBxZ_Mz", "B1V31Wdff", "r1k4JZOGM" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "Generating high-quality sentences/paragraphs is an open research problem that is receiving a lot of attention. This text generation task is traditionally done using recurrent neural networks. This paper proposes to generate text using GANs. GANs are notorious for drawing images of high quality but they have a hard time dealing with text due to its discrete nature. This paper's approach is to use an actor-critic to train the generator of the GAN and use the usual maximum likelihood with SGD to train the discriminator. The whole network is trained on the \"fill-in-the-blank\" task using the sequence-to-sequence architecture for both the generator and the discriminator. At training time, the generator's encoder computes a context representation using the masked sequence. This context is conditioned upon to generate missing words. The discriminator is similar and conditions on the generator's output and the masked sequence to output the probability of a word in the generator's output being fake or real. With this approach, one can generate text at test time by setting all inputs to blanks. \n\nPros and positive remarks: \n--I liked the idea behind this paper. I find it nice how they benefited from context (left context and right context) by solving a \"fill-in-the-blank\" task at training time and translating this into text generation at test time. \n--The experiments were well carried through and very thorough.\n--I second the decision of passing the masked sequence to the generator's encoder instead of the unmasked sequence. I first thought that performance would be better when the generator's encoder uses the unmasked sequence. Passing the masked sequence is the right thing to do to avoid the mismatch between training time and test time.\n\nCons and negative remarks:\n--There is a lot of pre-training required for the proposed architecture. There is too much pre-training. I find this less elegant. \n--There were some unanswered questions:\n (1) was pre-training done for the baseline as well?\n (2) how was the masking done? how did you decide on the words to mask? was this at random?\n (3) it was not made very clear whether the discriminator also conditions on the unmasked sequence. It needs to but \n that was not explicit in the paper.\n--Very minor: although it is similar to the generator, it would have been nice to see the architecture of the discriminator with example input and output as well.\n\n\nSuggestion: for the IMDB dataset, it would be interesting to see if you generate better sentences by conditioning on the sentiment as well.\n", "I acknowledge your rebuttal. I am updating my rating from 6 to 7 in light of it.", "Thanks again! Before the review process concludes, do you have any outstanding questions regarding our rebuttal which includes the additional experiments on pretraining and our chosen masking strategy? In particular, we'd be interested in your opinion on the MaskGAN algorithm in light of evidence that it functions with less pretraining. Finally, our paper revision seeks to further strengthen our result by comparing against LSTM baselines. ", "I am happy with the author's revision. The points I raised earlier have been addressed appropriately. 
The importance of the MaskGAN mechanism has been highlighted and the description of the reinforcement learning training part has been clarified.\n\nMy other concern with the Masking strategy has been addressed and the two masking strategies have been described in detail.", "Quality: The work focuses on a novel problem of generating text samples using GANs and a novel in-filling mechanism for words. Using GANs to generate samples in an adversarial setup for text has been limited due to the mode collapse and training instability issues. As a remedy to these problems, an in-filling task conditioned on the surrounding text has been proposed. But the use of rewards at every time step (RL mechanism) in the actor-critic training procedure could be computationally challenging.\n\nClarity: The mechanism of generating the text samples using the proposed methodology has been described clearly. However, the description of the reinforcement learning step could have been made a bit more clear.\n\nOriginality: The work indeed uses a novel mechanism of in-filling via a conditioning approach to overcome the difficulties of GAN training in text settings. There has been some work using GANs to generate adversarial examples in the textual context to check the robustness of classifiers. How does this current work compare with such existing literature?\n\nSignificance: The research problem is indeed significant since the use of GANs in generating adversarial examples in image analysis has been more prevalent compared to text settings. Also, the proposed actor-critic training procedure via RL methodology is indeed significant for its application in natural language processing.\n\npros:\n(a) Human evaluations on several datasets show the usefulness of MaskGAN over the maximum likelihood trained model in generating more realistic text samples.\n(b) Using a novel in-filling procedure to overcome the complexities in GAN training.\n(c) Generation of high-quality samples even with higher perplexity on the ground-truth set.\n\ncons:\n(a) Use of rewards at every time step in the actor-critic training procedure could be computationally expensive.\n(b) How to overcome the situation where in-filling might introduce implausible text sequences with respect to the surrounding words?\n(c) Depending on the mask quality, the GAN can produce low-quality samples. Any practical way of choosing the mask?", "This paper proposes MaskGAN, a GAN-based generative model of text based on\nthe idea of recovery from masked text. \nFor this purpose, the authors employed a reinforcement learning approach to\noptimize a prediction from masked text. Moreover, the authors argue that the \nquality of generated texts is not appropriately measured by perplexities,\nthus using another criterion, the diversity of generated n-grams, as well as\nqualitative evaluations by examples and by humans.\n\nWhile basically the approach seems plausible, the issue is that the result is\nnot compared to ordinary LSTM-based baselines. While it is better than a \ncounterpart of MLE (MaskedMLE), whether the result is qualitatively better than\nordinary LSTM is still in question.\n\nIn fact, this is already apparent both from the model architectures and the\ngenerated examples: because the model aims to fill in blanks from the text\naround (up to that time), generated texts are generally locally valid but not\nalways valid globally. This issue is also pointed out by the authors in Appendix\nA.2. 
\nWhile the idea of using masks is interesting and important, I wonder whether this\nidea could be implemented in another way, because it resembles Gibbs sampling,\nwhere each token is sampled from its surrounding context, while its objective\nis still global, sentence-wise. As argued in Section 1, the ability to \nobtain signals token-wise looks beneficial at first, but it will actually\nbreak the global validity of syntax and other sentence-wise phenomena.\n\nBased on the arguments above, I think this paper is valuable at least\nconceptually, but doubt whether it is actually usable in place of ordinary LSTM\n(or RNN)-based generation.\nMore arguments are desirable for the advantage of this paper, i.e. a quantitative\nevaluation of the diversity of generated text as opposed to LSTM-based methods.\n\n*Based on the rebuttals and thorough experimental results, I modified the global rating.", "We additionally added results comparing MaskGAN and MaskMLE samples against those from a baseline LSTM language model. Tables 7 and 8 have been updated to include these results.", "We additionally added results comparing MaskGAN and MaskMLE samples against those from a baseline LSTM language model. Tables 7 and 8 have been updated to include these results.", "Thank you for your review! \n\n*Importance and Computational Cost of Actor-Critic*\nWe’d like to address your concern about the importance and the computational challenges of the actor-critic method. We believe that this was a crucial component to get the results we did and it was achieved with no significant additional computational cost. \n\nIn building architectures for this novel task, we were contending with both reinforcement learning challenges as well as GAN-mode collapse issues. Specifically, variance in the gradients to the Generator was a major issue. To remedy this, we simply added a value estimator as an additional head on the Discriminator. The critic estimates the expected value of the current state, conditioned on everything produced before. This is very lightweight in terms of additional parameters since we’re sharing almost all parameters with the Discriminator. We found that using this reduced the advantage to the Generator by over an order of magnitude. This was a critical piece of efficiently training our algorithm. We compared the performance of this actor-critic approach against a standard exponential moving average baseline and found there to be no significant difference in training step time.\n\n*Clarity*\nThanks, and we updated the writing to more clearly delineate the reinforcement learning training.\n\n*Originality*\nAs far as we are aware, no work has considered this conditional task where a per-time-step reward is architected in. Additionally, our use of an actor-critic methodology in GAN-training is a minimally explored avenue. Finally, the existing literature on textual adversarial examples focuses on classifier accuracy and generally doesn't do human evaluations on the quality of the generated examples as we do.\n\n*Masking Strategy*\nWe predominantly evaluated two masking strategies at training time. One was a completely random mask and the other was a contiguous mask, where blocks of adjacent words are masked. Though we were able to train with both strategies, we found that the random mask was more difficult to train. However, and more significantly, the random mask doesn’t share the primary benefit of GAN autoregressive text generation (termed free-running mode in the literature). 
One can see this because for a given percentage of words to omit, a Generator given the random mask will fill in shorter sequences autoregressively than the contiguous mask. GAN-training allows our training and inference procedure to be the same, in contrast to teacher-forcing in the maximum likelihood training. Therefore, we generally found it beneficial to allow the model to produce long sequences, conditioned on what it had produced before, rather than filling in short disjoint sequences or even single tokens. ", "Thank you for your review!\n\n*Pretraining*\nWe found evidence that this architecture could replicate simple data distributions without pretraining and found it could perform reasonably on larger data sets; however, in the interest of computational efficiency, we relied on pretraining procedures, similar to other work in this field. All our baselines also included pre-training.\n\nTo test whether all the pretraining steps were necessary, we experimented with training MaskMLE and MaskGAN on PTB without initializing from a pretrained language model. The perplexity of the generated samples was 117 without pretraining and 126 with pretraining, showing that, at least for PTB, language model pretraining does not appear to be necessary.\n\nModels trained from scratch were found to be more computationally intense. By building off near state-of-the-art language models, we were able to rapidly iterate over architectures thanks to faster convergence. Additionally, we were working at a word-level representation where our softmax is producing a distribution over O(10K) tokens. Attempting reinforcement learning methods from scratch on an ‘action space’ of this magnitude is prone to extreme variance. The likelihood of producing a correct token and receiving a positive reward is exceedingly rare; therefore, the model spends a long time exploring the space with almost always negative rewards. As a related and budding research avenue, one could consider the properties and characteristics of exclusively GAN-trained language models. \n\n*Masking Strategy*\nWe predominantly evaluated two masking strategies at training time. One was a completely random mask and the other was a contiguous mask, where blocks of adjacent words are masked. Though we were able to train with both strategies, we found that the random mask was more difficult to train. However, and more significantly, the random mask doesn’t share the primary benefit of GAN autoregressive text generation (termed free-running mode in the literature). One can see this because for a given percentage of words to omit, a Generator given the random mask will fill in shorter sequences autoregressively than the contiguous mask will. GAN-training allows our training and inference procedure to be the same, in contrast to teacher-forcing in maximum likelihood training. Therefore, we generally found it beneficial to allow the model to produce long sequences, conditioned on what it had produced before, rather than filling in short disjoint sequences or even single tokens. ", "Thank you for your review and comments!\n\nWe reiterate your two primary concerns as the following:\n1. A standard LSTM-baseline of a non-masked task should be included.\n2. The MaskGAN algorithm is enforcing only local consistency within text, but does not aid with global consistency. \n\n*Standard Baselines*\nTo address your first concern, we added a thorough human evaluation of a language model (LM) LSTM baseline. 
We use the samples produced from our Variational Dropout-LSTM language model and evaluate the resulting sample quality for both the PTB and IMDB datasets using Amazon Mechanical Turk. You can see these results updated in our paper in Table 7 and Table 8. We demonstrate that the MaskGAN training algorithm results in improvements over both the language model and the MaskMLE benchmarks on all three metrics: grammaticality, topicality and overall quality. In particular, MaskGAN samples are preferred over LM LSTM baseline samples, 58.0% vs 15.7% of the time for IMDB reviews.\n\n*Local vs. Global Consistency*\nIn regards to your comment on Gibbs sampling, we do agree that this would likely be a valid and helpful technique for inference. In our paper, we in-fill our samples autoregressively from left to right, as is conventional in language modeling. (This approach allows for fast unconditional generation as with the LM baseline and is what our human evaluation is targeted at). This autoregressive process relies on the attention module of our decoder in order to provide full context during the sampling process. For instance, when the decoder is producing the probability distribution over token x_t, it is attending over the future context to create this distribution. If the subject of the sentence is known to be a female leader and the model is generating a pronoun, the model has the ability to attend to the future context and select the correct gender-matched pronoun. If the model fails to do this, a well-trained discriminator will ascribe a low reward to this pronoun selection which in turn will generate useful gradients through the attention mechanism. We have observed this behaviour during preliminary experiments. We argue that global consistency is built into this architecture but to solve the boundary problems in appendix C.2, allowing the autoregressive model decide when to stop instead of forcing it to output a fix number of words may resolve some of the syntactic issues. \n\nWe also expand table 6 to show the diversity of the generated samples compared to a standard LM-LSTM." ]
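The responses above describe per-time-step rewards from the discriminator and a critic head whose value estimate is subtracted to reduce gradient variance. Below is a generic, self-contained sketch of that advantage computation; all numbers (rewards, baseline values, discount) are made up for illustration, and this is not the paper's implementation.

```python
import numpy as np

gamma = 0.99
rewards = np.log(np.array([0.9, 0.2, 0.7, 0.8]))   # e.g. log D(token_t) for 4 in-filled tokens
baseline = np.array([-0.5, -0.9, -0.4, -0.3])      # critic's value estimates V(s_t)

returns = np.zeros_like(rewards)
running = 0.0
for t in reversed(range(len(rewards))):             # discounted return from each step onward
    running = rewards[t] + gamma * running
    returns[t] = running

advantages = returns - baseline                     # lower-variance learning signal
# REINFORCE-style update per step: advantage_t * grad log pi(token_t | context)
print(np.round(returns, 3), np.round(advantages, 3))
```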
[ 7, -1, -1, -1, 7, 7, -1, -1, -1, -1, -1 ]
[ 4, -1, -1, -1, 3, 5, -1, -1, -1, -1, -1 ]
[ "iclr_2018_ByOExmWAb", "HJrrYeDNf", "ByoC3CNXf", "H1tBxZ_Mz", "iclr_2018_ByOExmWAb", "iclr_2018_ByOExmWAb", "H1tBxZ_Mz", "B1V31Wdff", "Sy4HaTtlz", "rkgAfEoeG", "HyHlSKjlG" ]
iclr_2018_B1jscMbAW
Divide and Conquer Networks
We consider the learning of algorithmic tasks by mere observation of input-output pairs. Rather than studying this as a black-box discrete regression problem with no assumption whatsoever on the input-output mapping, we concentrate on tasks that are amenable to the principle of divide and conquer, and study what are its implications in terms of learning. This principle creates a powerful inductive bias that we leverage with neural architectures that are defined recursively and dynamically, by learning two scale-invariant atomic operations: how to split a given input into smaller sets, and how to merge two partially solved tasks into a larger partial solution. Our model can be trained in weakly supervised environments, namely by just observing input-output pairs, and in even weaker environments, using a non-differentiable reward signal. Moreover, thanks to the dynamic aspect of our architecture, we can incorporate the computational complexity as a regularization term that can be optimized by backpropagation. We demonstrate the flexibility and efficiency of the Divide-and-Conquer Network on several combinatorial and geometric tasks: convex hull, clustering, knapsack and Euclidean TSP. Thanks to the dynamic programming nature of our model, we show significant improvements in terms of generalization error and computational complexity.
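A plain-Python skeleton of the split/merge recursion the abstract describes, with sorting standing in for a learned task: in the proposed model both the split and merge operations would be neural blocks (and the split need not be a simple middle cut), so the hand-coded functions below only illustrate the recursive structure and its divide-and-conquer complexity, not the learned architecture itself.

```python
def split(items):                       # stand-in for the learned split block
    mid = len(items) // 2
    return items[:mid], items[mid:]

def merge(left, right):                 # stand-in for the learned merge block
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

def divide_and_conquer(items):
    if len(items) <= 1:                 # atomic sub-problem: solve directly
        return list(items)
    a, b = split(items)
    return merge(divide_and_conquer(a), divide_and_conquer(b))

print(divide_and_conquer([5, 2, 9, 1, 7, 3]))   # -> [1, 2, 3, 5, 7, 9]
```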
accepted-poster-papers
The paper proposes a unique network architecture that can learn divide-and-conquer strategies to solve algorithmic tasks, mimicking a class of standard algorithms. The paper is clearly written, and the experiments are diverse. It also seems to point in the direction of a wider class of algorithm-inspired neural net architectures.
train
[ "H1wZwQwef", "ByZKLz5gz", "B1Qwc-LWf", "SkeOr1imz", "rkGXn5S-z" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author" ]
[ "This paper proposes to add new inductive bias to neural network architecture - namely a divide and conquer strategy know from algorithmics. Since introduced model has to split data into subsets, it leads to non-differentiable paths in the graph, which authors propose to tackle with RL and policy gradients. The whole model can be seen as an RL agent, trained to do splitting action on a set of instances in such a way, that jointly trained predictor T quality is maximised (and thus its current log prob: log p(Y|P(X)) becomes a reward for an RL agent). Authors claim that model like this (strengthened with pointer networks/graph nets etc. depending on the application) leads to empirical improvement on three tasks - convex hull finding, k-means clustering and on TSP. However, while results on convex hull task are good, k-means ones use a single, artificial problem (and do not test DCN, but rather a part of it), and on TSP DCN performs significantly worse than baselines in-distribution, and is better when tested on bigger problems than it is trained on. However the generalisation scores themselves are pretty bad thus it is not clear if this can be called a success story.\n\nI will be happy to revisit the rating if the experimental section is enriched.\n\nPros:\n- very easy to follow idea and model\n- simple merge or RL and SL in an end-to-end trainable model\n- improvements over previous solutions\n\nCons:\n- K-means experiments should not be run on artificial dataset, there are plenty of benchmarking datasets out there. In current form it is just a proof of concept experiment rather than evaluation (+ if is only for splitting, not for the entire architecture proposed). It would be also beneficial to see the score normalised by the cost found by k-means itself (say using Lloyd's method), as otherwise numbers are impossible to interpret. With normalisation, claiming that it finds 20% worse solution than k-means is indeed meaningful. \n- TSP experiments show that \"in distribution\" DCN perform worse than baselines, and when generalising to bigger problems they fail more gracefully, however the accuracies on higher problem are pretty bad, thus it is not clear if they are significant enough to claim success. Maybe TSP is not the best application of this kind of approach (as authors state in the paper - it is not clear how merging would be applied in the first place). \n- in general - experimental section should be extended, as currently the only convincing success story lies in convex hull experiments\n\nSide notes:\n- DCN is already quite commonly used abbreviation for \"Deep Classifier Network\" as well as \"Dynamic Capacity Network\", thus might be a good idea to find different name.\n- please fix \\cite calls to \\citep, when authors name is not used as part of the sentence, for example:\nGraph Neural Network Nowak et al. (2017) \nshould be\nGraph Neural Network (Nowak et al. 
(2017))\n\n# After the update\n\nEvaluation section has been updated threefold:\n- TSP experiments are now in the appendix rather than main part of the paper\n- k-means experiments are Lloyd-score normalised and involve one Cifar10 clustering\n- Knapsack problem has been added\n\nPaper significantly benefited from these changes, however experimental section is still based purely on toy datasets (clustering cifar10 patches is the least toy problem, but if one claims that proposed method is a good clusterer one would have to beat actual clustering techniques to show that), and in both cases simple problem-specific baseline (Lloyd for k-means, greedy knapsack solver) beats proposed method. I can see the benefit of trainable approach here, the fact that one could in principle move towards other objectives, where deriving Lloyd alternative might be hard; however current version of the paper still does not show that.\n\nI increased rating for the paper, however in order to put the \"clear accept\" mark I would expect to see at least one problem where proposed method beats all basic baselines (thus it has to either be the problem where we do not have simple algorithms for it, and then beating ML baseline is fine; or a problem where one can beat the typical heuristic approaches).\n\n", "This paper studies problems that can be solved using a dynamic programming approach and proposes a neural network architecture called Divide and Conquer Networks (DCN) to solve such problems. The network has two components: one component learns to split the problem and the other learns to combine solutions to sub-problems. Using this setup, the authors are able to beat sequence to sequence baselines on problems that are amenable to such an approach. In particular the authors test their approach on computing convex hulls, computing a minimum cost k-means clustering, and the Euclidean Traveling Salesman Problem (TSP) problem. In all three cases, the proposed solution outperforms the baselines on larger problem instances. ", "Summary of paper:\n\nThe paper proposes a unique network architecture that can learn divide-and-conquer strategies to solve algorithmic tasks.\n\nReview:\n\nThe paper is clearly written. It is sometimes difficult to communicate ideas in this area, so I appreciate the author's effort in choosing good notation. Using an architecture to learn how to split the input, find solutions, then merge these is novel. Previous work in using recursion to solve problems (Cai 2017) used explicit supervision to learn how to split and recurse. The ideas and formalism of the merge and partition operations are valuable contributions. \n\nThe experimental side of the paper is less strong. There are good results on the convex hull problem, which is promising. There should also be a comparison to a k-means solver in the k-means section as an additional baseline. I'm also not sure TSP is an appropriate problem to demonstrate the method's effectiveness. Perhaps another problem that has an explicit divide and conquer strategy could be used instead. It would also be nice to observe failure cases of the model. This could be done by visually showing the partition constructed or seeing how the model learned to merge solutions.\n\nThis is a relatively new area to tackle, so while the experiments section could be strengthened, I think the ideas present in the paper are important and worth publishing.\n\nQuestions:\n\n1. What is \\rho on page 4? I assume it is some nonlinearity, but this was not specified.\n2. 
On page 5, it says the merge block takes as input two sequences. I thought the merge block was defined on sets? \n\nTypos:\n1. Author's names should be enclosed in parentheses unless part of the sentence.\n2. I believe \"then\" should be removed in the sentence \"...scale invariance, then exploiting...\" on page 2.", "First of all, we thank the three reviewers for their insightful comments on our work.\n\nWe have updated the paper. The main changes are:\n- Changed the abbreviation from DCN to DiCoNet to avoid conflicts.\n- Changed k-means split block from set2set to GNN.\n- Compared k-means to Lloyd's.\n- Added non-synthetic dataset for k-means: Patches of CIFAR-10 images.\n- Added KnapSack problem.\n- Moved TSP to appendix.\n\nAnonReviewer4:\n\nComment1: There should also be a comparison to a k-means solver in the k-means section as an additional baseline.\n\nAns1: We agree with this comment on the k-means experimental section. We have updated the k-means\nsection in the following way:\n 1 - We have changed the split block into a GNN to gain in expressivity (both for the DiCoNet and\n Baseline). As explained in the text, the graph is created using a Gaussian kernel.\n 2 - We compare its performance with Lloyd's algorithm and Recursive Lloyd's (i.e, solving\n binary clustering recursively with Lloyd's algorithm). The performance results are shown as a ratio\n between the model costs after convergence and the algorithms output cost.\n 3 - We have used a non-synthetic dataset. We have taken 3x3x3 patches of images of the CIFAR-10 dataset and applied \n the clustering models/algorithms for a pre-specified dyadic number of intervals.\n\nComment2: I'm also not sure TSP is an appropriate problem to demonstrate the method's effectiveness. Perhaps another problem that has an explicit divide and conquer strategy could be used instead.\n\nAns2: We have moved the TSP problem to the appendix and introduced the Knapsack problem, which was also\ntackled in (Irwan Bello, Hieu Pham et al. '17). This problem has a clear recursive \nstructure, and we reaffirm this with the DiCoNet performance in the experiments.\n\nComment3: It would also be nice to observe failure cases of the model.\n\nAns3: Actually, DiCoNet performance on TSP is not that good compared to other problems due to the low\nlevel of scale invariance compared to them.\n\nComment4: What is \\rho on page 4? I assume it is some nonlinearity, but this was not specified.\n\nAns4: You are right. \\rho is a pointwise non-lineariy. In particular, \\rho is a sigmoid for the set2set model (split block of the convex hull), and a ReLu for the GNN.\n\nComment 5: On page 5, it says the merge block takes as input two sequences. I thought the merge block was defined on sets? \n\nAns5: The goal of the split block is to find a partition over sets (or structured sets as graphs).\nThe merge block takes into account the order of the previously solved instances. For instance, \nin mergesort (when it merges two already ordered sequences), or the convex hull (where the previously solved instances are sequences of points ordered clockwise or counter-clockwise).\n\nComment6: Author's names should be enclosed in parentheses unless part of the sentence.\n\nAns6: Solved.\n\nComment7: I believe \"then\" should be removed in the sentence \"...scale invariance, then exploiting...\" on page 2.\n\nAns7: You are right, solved.\n\nAnonReviewer2:\n\nComment8: K-means experiments should not be run on artificial dataset, there are plenty of benchmarking datasets out there. 
In current form it is just a proof of concept experiment rather than evaluation (+ if is only for splitting, not for the entire architecture proposed). It would be also beneficial to see the score normalised by the cost found by k-means itself (say using Lloyd's method), as otherwise numbers are impossible to interpret. With normalisation, claiming that it finds 20% worse solution than k-means is indeed meaningful. \n\nAns8: Same as Ans1.\n\nComment9: TSP experiments show that \"in distribution\" DCN perform worse than baselines, and when generalising to bigger problems they fail more gracefully, however the accuracies on higher problem are pretty bad, thus it is not clear if they are significant enough to claim success. Maybe TSP is not the best application of this kind of approach (as authors state in the paper - it is not clear how merging would be applied in the first place). \n\nAns9: We have moved the TSP section to the appendix.\n\nComment10: in general - experimental section should be extended, as currently the only convincing success story lies in convex hull experiments.\n\nAns10: We have introduced the KnapSack problem to the set of tasks and introduced an extra experiment on k-means with a non-synthetic dataset.\n\nComment11: DCN is already quite commonly used abbreviation for \"Deep Classifier Network\" as well as \"Dynamic Capacity Network\", thus might be a good idea to find different name.\n\nAns11: We have changed the abbreviation to DiCoNet.\n\nComment12: please fix \\cite calls to \\citep, when authors name is not used as part of the sentence, for example:\nGraph Neural Network Nowak et al. (2017) should be Graph Neural Network (Nowak et al. (2017))\n\nAns12: Solved.", "First of all, we thank the reviewer for the comments.\n\nIndeed, we agree with reviewer 2 that k-means experiments should include real datasets and comparisons with Lloyd.\nWe are currently working on updating the results for the k-means in order to illustrate its real performance compared to Lloyd's algorithm in a benchmarking dataset.\nWe are also planning to do a small update on the TSP to multiple scales.\n\n> Side notes:\nWe will consider updating the name of the paper in order to avoid conflicts with existing architectures.\n" ]
[ 6, 7, 7, -1, -1 ]
[ 3, 3, 3, -1, -1 ]
[ "iclr_2018_B1jscMbAW", "iclr_2018_B1jscMbAW", "iclr_2018_B1jscMbAW", "iclr_2018_B1jscMbAW", "H1wZwQwef" ]
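The DiCoNet record above describes the split step as an RL agent trained with policy gradients, with the quality of the downstream solver (for example a k-means-style cost) acting as the reward for the non-differentiable partition. A minimal sketch of that split-then-score loop on a toy two-cluster problem is given below; it is not the authors' implementation, and every function and parameter name in it is made up for illustration.

```python
# Hypothetical sketch (not the authors' code): a Bernoulli split policy trained
# with REINFORCE, rewarded by a fixed downstream solver, here simply the
# negative within-group squared distance of the induced partition.
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(2)   # parameters of a linear per-point split policy (illustrative)
lr = 1e-3         # policy-gradient step size

def split_probs(X, w):
    """Probability that each point is assigned to group 1 (sigmoid of a linear score)."""
    return 1.0 / (1.0 + np.exp(-X @ w))

def reward(X, z):
    """Negative within-group squared distance to the group mean (lower cost = higher reward)."""
    r = 0.0
    for g in (0, 1):
        pts = X[z == g]
        if len(pts) > 0:
            r -= float(np.sum((pts - pts.mean(axis=0)) ** 2))
    return r

for step in range(300):
    # A fresh problem instance: two well-separated 2-D clusters of 16 points each.
    X = np.vstack([rng.normal(-2.0, 0.5, (16, 2)), rng.normal(2.0, 0.5, (16, 2))])
    p = split_probs(X, w)
    rewards, grads = [], []
    for _ in range(8):                                    # several sampled splits of the same instance
        z = (rng.random(len(X)) < p).astype(int)          # hard, non-differentiable partition
        rewards.append(reward(X, z))
        grads.append(((z - p)[:, None] * X).sum(axis=0))  # grad of the split's log-probability
    rewards, grads = np.array(rewards), np.array(grads)
    advantage = rewards - rewards.mean()                  # centre rewards to reduce variance
    w += lr * (advantage[:, None] * grads).mean(axis=0)   # REINFORCE update of the split policy

print("learned split direction:", w)
```

Centring the rewards of several sampled splits is only one common variance-reduction choice; it stands in for whatever baseline the actual training uses, and the fixed cost function stands in for the jointly trained solver described in the reviews.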
iclr_2018_HyjC5yWCW
Meta-Learning and Universality: Deep Representations and Gradient Descent can Approximate any Learning Algorithm
Learning to learn is a powerful paradigm for enabling models to learn from data more effectively and efficiently. A popular approach to meta-learning is to train a recurrent model to read in a training dataset as input and output the parameters of a learned model, or output predictions for new test inputs. Alternatively, a more recent approach to meta-learning aims to acquire deep representations that can be effectively fine-tuned, via standard gradient descent, to new tasks. In this paper, we consider the meta-learning problem from the perspective of universality, formalizing the notion of learning algorithm approximation and comparing the expressive power of the aforementioned recurrent models to the more recent approaches that embed gradient descent into the meta-learner. In particular, we seek to answer the following question: does deep representation combined with standard gradient descent have sufficient capacity to approximate any learning algorithm? We find that this is indeed true, and further find, in our experiments, that gradient-based meta-learning consistently leads to learning strategies that generalize more widely compared to those represented by recurrent models.
accepted-poster-papers
R3 summarizes the reasons for the decision on this paper: "The universal learning algorithm approximator result is a nice result, although I do not agree with the other reviewer that it is a "significant contribution to the theoretical understanding of meta-learning," which the authors have reinforced (although it can probably be considered a significant contribution to the theoretical understanding of MAML in particular). Expressivity of the model or algorithm is far from the main or most significant consideration in a machine learning problem, even in the standard supervised learning scenario. Questions pertaining to issues such as optimization and model selection are just as, if not more, important. These sorts of ideas are explored in the empirical part of the paper, but I did not find the actual experiments in this section to be very compelling. Still, I think the universal learning algorithm approximator result is sufficient on its own for the paper to be accepted."
train
[ "ByJP4Htez", "SyTFKLYgf", "S1CSbaKez", "Sy-YsJ27f", "S1qm_Qvzz", "HyUeuXDfM", "ryN8PmPGG", "SJ5ZD7DGM", "Hy-TUXPzM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "public", "author", "author", "author", "author", "author" ]
[ "This paper studies the capacity of the model-agnostic meta-learning (MAML) framework as a universal learning algorithm approximator. Since a (supervised) learning algorithm can be interpreted as a map from a dataset and an input to an output, the authors define a universal learning algorithm approximator to be a universal function approximator over the set of functions that map a set of data points and an input to an output. The authors show constructively that there exists a neural network architecture for which the model learned through MAML can approximate any learning algorithm. \n\nThe paper is for the most part clear, and the main result seems original and technically interesting. At the same time, it is not clear to me that this result is also practically significant. This is because the universal approximation result relies on a particular architecture that is not necessarily the design one would always use in MAML. This implies that MAML as typically used (including in the original paper by Finn et al, 2017a) is not necessarily a universal learning algorithm approximator, and this paper does not actually justify its empirical efficacy theoretically. For instance, the authors do not even use the architecture proposed in their proof in their experiments. This is in contrast to the classical universal function approximator results for feedforward neural networks, as a single hidden layer feedforward network is often among the family of architectures considered in the course of hyperparameter tuning. This distinction should be explicitly discussed in the paper. Moreover, the questions posed in the experimental results do not seem related to the theoretical result, which seems odd.\n\nSpecific comments and questions: \nPage 4: \"\\hat{f}(\\cdot; \\theta') approximates f_{\\text{target}}(x, y, x^*) up to arbitrary position\". There seems to be an abuse of notation here as the first expression is a function and the second expression is a value.\nPage 4: \"to show universality, we will construct a setting of the weight matrices that enables independent control of the information flow...\". How does this differ from the classical UFA proofs? The relative technical merit of this paper would be more clear if this is properly discussed.\nPage 4: \"\\prod_{i=1}^N (W_i - \\alpha \\nabla_{W_i})\". There seems to be a typo here: \\nabla_{W_i} should be \\nabla_{W_i} L.\nPage 7: \"These error functions effectively lose information because simply looking at their gradient is insufficient to determine the label.\" It would be interesting the compare the efficacy of MAML on these error functions as compared to cross entropy and mean-squared error.\nPage 7: \"(1) can a learner trained with MAML further improve from additional gradient steps when learning new tasks at test time...? (2) does the inductive bias of gradient descent enable better few-shot learning performance on tasks outside of the training distribution...?\". These questions seem unrelated to the universal learning algorithm approximator result that constitutes the main part of the paper. If you're going to study these question empirically, why didn't you also try to investigate them theoretically (e.g. sample complexity and convergence of MAML)? A systematic and comprehensive analysis of these questions from both a theoretical and empirical perspective would have constituted a compelling paper on its own.\nPages 7-8: Experiments. 
What are the architectures and hyperparameters used in the experiments, and how sensitive are the meta-learning algorithms to their choice?\nPage 8: \"our experiments show that learning strategies acquired with MAML are more successful when faced with out-of-domain tasks compared to recurrent learners....we show that the representations acquired with MAML are highly resilient to overfitting\". I'm not sure that such general claims are justified based on the experimental results in this paper. Generalizing to out-of-domain tasks is heavily dependent on the specific level and type of drift between the old and new distributions. These properties aren't studied at all in this work. \n\n\nPOST AUTHOR REBUTTAL: After reading the response from the authors and seeing the updated draft, I have decided to upgrade my rating of the manuscript to a 6. The universal learning algorithm approximator result is a nice result, although I do not agree with the other reviewer that it is a \"significant contribution to the theoretical understanding of meta-learning,\" which the authors have reinforced (although it can probably be considered a significant contribution to the theoretical understanding of MAML in particular). Expressivity of the model or algorithm is far from the main or most significant consideration in a machine learning problem, even in the standard supervised learning scenario. Questions pertaining to issues such as optimization and model selection are just as, if not more, important. These sorts of ideas are explored in the empirical part of the paper, but I did not find the actual experiments in this section to be very compelling. Still, I think the universal learning algorithm approximator result is sufficient on its own for the paper to be accepted.\n", "The paper tries to address an interesting question: does deep representation combined with standard gradient descent have sufficient capacity to approximate any learning algorithm. The authors provide answers, both theoretically and empirically.\n\nThe presentation could be further improved. For example, \n\n-the notation $\\mathcal{L}$ is inconsistent. It has different inputs at each location.\n-the bottom of page 5, \"we then define\"?\n-I couldn't understand the sentence \"can approximate any continuous function of (x,y,x^*) on compact subsets of R^{dim(y)}\" in Lemma 4.1\". \n-before Equation (1), \"where we will disregard the last term..\" should be further clarified.\n-the paragraph before Section 4. \"The first goal of this paper is to show that f_{MAML} is a universal function approximation of (D_{\\mathcal{T}},x^*)\"? A function can only approximate the same type function.", "The paper provides proof that gradient-based meta-learners (e.g. MAML) are \"universal leaning algorithm approximators\".\n\nPro:\n- Generally well-written with a clear (theoretical) goal\n- If the K-shot proof is correct*, the paper constitutes a significant contribution to the theoretical understanding of meta-learning.\n- Timely and relevant to a large portion of the ICLR community (assuming the proofs are correct)\n\nCon:\n- The theoretical and empirical parts seem quite disconnected. The theoretical results are not applied nor demonstrated in the empirical section and only functions as an underlying premise. 
I wonder if a purely theoretical contribution would be preferable (or with even fewer empirical results).\n\n* It has not yet been possible for me to check all the technical details and proofs.\n", "I want to thank the authors for preparing the paper.\nThe paper clearly shows that model-agnostic meta-learning (MAML) can approximate any learning algorithm.\nThis was not obvious to me before.\n\nI have now more confidence to apply MAML on many new tasks.", "> “I'm not sure that such general claims are justified based on the experimental results in this paper. Generalizing to out-of-domain tasks is heavily dependent on the specific level and type of drift between the old and new distributions. These properties aren't studied at all in this work.” \nWe modified the first-mentioned claim to be more precise. We agree that out-of-domain generalization is heavily dependent on both the task and the form of drift. Thus, we aimed to study many different levels and types of drift, studying four different types of drift (shear, scale, amplitude, phase) and several levels/amounts of each of these types of drift, within two different problem domains (Omniglot, sinusoid regression). In every single type and level of drift that we experimented with, we observed the same result -- that gradient-descent generalized better than recurrent networks. \nWith regard to the second claim on resilience to overfitting, this claim is in the context of the experiments with additional gradient steps and is not referring to out-of-domain tasks. The claim is supported by the results in our experiments.", "Thank you for the constructive feedback. All of the concerns raised in the review have been addressed in the revised version of the paper.\n\nPlease see our main response in a comment above that addresses the primary concerns among all reviewers. We reply to your specific comments here.\n\n> “...This is because the universal approximation result relies on a particular architecture that is not necessarily the design one would always use in MAML. ... For instance, the authors do not even use the architecture proposed in their proof in their experiments...”\nAs mentioned above, we would like to clarify that the result holds for a generic deep network with ReLU nonlinearities that is used in prior papers that use MAML [Finn et al. ‘17ab, Reed et al. ‘17] and in the experiments in Section 7 of this paper. We revised Section 4 and Appendix D of the paper to make this more clear and explicitly show how this is the case.\n\n> “Page 4: \"\\hat{f}(\\cdot; \\theta') approximates f_{\\text{target}}(x, y, x^*) up to arbitrary position\". There seems to be an abuse of notation here as the first expression is a function and the second expression is a value.”\n> “Page 4: \"\\prod_{i=1}^N (W_i - \\alpha \\nabla_{W_i})\". There seems to be a typo here: \\nabla_{W_i} should be \\nabla_{W_i} L.”\nThank you for catching these two typos. We fixed both.\n\n> Page 4: \"to show universality, we will construct a setting of the weight matrices that enables independent control of the information flow...\". How does this differ from the classical UFA proofs? 
The relative technical merit of this paper would be more clear if this is properly discussed.\nWe added text in the latter part of section 3 to clarify the relationship to the UFA theorem: “It is clear how $f_\\text{MAML}$ can approximate any function on $x^\\star$, as per the UFA theorem; however, it is not obvious if $f_\\text{MAML}$ can represent any function of the set of input, output pairs in $\\dataset_\\task$, since the UFA theorem does not consider the gradient operator.”\nOur proof uses the UFA proof as a subroutine, and is otherwise completely distinct.\n\n> “These questions seem unrelated to the universal learning algorithm approximator result that constitutes the main part of the paper. If you're going to study these question empirically, why didn't you also try to investigate them theoretically (e.g. sample complexity and convergence of MAML)? A systematic and comprehensive analysis of these questions from both a theoretical and empirical perspective would have constituted a compelling paper on its own.”\nYes, these two questions would be very interesting to analyze theoretically. We leave such theoretical questions to future work. With regard to the connection between these experiments and the theory, please see our comment above to all of the reviewers -- we added another experiment in Section 7.2 which directly follows up on the theory, studying the depth necessary to meta-learn a distribution of tasks compared to the depth needed for standard learning. We also added more discussion connecting the theory and the existing experiments.\n\n> “What are the architectures and hyperparameters used in the experiments, and how sensitive are the meta-learning algorithms to their choice?”\nWe outlined most of the experimental details in the main text and in the Appendix. We added some additional details that we had missed, in Sections 7.1 and Appendix G.\nOmniglot:\nWe use a standardized convolutional encoder architecture in the Omniglot domain (4 conv layers each with 64 3x3 filters, stride 2, ReLUs, and batch norm, followed by a linear layer). All methods used the Adam optimizer with default hyperparameters. Other hyperparameter choices were specific to the algorithm and can be found in the respective papers.\nSinusoid:\nWith MAML, we used a simple fully-connected network with 2 hidden layers of width 100 and ReLU nonlinearities, and the suggested hyperparameters in the MAML codebase (Adam optimizer, alpha=0.001, 5 gradient steps). On the sinusoid task with TCML, we used an architecture of 2x{ 4 dilated convolution layers with 16 channels, 2x1 kernels, and dilation size of 1,2,4,8 respectively; then an attention block with key/value dimensionality of 8} followed by a 1x1 conv. TCML used the Adam optimizer with default hyperparameters.\nWe have not found any of the algorithms to be particularly sensitive to the architecture or hyperparameters. The hyperparameters provided in each paper’s codebases worked well.", "Please see our main response in a comment above that addresses the primary concerns among all reviewers. We reply to your specific comments here.\n\n>\"the notation $\\mathcal{L}$ is inconsistent. It has different inputs at each location\"\nThank you for pointing this out. We have modified the paper in Sections 2.2, 3, and 4 to use two different symbols and use each of these symbols in a consistent manner.\n\n>\"-the bottom of page 5, \"we then define\"?\"\nThe lemma previously appeared on the following page, after “we then define”. 
Now, it appears on the same page.\n\n> \"I couldn't understand the sentence \"can approximate any continuous function of (x,y,x^*) on compact subsets of R^{dim(y)}\" in Lemma 4.1\". \"\nWe added a footnote to clarify that this assumption is inherited from the UFA theorem.\n\n> the paragraph before Section 4. \"The first goal of this paper is to show that f_{MAML} is a universal function approximation of (D_{\\mathcal{T}},x^*)\"? A function can only approximate the same type function.\nWe modified to text at the end of Section 3 to make it clear that f_{MAML} is the same type of function.", "Please see our main response in a comment above that addresses the primary concerns among all reviewers. We reply to your specific comments here.\n\n> “The theoretical and empirical parts seem quite disconnected.”\nAs mentioned in our main response above, we added a new experiment in Section 7.2 that connects to the theory. The theory suggests that depth is important for an expressive meta-learner compared to standard neural network learner, for which a single hidden layer should theoretically suffice. The results in our new experimental analysis support our theoretical finding that more depth is needed for MAML than for representing individual tasks. We also added additional discussion to clarify and motivate the existing experiments of inductive bias.", "We thank the reviewers for their constructive feedback!\n\nWe would first like to clarify that the main theoretical result holds for a generic deep network with ReLU nonlinearities, an architecture which is standard in practice. We have revised Section 4 and Appendix D in the paper to clarify and explicitly show this. As mentioned by R1, this theoretical result is a “significant contribution to the theoretical understanding of meta-learning”.\n\nSecond, to address the reviewers concerns about a disconnect between the theory and experiments, we did two things:\n1) We added a new experiment in Section 7.2 that directly follows up on the theoretical result, empirically comparing the depth required for meta-learning to the depth required for representing the individual tasks being meta-learned. The empirical results in this section support the theoretical result.\n2) We clarified in Section 7 the importance of the existing experiments, which is as follows: the theory shows that MAML is just as expressive as black-box (e.g. RNN-based) meta-learners, but this does not, by itself, indicate why we might prefer one method over the other and in which cases we should prefer one over the other. The experiments illustrate how MAML can improve over black-box meta-learners when extrapolating to out-of-distribution tasks.\n\nWe respond to individual comments in direct replies to the reviewers comments. Given the low confidence scores, we hope that the reviewers will follow up on our response and adjust their reviews based on our response if things have become more clear." ]
[ 6, 6, 7, -1, -1, -1, -1, -1, -1 ]
[ 3, 1, 1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_HyjC5yWCW", "iclr_2018_HyjC5yWCW", "iclr_2018_HyjC5yWCW", "iclr_2018_HyjC5yWCW", "HyUeuXDfM", "ByJP4Htez", "SyTFKLYgf", "S1CSbaKez", "iclr_2018_HyjC5yWCW" ]
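The reviews and responses in the record above all revolve around the MAML-style update: adapt the parameters with one or more inner gradient steps on a task's training set, then meta-update the shared initialisation so that the adapted parameters do well on that task's test set. A minimal sketch of this two-level loop on a toy linear-regression task distribution follows; it uses the first-order approximation (the meta-gradient is evaluated at the adapted parameters rather than differentiated through the inner step), it is not code from the paper, and all names and values are illustrative.

```python
# Hypothetical sketch: the inner/outer structure of gradient-based meta-learning
# (MAML-style) with a linear model and a first-order meta-gradient.
import numpy as np

rng = np.random.default_rng(1)
alpha, beta = 0.1, 0.01     # inner-loop and outer-loop step sizes (illustrative values)
theta = np.array([3.0])     # meta-learned initialisation of the single weight

def loss_and_grad(theta, X, y):
    """Mean-squared error of the linear model X @ theta and its gradient w.r.t. theta."""
    err = X @ theta - y
    return float(np.mean(err ** 2)), 2.0 * (X.T @ err) / len(y)

def sample_task():
    """A task is a 1-D linear regression problem with its own slope."""
    slope = rng.uniform(0.5, 2.0)
    def make_set(n):
        X = rng.normal(size=(n, 1))
        return X, slope * X[:, 0]
    return make_set

for _ in range(1000):
    meta_grad = np.zeros_like(theta)
    for _ in range(4):                                       # meta-batch of tasks
        make_set = sample_task()
        X_tr, y_tr = make_set(10)                            # per-task training set
        X_te, y_te = make_set(10)                            # per-task test set
        _, g_tr = loss_and_grad(theta, X_tr, y_tr)
        theta_adapted = theta - alpha * g_tr                 # inner adaptation step
        _, g_te = loss_and_grad(theta_adapted, X_te, y_te)   # loss after adaptation
        meta_grad += g_te                                    # first-order meta-gradient
    theta = theta - beta * (meta_grad / 4.0)                 # outer (meta) update

print("meta-learned initialisation:", theta)
```

The full MAML meta-gradient additionally differentiates through the inner step, which this first-order variant drops; the universality discussion in the reviews concerns what functions of the support set and query input the adapt-then-predict pipeline can represent, not this particular approximation.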
iclr_2018_S1ANxQW0b
Maximum a Posteriori Policy Optimisation
We introduce a new algorithm for reinforcement learning called Maximum a-posteriori Policy Optimisation (MPO) based on coordinate ascent on a relative-entropy objective. We show that several existing methods can directly be related to our derivation. We develop two off-policy algorithms and demonstrate that they are competitive with the state-of-the-art in deep reinforcement learning. In particular, for continuous control, our method outperforms existing methods with respect to sample efficiency, premature convergence and robustness to hyperparameter settings.
accepted-poster-papers
The main idea of policy-as-inference is not new, but it seems to be the first application of this idea to deep RL, and is somewhat well motivated. The computational details get a bit hairy, but the good experimental results and the inclusion of ablation studies push this above the bar.
test
[ "HkHiimLEG", "rypb6tngM", "H1y3N2alf", "Hy4_ANE-f", "HyixP8aQz", "S1ymUU6Xz", "HJEuS8amM", "HkJ6aF2xf", "S1DYhi9gz", "By6E3iqxz", "HkX4eXIC-", "Hkk3DISC-" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "public", "public" ]
[ "We have updated the paper to address the concerns raised by the reviewers.\nIn particular we have included:\n - A detailed theoretical analysis of the MPO framework\n - An updated methods section that has a simpler derivation of the algorithm", "The paper presents a new algorithm for inference-based reinforcement learning for deep RL. The algorithm decomposes the policy update in two steps, an E and an M-step. In the E-step, the algorithm estimates a variational distribution q which is subsequentially used for the M-step to obtain a new policy. Two versions of the algorithm are presented, using a parametric or a non-parametric (sample-based) distribution for q. The algorithm is used in combination with the retrace algorithm to estimate the q-function, which is also needed in the policy update.\n\nThis is a well written paper presenting an interesting algorithm. The algorithm is similar to other inference-based RL algorithm, but is the first application of inference based RL to deep reinforcement learning. The results look very promising and define a new state of the art or deep reinforcement learning in continuous control, which is a very active topic right now. Hence, I think the paper should be accepted. \n\n\nI do have a few comments / corrections / questions about the paper:\n\n- There are several approaches that already use the a combination of the KL-constraint with reverse KL on a non-parametric distribution and subsequently an M-projection to obtain again a parametric distribution, see HiREPS, non-parametric REPS [Hoof2017, JMLR] or AC-REPS [Wirth2016, AAAI]. These algorithms do not use the inference-based view but the trust region justification. As in the non-parametric case, the asymptotic performance guarantees from the EM framework are gone, why is it beneficial to formulate it with EM instead of directly with a trust region of the expected reward?\n\n- It is not clear to me whether the algorithm really optimizes the original maximum a posteriori objective defined in Equation 1. First, alpha changes every iteration of the algorithm while the objective assumes that alpha is constant. This means that we change the objective all the time which is theoretically a bit weird. Moreover, the presented algorithm also changes the prior all the time (in order to introduce the 2nd trust region) in the M-step. Again, this changes the objective, so it is unclear to me what exactly is maximised in the end. Would it not be cleaner to start with the average reward objective (no prior or alpha) and then introduce both trust regions just out of the motivation that we need trust regions in policy search? Then the objective is clearly defined. \n\n- I did not get whether the additional \"one-step KL regularisation\" is obtained from the lower bound or just added as additional regularisation? Could you explain?\n\n- The algorithm has now 2 KL constraints, for E and M step. Is the epsilon for both the same or can we achieve better performance by using different epsilons?\n\n- I think the following experiments would be very informative:\n\n - MPO without trust region in M-step\n \n - MPO without retrace algorithm for getting the Q-value\n\n - test different epsilons for E and M step\n\n\n", "This is an interesting policy-as-inference approach, presented in a reasonably clear and well-motivated way. I have a couple questions which somewhat echo questions of other commenters here. Unfortunately, I am not sufficiently familiar with the relevant recent policy learning literature to judge novelty. 
However, as best I am aware the empirical results presented here seem quite impressive for off-policy learning.\n\n- When is it possible to normalize the non-parametric q(a|s) in equation (6)? It seems to me this will be challenging in most any situation where the action space is continuous. Is this guaranteed to be Gaussian? If so, I don’t understand why.\n\n– In equations (5) and (10), a KL divergence regularizer is replaced by a “hard” constraint. However, for optimization purposes, in C.3 the hard constraint is then replaced by a soft constraint (with Lagrange multipliers), which depend on values of epsilon. Are these values of epsilon easy to pick in practice? If so, why are they easier to pick than e.g. the lambda value in eq (10)?\n\n", "This paper studies new off-policy policy optimization algorithm using relative entropy objective and use EM algorithm to solve it. The general idea is not new, aka, formulating the MDP problem as a probabilistic inference problem. \n\nThere are some technical questions:\n1. For parametric EM case, there is asymptotic convergence guarantee to local optima case; However, for nonparametric EM case, there is no guarantee for that. This is the biggest concern I have for the theoretical justification of the paper.\n\n2. In section 4, it is said that Retrace algorithm from Munos et al. (2016) is used for policy evaluation. This is not true. The Retrace algorithm, is per se, a value iteration algorithm. I think the author could say using the policy evaluation version of Retrace, or use the truncated importance weights technique as used in Retrace algorithm, which is more accurate.\n\nBesides, a minor point: Retrace algorithm is not off-policy stable with function approximation, as shown in several recent papers, such as \n“Convergent Tree-Backup and Retrace with Function Approximation”. But this is a minor point if the author doesn’t emphasize too much about off-policy stability.\n\n3. The shifting between the unconstrained multiplier formulation in Eq.9 to the constrained optimization formulation in Eq.10 should be clarified. Usually, an in-depth analysis between the choice of \\lambda in multiplier formulation and the \\epsilon in the constraint should be discussed, which is necessary for further theoretical analysis. \n\n4. The experimental conclusions are conducted without sound evidence. For example, the author claims the method to be 'highly data efficient' compared with existing approaches, however, there is no strong evidence supporting this claim. \n\n\nOverall, although the motivation of this paper is interesting, I think there is still a lot of details to improve. ", "We thank you for your questions and insightful comments. \n\n> There are several approaches that already use the a combination of the KL-constraint with reverse KL on a non-\nparametric distribution and subsequently an M-projection to obtain again a parametric distribution, see HiREPS, non-parametric REPS [Hoof2017, JMLR] or AC-REPS [Wirth2016, AAAI]. \n> These algorithms do not use the inference-based view but the trust region justification. As in the non-parametric case, the asymptotic performance guarantees from the EM framework are gone, why is it beneficial to formulate it with EM instead of directly with a trust region of the expected reward?\n\nThank you for pointing out the additional related work. We will include it in the paper. Regarding the EM vs. 
trust-region question: The benefit of deriving the algorithm from the perspective of an EM-like coordinate ascent is that it motivates and provides a convenient means for theoretical analysis of the two-step procedure used in our approach. See the added a theoretical analysis that was added to the appendix of the paper.\n\n> It is not clear to me whether the algorithm really optimizes the original maximum a posteriori objective defined in Equation 1. \n> First, alpha changes every iteration of the algorithm while the objective assumes that alpha is constant. \n> This means that we change the objective all the time which is theoretically a bit weird. \n> Moreover, the presented algorithm also changes the prior all the time (in order to introduce the 2nd trust region) in the M-step. \n> Again, this changes the objective, so it is unclear to me what exactly is maximised in the end. \n> Would it not be cleaner to start with the average reward objective (no prior or alpha) and then introduce both trust regions\n> just out of the motivation that we need trust regions in policy search? Then the objective is clearly defined. \n\nThe reviewers point is well taken. While we think the unconstrained (soft-regularized) is instructive and useful for theoretical analysis the hard-constrained version can indeed be understood as proposed by the reviewer and equally provides important insights. We will clarify this in the paper and also include an experimental comparison between the soft and hard-regularized cases.\nRegarding your two concerns: For our theoretical guarantee (that we have now derived in the appendix) to hold we have to fix alpha. However, in practice it changes slowly during optimization and converges to a stable value. One can indeed think of the second trust-region as a simple regularizer that prevents overfitting/too large changes in the (sample-based) M-step (similar small changes in the policy are also required by our proof).\n\n- Regarding the additional experiments you asked for:\n\nWe agree and have carried out additional experiments that will be included in the final version, preliminary results are as follows:\n\n1) MPO without trust region in M-step:\nAlso works well for low-dimensional problems but is less robust for high-dimensional problems such as the humanoid.\n\n2) MPO without retrace algorithm for getting the Q-value\nIs significantly slower to reach the same level of performance in the majority of the control suite tasks (retrace + MPO is never worse in any of the control suite tasks).\n\n3) test different epsilons for E and M step\nThe algorithm seems to be robust to settings of epsilon - as long as it is set roughly to the right order of magnitude (10^-3 to 10^-2 for the E-step, 10^-4 to 10^-1 for the M-step). A very small epsilon will, of course, slow down convergence.", "We thank the reviewer for comments and thoughtful questions. We reply to your main concerns in turn below.\n\n> When is it possible to normalize the non-parametric q(a|s) in equation (6)? It seems to me this will be challenging in most any situation where the action space is continuous. \n> Is this guaranteed to be Gaussian? If so, I don’t understand why.\n\nPlease see appendix, section C.2. In the parametric case the solution for q(a|s) is trivially normalized when we impose a parametric form that allows analytic evaluation of the normalization function (such as a Gaussian distribution). . 
\nFor the non-parametric case note that the normalizer is given by \nZ(s) = \\int \\pi_old(a|s) exp( Q(s,a)/eta) da,\ni.e. it is an expectation with respect to our old policy for which we can obtain a MC estimate: \\hat{Z}(s) = 1/N \\sum_i exp(Q(s,a_i)/eta) with a_i \\sim \\pi_old( \\cdot | s).\nThus we can empirically normalize the density for those state-action samples that we use to estimate pi_new in the M-step.\n\n> In equations (5) and (10), a KL divergence regularizer is replaced by a “hard” constraint. \n> However, for optimization purposes, in C.3 the hard constraint is then replaced by a soft constraint (with Lagrange multipliers), which depend on values of epsilon. \n> Are these values of epsilon easy to pick in practice? If so, why are they easier to pick than e.g. the lambda value in eq (10)?\n\nThank you for pointing out that the reasoning behind this was not entirely easy to follow. We will improve the presentation in the paper. Indeed we found that choosing epsilon can be easier than choosing a multiplier for the KL regularizer. This is due to the fact that the scale of the rewards is unknown a-priori and hence the multiplier that trades of maximizing expected reward and minimizing KL can be expected to change for different RL environments. In contrast to this, when we put a hard constraint on the KL we can explicitly force the policy to stay \"epsilon-close\" to the last solution - independent of the reward scale. This allows for an easier transfer of hyperparameters across tasks.", "We appreciate the detailed comments and questions regarding the connection between our method and EM methods. We have addressed your main concern with an additional theoretical analysis of the algorithm, strengthening the paper.\n\n> 1. For parametric EM case, there is asymptotic convergence guarantee to local optima case; However, for nonparametric \n> EM case, there is no guarantee for that. This is the biggest concern I have for the theoretical justification of the paper.\n\nWe have derived a proof that gives a monotonic improvement guarantee for the nonparametric variant of the algorithm under certain circumstances. We will include this proof in the paper. To summarize: Assuming Q can be represented and estimated, the \"partial\" E-step in combination with an appropriate gradient-based M-step leads to an improvement of the KL regularized objective and guarantees monotonic improvement of the overall procedure under certain circumstances. See also our response to the Anonymous question below.\n\n> 2. In section 4, it is said that Retrace algorithm from Munos et al. (2016) is used for policy evaluation. This is not true. \n> The Retrace algorithm, is per se, a value iteration algorithm. I think the author could say using the policy evaluation version of Retrace, \n> or use the truncated importance weights technique as used in Retrace algorithm, which is more accurate.\n\nWe will clarify that we are using the Retrace operator for policy evaluation only (This use case was indeed also analyzed in Munos et al. (2016)).\n\n> Besides, a minor point: Retrace algorithm is not off-policy stable with function approximation, as shown in several recent papers, such as \n> “Convergent Tree-Backup and Retrace with Function Approximation”. But this is a minor point if the author doesn’t emphasize too much about off-policy stability.\n\nWe agree that off-policy stability with function approximation is an important open problem that deserves additional attention but not one specific to this method (i.e. 
any existing DeepRL algorithm shares these concerns). We will add a short note.\n\n> 3. The shifting between the unconstrained multiplier formulation in Eq.9 to the constrained optimization formulation in Eq.10 should be clarified. \n> Usually, an in-depth analysis between the choice of \\lambda in multiplier formulation and the \\epsilon in the constraint should be discussed, which is necessary for further theoretical analysis. \n\nWe now have a detailed analysis of the unconstrained multiplier formulation (see comment above) of our algorithm. In practice we found that implementing updates according to both hard-constraints and using a fixed regularizer worked well for individual domains. Both \\lambda and \\epsilon can be found via a small hyperparameter search in this case. When applying the algorithm to many different domains (with widely different reward scales) with the same set of hyperparameters we found it easier to use the hard-constrained version; which is why we placed a focus on it. We will include these experimental results in an updated version of the paper. We believe these observations are in-line with research on hard-constrained/KL-regularized on-policy learning algorithms such as PPO/TRPO (for which explicit connections between the two settings are also ). \n\n> 4. The experimental conclusions are conducted without sound evidence. For example, the author claims the method to be 'highly data efficient' compared with existing approaches, however, there is no strong evidence supporting this claim. \n\nWe believe that the large set of experiments we conducted in the experimental section gives evidence for this. Figure 4 e.g. clearly shows the improved data-efficiency MPO gives over our implementations of state-of-the-art RL algorithms for both on-policy (PPO) and off-policy learning (DDPG, policy gradient + Retrace). Further, when looking at the results for the parkour domain we observe an order of magnitude improvement over the reference experiment. We have started additional experiments for parkour with a full humanoid body - leading to similar speedups over PPO - which will be included in the final version and further solidify the claim on a more difficult benchmark.", "\nI do have a few comments / corrections / questions about the paper:\n\n- There are several approaches that already use the a combination of the KL-constraint with reverse KL on a non-parametric distribution and subsequently an M-projection to obtain again a parametric distribution, see HiREPS, non-parametric REPS [Hoof2017, JMLR] or AC-REPS [Wirth2016, AAAI]. These algorithms do not use the inference-based view but the trust region justification. As in the non-parametric case, the asymptotic performance guarantees from the EM framework are gone, why is it beneficial to formulate it with EM instead of directly with a trust region of the expected reward?\n\n- It is not clear to me whether the algorithm really optimizes the original maximum a posteriori objective defined in Equation 1. First, alpha changes every iteration of the algorithm while the objective assumes that alpha is constant. This means that we change the objective all the time which is theoretically a bit weird. Moreover, the presented algorithm also changes the prior all the time (in order to introduce the 2nd trust region) in the M-step. Again, this changes the objective, so it is unclear to me what exactly is maximised in the end. 
Would it not be cleaner to start with the average reward objective (no prior or alpha) and then introduce both trust regions just out of the motivation that we need trust regions in policy search? Then the objective is clearly defined. \n\n- I did not get whether the additional \"one-step KL regularisation\" is obtained from the lower bound or just added as additional regularisation? Could you explain?\n\n- The algorithm has now 2 KL constraints, for E and M step. Is the epsilon for both the same or can we achieve better performance by using different epsilons?\n\n- I think the following experiments would be very informative:\n\n - MPO without trust region in M-step\n \n - MPO without retrace algorithm for getting the Q-value\n\n - test different epsilons for E and M step\n\n", "Thank you for carefully reading of the paper and uncovering a few minor mistakes.\n\n> Firstly, I think it would be helpfull to formally define what $$q(\\rho)$$ is. My current assumption is: $$q(\\rho) = p(s_0) \\prod_1^\\infty p(s_{t+1}|a_t, s_t) q(a_t|s_t)$$.\nYour assumption is correct. q(\\rho) is analogous to p(\\rho) (as described in the background section on MDPs). We will add this definition. \n\n>1. I think at the end of the line you should have $$+ \\log p(\\theta)$$ rather than $$+ p(\\theta)$$ (I believe this is a typo)\nCorrect, this is indeed a typo and will be fixed in the next revision of the paper.\n\n> 2. In the definition of the log-probabilities, the $$\\alpha$$ parameter appears only in the definition of 'p(O=1|\\rho)'. The way it appears is as a denominator in the log-probability. In line 4 of equation (1) it has suddenly appeared as a multiplier in front of the log-densities of $$\\pi(a|s_t)$$ and $$q(a|s_t)$$. This is possible if we factor out the $$\\alpha^{-1}$$ from the sum of the rewards, but then on that line, there should be a prefactor of $$\\alpha^{-1}$$ in front of the expectation over 'q' which seems missing. (I believe this is a typo as well).\n\nIn this step we indeed just multiplied with the (non-zero) \\alpha. We presume you meant that alpha is then, however, missing in front of the prior p(\\theta) here. You are correct and this will be also fixed in the next revision.\n\n> 3. In the resulting expectation, it is a bit unclear how did the discount factors $$\\gamma^t$$ have appeared as well as in front of the rewards also in front of the KL divergences? From the context provided I really failed to be able to account for this, and given that for the rest of the paper this form has been used more than once I was wondering if you could provide some clarification on the derivation of the equation as it is not obvious to at least some of the common readers of the paper.\n\nThank you for pointing out this inconsistency which has arisen due to some last minute changes in notation that we introduced when we unified the notation in the paper - switching from presenting the finite-horizon, undiscounted, setting to using the infinite-horizon formulation. As pointed out by previous work (e.g. Rawlik et al.) there is a direct correspondence between learning / inference in an appropriately constructed graphical model (as suggested by the first line of Eq. 1) and the regularized control objective in the finite horizon, undiscounted case. The regularized RL objective still exists in the discounted, infinite horizon case (e.g. Rawlik et al. 
or see [1] for another construction), but an equivalent graphical model is harder to construct (and is not of the form currently presented in the paper; e.g. see [1]). We will fix this and clarify the relation in the revision\n\n[1] Probabilistic Inference for Solving Discrete and Continuous State Markov Decision Processes, Marc Toussaint, Amos Storkey, ICML 2004", "Thank you for your thorough read of the paper. \n\n> The derivation of \"one-step KL regularised objective\" is unclear to me and this seems to be related to a partial E-step. \n\nWe will clarify the relationship between the one-step objective and Eq. 1 in more detail in a revised version of the paper. We will also include a proof that the the specific \"partial\" update we use in the E-step leads to an improvement in Eq. (1) and guarantees monotonic improvement of the overall procedure.\n\nIn short, the relation between objective (1) and formula (4) is as follows:\ninstead of optimizing objective (1) directly in the E-step (which would entail running soft-Q-learning to convergence - e.g. Q-learning with additional KL terms of subsequent time-steps in a trajectory added to the rewards) we start from the \"unregularized\" Q-function (Eq. (3)) and expand it via the \"regularized\" Bellman operator T Q(s,a) = E_a[Q(s,a)] + \\alpha KL(q || \\pi). We thus only consider the KL at a given state s in the E-step and not the \"full\" objective from (1). Nonetheless, as mentioned above we have now prepared a proof that this still leads to an improvement in (1).\n\n> (2) As far as I know, the previous works on variational RL maximize the marginal log-likelihood p(O=1|\\theta) (Toussaint (2009) and Rawlik (2012)), whereas you maximizes the unnormalized posterior p(O=1, \\theta) with the prior assumption on $\\theta$. I wonder if the prior assumption enhances the performance. \n\nCorrect. The prior p(\\theta) allows us to add regularization to the M-step of our procedure (enforcing a trust-region on the policy). We found this to be important when dealing with hihg-dimensional systems like the humanoid where the M-step could otherwise overfit (as the integral over action is only evaluated using 30 samples in our experiments).\n", "(1) Clarification of Equation 4\n\nThe derivation of \"one-step KL regularised objective\" is unclear to me and this seems to be related to a partial E-step. \n\nWould you explain this part in more detail?\n\n(2) As far as I know, the previous works on variational RL maximize the marginal log-likelihood p(O=1|\\theta) (Toussaint (2009) and Rawlik (2012)), whereas you maximizes the unnormalized posterior p(O=1, \\theta) with the prior assumption on $\\theta$. \nI wonder if the prior assumption enhances the performance. ", "These might be very obvious questions, but I failed to derive the last line (line 4) in equation (1) in the paper. \n\nFirstly, I think it would be helpfull to formally define what $$q(\\rho)$$ is. My current assumption is:\n$$q(\\rho) = p(s_0) \\prod_1^\\infty p(s_{t+1}|a_t, s_t) q(a_t|s_t)$$\nwhere the 'p' distributions are taken to be equal to the real environmental state transitions.\n\nNow, there are a few problems that I encountered when trying to derive equation (1):\n\n1. I think at the end of the line you should have $$+ \\log p(\\theta)$$ rather than $$+ p(\\theta)$$ (I believe this is a typo)\n\n2. In the definition of the log-probabilities, the $$\\alpha$$ parameter appears only in the definition of 'p(O=1|\\rho)'. The way it appears is as a denominator in the log-probability. 
In line 4 of equation (1) it has suddenly appeared as a multiplier in front of the log-densities of $$\\pi(a|s_t)$$ and $$q(a|s_t)$$. This is possible if we factor out the $$\\alpha^{-1}$$ from the sum of the rewards, but then on that line, there should be a prefactor of $$\\alpha^{-1}$$ in front of the expectation over 'q' which seems missing. (I believe this is a typo as well).\n\n3. In the resulting expectation, it is a bit unclear how did the discount factors $$\\gamma^t$$ have appeared as well as in front of the rewards also in front of the KL divergences? From the context provided I really failed to be able to account for this, and given that for the rest of the paper this form has been used more than once I was wondering if you could provide some clarification on the derivation of the equation as it is not obvious to at least some of the common readers of the paper." ]
[ -1, 7, 6, 5, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 5, 1, 4, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_S1ANxQW0b", "iclr_2018_S1ANxQW0b", "iclr_2018_S1ANxQW0b", "iclr_2018_S1ANxQW0b", "rypb6tngM", "H1y3N2alf", "Hy4_ANE-f", "iclr_2018_S1ANxQW0b", "Hkk3DISC-", "HkX4eXIC-", "iclr_2018_S1ANxQW0b", "iclr_2018_S1ANxQW0b" ]
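The author responses in the record above spell out the non-parametric E-step: q(a|s) is proportional to pi_old(a|s) exp(Q(s,a)/eta), and the normaliser Z(s) is estimated with actions sampled from pi_old, so the per-sample weights reduce to a softmax over the Q-values. The snippet below is a small sketch of exactly that weighting step, with a hypothetical function name; it is not the authors' implementation.

```python
# Hypothetical sketch: self-normalised E-step weights for actions a_1..a_N drawn
# from pi_old(.|s). Because the samples come from pi_old, its density and the
# Monte-Carlo estimate of Z(s) cancel, leaving a softmax over Q(s, a_i) / eta.
import numpy as np

def e_step_weights(q_values, eta):
    """Weights with which the new parametric policy can be fit by weighted maximum likelihood."""
    z = np.asarray(q_values, dtype=float) / eta
    z -= z.max()          # shift for numerical stability; does not change the softmax
    w = np.exp(z)
    return w / w.sum()

# Example: five sampled actions for one state. A small eta concentrates the
# weights on the highest-value action; a large eta keeps them close to uniform.
q_vals = [1.0, 0.5, 2.0, -0.3, 1.2]
print(e_step_weights(q_vals, eta=0.5))
print(e_step_weights(q_vals, eta=5.0))
```

As described in the responses, the M-step then fits the new policy to the sampled actions under these weights, subject to an additional KL trust region, and the temperature eta is tied to the E-step KL constraint epsilon rather than hand-tuned; neither of those steps is shown here.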
iclr_2018_SyX0IeWAW
Meta Learning Shared Hierarchies
We develop a metalearning approach for learning hierarchically structured policies, improving sample efficiency on unseen tasks through the use of shared primitives—policies that are executed for large numbers of timesteps. Specifically, a set of primitives are shared within a distribution of tasks, and are switched between by task-specific policies. We provide a concrete metric for measuring the strength of such hierarchies, leading to an optimization problem for quickly reaching high reward on unseen tasks. We then present an algorithm to solve this problem end-to-end through the use of any off-the-shelf reinforcement learning method, by repeatedly sampling new tasks and resetting task-specific policies. We successfully discover meaningful motor primitives for the directional movement of four-legged robots, solely by interacting with distributions of mazes. We also demonstrate the transferability of primitives to solve long-timescale sparse-reward obstacle courses, and we enable 3D humanoid robots to robustly walk and crawl with the same policy.
accepted-poster-papers
This paper presents a fairly straightforward algorithm for learning a set of sub-controllers that can be re-used between tasks. The development of these concepts in a relatively clear way is a nice contribution. However, the real problem is how niche the setup is. Still, it's over the bar in general.
test
[ "HyJuHTteM", "BJN4gTtlM", "r1RR1Vclf", "rJMH0whmz", "ByaUAxmMz", "HkdQtbZfM", "S1BykK1Mz", "Hy28ObpbM", "ByaWalTbf", "BJBohe6bz", "r1nH2xp-f", "rJG0slabf", "S1YphBy-f", "ByaasHJWM", "rkCZR_3xz", "r13qQpKxM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "public", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "public", "official_reviewer" ]
[ "Please see my detailed comments in the \"official comment\"\n\nThe extensive revisions addressed most of my concerns\n\nQuality\n======\nThe idea is interesting, the theory is hand-wavy at best (ADDRESSED but still a bit vague), the experiments show that it works but don't evaluate many interesting/relevant aspects (ADDRESSED). It is also unclear how much tuning is involved (ADDRESSED).\n\nClarity\n=====\nThe paper reads OK. The general idea is clear but the algorithm is only provided in vague text form (and actually changing from sequential to asynchronous without any justification why this should work) (ADDRESSED) leaving many details up the the reader's best guess (ADDRESSED).\n\nOriginality\n=========\nThe idea looks original.\n\nSignificance\n==========\nIf it works as advertised this approach would mean a drastic speedup on previously unseen task from the same distribution.\n\nPros and Cons\n============\n+ interesting idea\n- we do everything asynchronously and in parallel and it magically works (ADDRESSED)\n- many open questions / missing details (ADDRESSED)", "This paper considers the reinforcement learning problem setup in which an agent must solve not one, but a set of tasks in some domain, in which the state space and action space are fixed. The authors consider the problem of learning a useful set of ‘sub policies’ that can be shared between tasks so as to jump start learning on new tasks drawn from the task distribution.\n\nI found the paper to be generally well written and the key ideas easy to understand on first pass. The authors should be commended for this. Aside from a few minor grammatical issues (e.g. missing articles here and there), the writing cannot be too strongly faulted.\n\nThe problem setup is of general interest to the community. Metalearning in the multitask setup seems to be gaining attention and is certainly a necessary step towards building rapidly adaptable agents.\n\nWhile the concepts were clearly introduced, I think the authors need to make, much more strongly, the case that the method is actually valuable. In that vein, I would have liked to see more work done on elucidating how this method works ‘under the hood’. For example, it is not at all clear how the number of sub policies affects performance (one would imagine that there is a clear trade off), nor how this number should be chosen. It seems obvious that this choice would also affect the subtle dynamics between holding the master policy constant while updating the sub policies and vice versa. While the authors briefly touch on some of these issues in the rationale section, I found these arguments largely unsubstantiated. Moreover, this leads to a number of unjustified hyper-parameters in the method which I suspect would affect the training catastrophically without significant fine-tuning.\n\nThere are also obvious avenues to be followed to check/bolster the intuitions behind the method. By way of example, my sense is that the procedure described in the paper uncovers a set of sub policies that form a `good’ cover for the task space - if so simply plotting out what they policies look like (or better yet how they adapt in time) would be very insightful (the rooms domain is perhaps a good candidate for this).\n\nWhile the key ideas are clearly articulated the practical value of the procedure is insufficiently motivated. The paper would benefit hugely from additional analysis.", "This paper proposes a novel hierarchical reinforcement learning method for a fairly particular setting. 
The setting is one where the agent must solve some task for many episodes in a sequence, after which the task will change and the process repeats. The proposed solution method splits the agent into two components, a master policy which is reset to random initial weights for each new task, and several sub-policies (motor primitives) that are selected between by the master policy every N steps and whose weights are not reset on task switches. The core idea is that the master policy is given a relatively easy learning task of selecting between useful motor primitives and this can be efficiently learned from scratch on each new task, whereas learning the motor primitives occurs slowly over many different tasks. To push this motivation into the learning process, the master policy is updated always but the sub-policies are only updated after an extended warmup period (called the joint-update or training period). The experiments include both small domains (moving to 2D goals and four-rooms) and more complex physics simulations (4-legged ants and humanoids). In both the simple and complex domains, the proposed method (MLSH) is able to robustly achieve good performance.\n\nThis approach to obtaining complex structured behavior appears impressive despite the amount of temporal structure that must be provided to the method (the choice of N, the warmup period, and the joint-update period). Relying on the temporal structure for the hierarchy, and forcing the master policy to be relearned from scratch for each new task may be problematic in general, but this work shows that in some complex settings, a simple temporal decomposition may be sufficient to encourage the development of reusable motor primitives and to also enable quick learning of meta-policies over these motor-primitives. Moreover, the results show that these temporal hierarchies are helpful in these domains, as the corresponding non-hierarchical methods failed on the more challenging tasks.\n\nThe paper could be improved in some places (e.g. unclear aliases of joint-update or training periods, describing how the parameters were chosen, and describing what kinds of sub-policies are learned in these domains when different parameter choices are made).\n", "In the latest paper revision we have added the following fixes/clarifications:\n\n- Added graphs showing a hyperparameter comparison on sub-policy count and warmup-duration for MovementBandits and Ant-Twowalk tasks.\n- Clarified the details of the multi-core training process in 6.1 \"Experimental Setup\".\n- Added reasoning behind the learning rates of theta and phi.\n- Added details and reasoning behind baseline comparisons for \"Sampled Task\" experiments.\n- Changed \"training period\" to \"joint-update period\" for consistency.\n\n", "Hey, the repo has been restructured so it should be easier to install correctly.", "Hi,\n\nInteresting paper and thanks for releasing the code! But can you please make it work and take care of some of the reported issues? Thanks!", "Point taken. We've run another set of hyperparameter experiments on the MovementBandit task. See graph (https://imgur.com/a/D7YMx), which we will add in the next paper revision. (Default is 2 subpolicies, warmup duration of 10).\n\nFor sure, the influence of the warmup ratio depends on the task. However, as a rule of thumb, a long warmup period (or large subpolicy count) will simply result in a longer training time, rather than a downgrade in final performance. 
\n\nIn the MovementBandit parameter comparison, performance is only drastically lowered if the warmup duration is close to zero or the subpolicy count is one. On the other hand, when the warmup duration is 40 or the subpolicy count is 4, the agent still converges to the optimal solution, albeit at a slightly slower pace.", "Thanks for the extensive replies and clarifications!\nIn the new plot it almost looks like the warmup does not have any influence at all. So it might actually be highly task dependent whether it is needed/helpful/detrimental or not...", "We've run experiments comparing the effects of various parameters (sub-policy count, warmup ratio). See (https://imgur.com/a/TLyQv), and our comment \"Analysis on Parameter Choice\" for more details.", "Hey, appreciate the feedback.\n\nTo address your concern about how performance depends on hyperparameters, we ran additional experiments comparing the effects of various parameter adjustments. See the graph (https://imgur.com/a/TLyQv), which we have added in Fig.9 on the current revision. (Default parameters are a sub-policy count of 2, and a warmup duration of 20). As long as a few minimums are met (at least 2 sub-policies), performance is not overly dependent on fine-tuned parameters. The parameters we describe in the paper can be seen as a “baseline minimum” of parameters to reach a strong solution on the various tasks.\n\nRegarding displaying the behavior of sub-policies, we show a decomposition of the three sub-policies discovered in the Maze task in Figure 6: moving up, right, and down. We display how the policies adapt over time in our supplemental videos, linked on the first page (https://sites.google.com/site/mlshsupplementals/, specifically https://www.youtube.com/watch?v=9nvjy9aJi50).", "Hey, thanks for the feedback. We’ve addressed some clarifications in the response to your official comment, titled \"Proposed Changes\". We hope that these ideas clear up misunderstandings, and fill in details that may have been explained unclearly.", "Thanks for the response. We’ll fix the typo of “training period”-> “joint-update period” in the next version. We’ll also clean up the intuition behind parameter choice (see our response “Analysis on Parameter Choice”). ", "Thanks for taking the time to review and give feedback. We’ve addressed the main points and proposed some changes in the next version to clear up explanations and reasoning.\n\n> Algorithm 1 makes it sound like...parallel?\n\nWhile the core algorithm can be run sequentially, we use a multi-core setup in experiments to speed up the process. There may be some confusion in the description -- after the group resets theta, a new task is sampled, and all cores learn on this same new task. Therefore at any given time we are optimizing for 10 tasks in parallel, but these tasks are constantly being re-sampled from the distribution. We will clarify this in the next revision.\n\n> Is the number of sup-policies pre-defined/hard-coded/hand-designed? What happens if you have too many/not enough?\n\nFor simplicity’s sake, we pre-define the number of sub-policies, treating it as a hyperparameter. In the Future Work section, we describe a potential method for condensing multiple sub-policies into a single network, allowing the agent to learn any distribution of sub-policies.\n\nWith a small number of sub-policies, the agent may be less robust to new tasks (as it learns fewer behaviors). With a large number of sub-policies, it takes longer to train agents.\n\n> The argumentation in Sect. 
5 is vague...time.\n\nThe point about not updating phi too much is correct (we address it below). A key point in the Sect.5 argument is that sub-policies should only be trained in conjunction with a strong master policy, which is the rationale behind the warmup period.\n\n> I am not convinced that the above is solved by staggering the tasks in the asynchronous setting...\n\nIn practice, we use a small phi learning rate (0.0003) compared to the theta learning rate (0.01), as defined in the 6.1 Experimental Setup. Our goal here is that small changes in the representation (phi) are negligible in the short-run training of theta, but will build up in the long-run.\n\nWe’ll add in this reasoning behind the learning-rate choices in the next revision.\n\n> Another interesting experiment would be to test how much the system unlearns, e.g., by optimizing for a task, switching to a few other tasks, freezing phi and testing if the first task can still achieve the same performance\n> The plots Fig. 4/7 are a bit unclear. My guess is \"full training\" means learning from scratch as described in Sect. 6.1, \"sampled tasks\" means trying whether the learned sub-policies also work for a previously unseen task. Here again the question: What happens if you freeze phi? How well do phi updated on the new tasks work on the original ones? Related question: Why is there no plot on the combination task (Fig. 7) and full training on Four Rooms?\n\nYou’re correct on the meaning of “full training” and “sampled tasks”. We’ll add a description in the caption to clear things up. \n\nRegarding the freeze phi experiment: When running the “Sampled Task” experiments, only theta is trained, so phi is frozen. If phi was overfitting/unlearning on every new task, the agent would perform poorly on an unseen “Sampled Task”.\n\nThe plot for the combination task is uninteresting since the rewards are so different. The different trials (MLSH Transfer, Shared Policy Transfer, Single Policy) never pass each other in performance.\n\nOn four rooms, we don’t include a full training since the base methods compared (PPO and Actor Critic) have vastly different sample efficiencies. Instead, we just train until both baselines have reached convergence.\n\n> Sect. 6.4 \"series of tasks\" is a bit unclear\n\nThanks -- we’ll clarify this to “series of tasks involving robotic locomotion in the physics domain”\n\n> Sect. 6.4: Why is the ratio of warm-up and training so different compared to the 2D bandits? How much influence does this parameter have on the performance of the approach?\n\nThe physics domain has a more complicated learning task for sub-policies compared to the 2D task, so training is naturally slower. However, master policies have the same learning task in both situations (select a sub-policy). So we give more training updates per warmup in the physics task. While it’s important to have a warmup period (as shown in Fig 4, MLSH performs worse when not including a warmup), the ratio doesn’t need to be precise. It’s always more accurate to have a long warmup period and short training period, but the agent will take longer to train. We’ll add this intuition in the next revision.", "The idea of learning a hierarchy of sub-policies has been explored in past work, many of which we cite and discuss in Section 2:\n\nPierre-Luc Bacon, Jean Harb, and Doina Precup. The option-critic architecture. arXiv preprint arXiv:1609.05140, 2016.\n\nCarlos Florensa, Yan Duan, and Pieter Abbeel. 
Stochastic neural networks for hierarchical reinforcement learning. In International Conference on Learning Representations, 2017.\n\nRichard S Sutton, Doina Precup, , and Satinder Singh. Between mdps and semi-mdps: A framework for temporal abstraction in reinforcement learning. In Artificial intelligence, 1999.\n\nIn contrast to many previous works, our method aims to learn sub-policies automatically, without the need for hand engineering (in the paper you mentioned, they design running and leaping policies). In addition, we focus on the idea of sharing sub-policies over distributions of tasks, rather than on single tasks.", "The work is interesting but it is unclear how novel it is. There exists similar work that learns something very similar to the hierarchy in this paper.\n\nX. B. Peng, G. Berseth, and M. Van de Panne. 2016. Terrain-Adaptive Locomotion Skills Using Deep Reinforcement Learning. ACM Transactions on Graphics (Proc. SIGGRAPH 2016) 35, 5. \n\nHowever, in this previous work the sub-policy is not a neural network but having the sub-policy not be a neural network is not a novel idea.", "The paper proposes to learn sub-policies (or motor primitives, options, etc.) jointly with a higher level policy by optimizing for multiple tasks simultaneously\n\n- Algorithm 1 makes it sound like all the learning happens sequentially, later on we learn that a multi-core setup is used. The details on this remain very vague. As far as I can guess we have 10 groups (with 12 cores each). All cores within a group are assigned the exact same task (shared theta), the parameters phi for the sub-policies is shared between all nodes. The text reads like the parameters theta are forgotten and the learning thereof is restarted with the exact same task. Hence we optimize for exactly 10 tasks in parallel?\n\n- Is the number of sup-policies pre-defined/hard-coded/hand-designed? What happens if you have too many/not enough?\n\n- The argumentation in Sect. 5 is vague and holds only if the tasks are learned sequentially. To me it sounds like you need to ensure that you don't update phi too much, otherwise it might unlearn something useful for the previous tasks. In the 2D moving bandit problem this seems to be achieved by only updating phi for a small amount of time.\n\n- I am not convinced that the above is solved by staggering the tasks in the asynchronous setting. While still in the warm-up phase (i.e,. learning theta) the agents associated to a certain task need to cope with the fact that the phi is changed simultaneously, hence they have to play catch-up with the changing representation while trying to improve their performance.\n\n- Another interesting experiment would be to test how much the system unlearns, e.g., by optimizing for a task, switching to a few other tasks, freezing phi and testing if the first task can still achieve the same performance\n\n- The plots Fig. 4/7 are a bit unclear. My guess is \"full training\" means learning from scratch as described in Sect. 6.1, \"sampled tasks\" means trying whether the learned sub-policies also work for a previously unseen task. Here again the question: What happens if you freeze phi? How well do phi updated on the new tasks work on the original ones? Related question: Why is there no plot on the combination task (Fig. 7) and full training on Four Rooms?\n\n- Sect. 6.4 \"series of tasks\" is a bit unclear\n\n- Sect. 6.4: Why is the ratio of warm-up and training so different compared to the 2D bandits? 
How much influence does this parameter have on the performance of the approach?\n\n\n\n" ]
[ 6, 4, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 3, 4, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_SyX0IeWAW", "iclr_2018_SyX0IeWAW", "iclr_2018_SyX0IeWAW", "iclr_2018_SyX0IeWAW", "HkdQtbZfM", "iclr_2018_SyX0IeWAW", "Hy28ObpbM", "ByaWalTbf", "S1YphBy-f", "BJN4gTtlM", "HyJuHTteM", "r1RR1Vclf", "r13qQpKxM", "rkCZR_3xz", "iclr_2018_SyX0IeWAW", "iclr_2018_SyX0IeWAW" ]
iclr_2018_B1EA-M-0Z
Deep Neural Networks as Gaussian Processes
It has long been known that a single-layer fully-connected neural network with an i.i.d. prior over its parameters is equivalent to a Gaussian process (GP), in the limit of infinite network width. This correspondence enables exact Bayesian inference for infinite width neural networks on regression tasks by means of evaluating the corresponding GP. Recently, kernel functions which mimic multi-layer random neural networks have been developed, but only outside of a Bayesian framework. As such, previous work has not identified that these kernels can be used as covariance functions for GPs and allow fully Bayesian prediction with a deep neural network. In this work, we derive the exact equivalence between infinitely wide, deep, networks and GPs with a particular covariance function. We further develop a computationally efficient pipeline to compute this covariance function. We then use the resulting GP to perform Bayesian inference for deep neural networks on MNIST and CIFAR-10. We observe that the trained neural network accuracy approaches that of the corresponding GP with increasing layer width, and that the GP uncertainty is strongly correlated with trained network prediction error. We further find that test performance increases as finite-width trained networks are made wider and more similar to a GP, and that the GP-based predictions typically outperform those of finite-width networks. Finally we connect the prior distribution over weights and variances in our GP formulation to the recent development of signal propagation in random neural networks.
accepted-poster-papers
This paper presents several theoretical results linking deep, wide neural networks to GPs. It even includes illuminating experiments. Many of the results were already developed in earlier works. However, many at ICLR may be unaware of these links, and we hope this paper will contribute to the discussion.
train
[ "S1_Zyk9xG", "BJGq_QclM", "SJDCe5jeM", "rkKCfwTmf", "H1BQLITXf", "Hy8PZu_GG", "rye7Z_ufG", "Hkfpl_uff", "Syyjgu_zf", "Hy-veOdfG", "ryNSpTbGG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "public" ]
[ "Neal (1994) showed that a one hidden layer Bayesian neural network, under certain conditions, converges to a Gaussian process as the number of hidden units approaches infinity. Neal (1994) and Williams (1997) derive the resulting kernel functions for such Gaussian processes when the neural networks have certain transfer functions.\n\nSimilarly, the authors show an analogous result for deep neural networks with multiple hidden layers and an infinite number of hidden units per layer, and show the form of the resulting kernel functions. For certain transfer functions, the authors perform a numerical integration to compute the resulting kernels. They perform experiments on MNIST and CIFAR-10, doing classification by scaled regression. \n\nOverall, the work is an interesting read, and a nice follow-up to Neal’s earlier observations about 1 hidden layer neural networks. It combines several insights into a nice narrative about infinite Bayesian deep networks. However, the practical utility, significance, and novelty of this work -- in its current form -- are questionable, and the related work sections, analysis, and experiments should be significantly extended. \n\n\nIn detail:\n\n(1) This paper misses some obvious connections and references, such as \n* Krauth et. al (2017): “Exploring the capabilities and limitations of Gaussian process models” for recursive kernels with GPs.\n* Hazzan & Jakkola (2015): “Steps Toward Deep Kernel Methods from Infinite Neural Networks” for GPs corresponding to NNs with more than one hidden layer.\n* The growing body of work on deep kernel learning, which “combines the inductive biases and representation learning abilities of deep neural networks with the non-parametric flexibility of Gaussian processes”. E.g.: (i) “Deep Kernel Learning” (AISTATS 2016); (ii) “Stochastic Variational Deep Kernel Learning” (NIPS 2016); (iii) “Learning Scalable Deep Kernels with Recurrent Structure” (JMLR 2017). \n\nThese works should be discussed in the text.\n\n(2) Moreover, as the authors rightly point out, covariance functions of the form used in (4) have already been proposed. It seems the novelty here is mainly the empirical exploration (will return to this later), and numerical integration for various activation functions. That is perfectly fine -- and this work is still valuable. However, the statement “recently, kernel functions for multi-layer random neural networks have been developed, but only outside of a Bayesian framework” is incorrect. For example, Hazzan & Jakkola (2015) in “Steps Toward Deep Kernel Methods from Infinite Neural Networks” consider GP constructions with more than one hidden layer. Thus the novelty of this aspect of the paper is overstated. \n\nSee also comment [*] later on the presentation. In any case, the derivation for computing the covariance function (4) of a multi-layer network is a very simple reapplication of the procedure in Neal (1994). What is less trivial is estimating (4) for various activations, and that seems to the major methodological contribution. \n\nAlso note that multidimensional CLT here is glossed over. It’s actually really unclear whether the final limit will converge to a multidimensional Gaussian with that kernel without stronger conditions. This derivation should be treated more thoroughly and carefully.\n\n(3) Most importantly, in this derivation, we see that the kernels lose the interesting representations that come from depth in deep neural networks. 
Indeed, Neal himself says that in the multi-output settings, all the outputs become uncorrelated. Multi-layer representations are mostly interesting because each layer shares hidden basis functions. Here, the sharing is essentially meaningless, because the variance of the weights in this derivation shrinks to zero. \nIn Neal’s case, the method was explored for single output regression, where the fact that we lose this sharing of basis functions may not be so restrictive. However, these assumptions are very constraining for multi-output classification and also interesting multi-output regressions.\n\n[*]: Generally, in reading the abstract and introduction, we get the impression that this work somehow allows us to use really deep and infinitely wide neural networks as Gaussian processes, and even without the pain of training these networks. “Deep neural networks without training deep networks”. This is not an accurate portrayal. The very title “Deep neural networks as Gaussian processes” is misleading, since it’s not really the deep neural networks that we know and love. In fact, you lose valuable structure when you take these limits, and what you get is very different than a standard deep neural network. In this sense, the presentation should be re-worked.\n\n(4) Moreover, neural networks are mostly interesting because they learn the representation. To do something similar with GPs, we would need to learn the kernel. But here, essentially no kernel learning is happening. The kernel is fixed. \n\n(5) Given the above considerations, there is great importance in understanding the practical utility of the proposed approach through a detailed empirical evaluation. In other words, how structured is this prior and does it really give us some of the interesting properties of deep neural networks, or is it mostly a cute mathematical trick? \n\nUnfortunately, the empirical evaluation is very preliminary, and provides no reassurance that this approach will have any practical relevance:\n(i) Directly performing regression on classification problems is very heuristic and unnecessary.\n(ii) Given the loss of dependence between neurons in this approach, it makes sense to first explore this method on single output regression, where we will likely get the best idea of its useful properties and advantages. \n(iii) The results on CIFAR10 are very poor. We don’t need to see SOTA performance to get some useful insights in comparing for example parametric vs non-parametric, but 40% more error than SOTA makes it very hard to say whether any of the observed patterns hold weight for more competitive architectural choices. \n\nA few more minor comments:\n(i) How are you training a GP exactly on 50k training points? Even storing a 50k x 50k matrix requires about 20GB of RAM. Even with the best hardware, computing the marginal likelihood dozens of times to learn hyperparameters would be near impossible. What are the runtimes?\n(ii) \"One benefit in using the GP is due to its Bayesian nature, so that predictions have uncertainty estimates (Equation (9)).” The main benefit of the GP is not the uncertainty in the predictions, but the marginal likelihood which is useful for kernel learning.", "This paper leverages how deep Bayesian NNs, in the limit of infinite width, are Gaussian processes (GPs). 
After characterizing the kernel function, this allows us to use the GP framework for prediction, model selection, uncertainty estimation, etc.\n\n\n- Pros of this work\n\nThe paper provides a specific method to efficiently compute the covariance matrix of the equivalent GP and shows experimentally on CIFAR and MNIST the benefits of using the this GP as opposed to a finite-width non-Bayesian NN.\n\nThe provided phase analysis and its relation to the depth of the network is also very interesting.\n\nBoth are useful contributions as long as deep wide Bayesian NNs are concerned. A different question is whether that regime is actually useful.\n\n\n- Cons of this work\n\nAlthough this work introduces a new GP covariance function inspired by deep wide NNs, I am unconvinced of the usefulness of this regime for the cases in which deep learning is useful. \n\nFor instance, looking at the experiments, we can see that on MNIST-50k (the one with most data, and therefore, the one that best informs about the \"true\" underlying NN structure) the inferred depth is 1 for the GP and 2 for the NN, i.e., not deep. Similarly for CIFAR, where only up to depth 3 is used. None of these results beat state-of-the-art deep NNs.\n\nAlso, the results about the phase structure show how increased depth makes the parameter regime in which these networks work more and more constrained. \n\nIn [1], it is argued that kernel machines with fixed kernels do not learn a hierarchical representation. And such representation is generally regarded as essential for the success of deep learning. \n\nMy impression is that the present line of work will not be relevant for deep learning and will not beat state-of-the-art results because of the lack of a structured prior. In that sense, to me this work is more of a negative result informing that to be successful, deep Bayesian NNs should not be wide and should have more structure to avoid reaching the GP regime.\n\n\n- Other comments:\n\nIn Fig. 5, use a consistent naming for the axes (bias and variances).\n\nIn Fig. 1, I didn't find the meaning of the acronym NN with no specified width.\n\nDoes the unit norm normalization used to construct the covariance disallow ARD input selection?\n\n\n[1] Yoshua Bengio, Olivier Delalleau, and Nicolas Le Roux. The Curse of Dimensionality for Local Kernel Machines. 2005.", "This paper presents a new covariance function for Gaussian processes (GPs) that is equivalent to a Bayesian deep neural network with a Gaussian prior on the weights and an infinite width. As a result, exact Bayesian inference with a deep neural network can be solved with the standard GP machinery.\n\n\nPros:\n\nThe result highlights an interesting relationship between deep nets and Gaussian processes. (Although I am unsure about how much of the kernel design had already appeared outside of the GP literature.)\n\nThe paper is clear and very well written.\n\nThe analysis of the phases in the hyperparameter space is interesting and insightful. 
On the other hand, one of the great assets of GPs is the powerful way to tune their hyperparameters via maximisation of the marginal likelihood but the authors have left this for future work!\n\n\nCons:\n\nAlthough the computational complexity of computing the covariance matrix is given, no actual computational times are reported in the article.\n\nI suggest using the same axis limits for all subplots in Figure 3.", "Below is a summary of the most salient revisions: \n\n—updated discussion of relevant work suggested by reviewers\n—more discussion of implementation details (e.g. computation time)\n—figures with updated axes ranges/labels or captions\n—citation to a parallel ICLR 2018 submission, “Gaussian Process Behavior in Wide Deep Neural Networks.”\n—an added appendix describing an alternative derivation of the GP correspondence via marginalization over intermediate layers", "(We have been in email correspondence with the commenters of the reproducibility assessment; the following details our correspondence, provided for clarification.)\n\nThank you for the interest in our work and putting careful effort to reproduce our results! To clarify, we haven’t made our research code public yet, but, as denoted in the submission, are working towards making it open sourced.\n\nTo address some specific concerns in the report:\n\n-- “Notably, the LUT only gave good accuracy on MNIST with training data size larger than 2000.”\n\nWe are surprised to hear that. Looking into the github codebase, we’ve noticed two possible causes. \n\n(1). First, we’ve noticed that GP regression was done using the actual inverse of K_DD matrix. In practice taking the inverse for solving linear systems equations is known to be numerically unstable, especially when the condition number is large. Especially in Table 2, the smaller dataset’s best performing depth was quite high (100) and deeper kernels become more degenerate requiring careful numerics. Results of pure random accuracy (0.1) is one outcome of unstable numerics. In practice, we used Cholesky decomposition to solve linear systems equations, which is faster and numerically more stable. Also relatedly adding \\sigma_epsilon to K_DD helps with numerical stability. We thank you for noticing that we do not report the value we use in our experiments. We have been using 1e-10 as our default value and kept multiplying it by 10 when Cholesky decomposition failed due to numerical instability.\n\nIn general, we recommend using Cholesky decomposition utilizing positive semi-definiteness of the covariance matrix, or linear systems solver / pseudo-inverse to make regression stable. \n\n(2). Second, it appears that numerical values for \\sigma_w and \\sigma_b used in the code seem to be \\sigma_w^2, \\sigma_b^2 (variance instead of std. deviation) in our paper. If this is the case, the poor performance for deeper kernels is understandable. The phase diagram for deep networks show that the (\\sigma_w^2, \\sigma_b^2) pair is quite sensitive in obtaining good performance. Numerical values for variance to standard deviation will be quite different, and we are worried that this might have caused not obtaining as good a performance as ours. \n\nWe thank you for pointing out information that wasn’t readily provided in the paper. We’ll incorporate your suggestions to make the paper more easily reproducible. \n\nIn regards to u_max, the value you’ve chosen (18) should to be good. 
In our experiments we’ve used 10 but after submission to ensure numerical accuracy for all range of variance (0, s_max =100), we preferred using larger u_max of either 50 or 100. \n\nAlso, regarding c_j = 1, in the lookup table we restrict ourselves to | c_j |<0.99999. The footnote part refers to the diagonal elements (variance), where we separately performed 1d Gaussian integral over \\phi(z)^2 to construct one dimensional grid and interpolate with it. \n\nThank you again for your careful assessment! We are grateful that you chose our paper and made valuable suggestions to make our paper easier to reproduce. Also, we encourage you to see if the numerically stable linear algebra solver and correcting values for (\\simga_w^2, \\sigma_b^2) would bring your GP results close to what we obtain and more competitive to the neural network results. \n", "We thank the reviewer for their time and constructive feedback on the submission. \n\n-- references\n\nWe thank the reviewer for suggesting related works. In the revised version, we will add Krauth et al. (2017) as well as additional comparisons with the deep kernel learning literature.\n\n-- novelty with regards to Hazan & Jaakola (2015)\n\nWe do not believe that the work in H-J significantly detracts from the novelty of our paper.\n\nH-J is also interested in constructing kernels equivalent to infinitely wide deep neural networks. Theorem 1 in H-J is a good stepping stone for our construction. However the H-J construction does not go beyond two hidden layers with nonlinearities. They state:\n“We present our framework with only two intermediate layers ... It can be extended to any depth but the higher layers may not use nonlinearities.\" H-J\n\nWe believe that the fact that H-J approached the same problem, and only derived a GP kernel for up to two layers, despite making use of the same random kernel literature we do, is illustrative of the non-obvious nature of the equivalence between infinitely wide networks of arbitrary depth and GPs.\n\nWe will expand our existing discussion of H-J in the text, and state that previous work has proposed GP kernels for networks with up to two hidden layers.\n\n--“In any case, the derivation for computing the covariance function (4) of a multi-layer network is a very simple reapplication of the procedure in Neal (1994).”\n\nWe agree that the derivation is simple. We believe that this, combined with the fact that it has gone unpublished for more than two decades, increases rather than detracts from its significance.\n\n --“Also note that multidimensional CLT here is glossed over. It’s actually really unclear whether the final limit will converge to a multidimensional Gaussian with that kernel without stronger conditions. This derivation should be treated more thoroughly and carefully.” \n\nThank you for sharing your concerns. We would very much like to address them. Could you be more specific about the ways in which you are concerned the CLT may fail in this case? If we take the infinite-width limit layer-by-layer, the application of the CLT seems appropriate without additional subtlety.\n\n-- “In fact, you lose valuable structure when you take these limits, and what you get is very different than a standard deep neural network. In this sense, the presentation should be re-worked.”\n\nWe agree that the qualitative behavior of infinitely wide neural networks may be different than that of narrow networks. 
We will update the text to more clearly discuss this.\n\nWe note though that finite width network performance often increases with increasing network width, as they become closer to the GP limit. For example, see [1], [2]. In fact in our Figure 1, we found that the performance of finite width networks increases, and more closely resembles that of the NNGP, as the network is made wider.\n\nTo more thoroughly address this concern and support this observation, we performed an additional experiment where we trained 5 layer fully connected networks with Tanh and ReLU nonlinearities on CIFAR10, with random optimization and initialization hyperparameters. We then filtered for training runs which achieved 100% classification accuracy on the training set, resulting in 125 Tanh and 55 ReLU networks. We then examined the performance of these networks vs. network width. We found that the best performing networks are in fact the widest. See the following figures, where each point shows the width and corresponding generalization gap of a single trained 5 layer network, with 100% training accuracy:\nhttps://www.dropbox.com/s/np4myfzy1a3ts46/relu_depth_5_gap_to_width_cifar10.pdf\nhttps://www.dropbox.com/s/f1cd73hvpesm8n2/tanh_depth_5_gap_to_width_cifar10.pdf\n\n-- “Moreover, neural networks are mostly interesting because they learn the representation. To do something similar with GPs, we would need to learn the kernel. But here, essentially no kernel learning is happening. The kernel is fixed.”\n\nWe agree that the learned representations are one important aspect of deep networks, and we agree that no explicit representation learning happens in our GP approach.\n\nHowever, we emphasize that in many situations deep networks are chosen not for their interpretable representations, but rather because of the high accuracy of their predictions. We believe that work that reproduces the predictions made by deep networks using an alternative procedure is useful even if it does not also reproduce the internal representations of deep networks.\n\nWays to sample deep representations from the corresponding NNGP would be a fascinating avenue for future research.\n\n[1] Neyshabur B, Tomioka R, Srebro N. In search of the real inductive bias: On the role of implicit regularization in deep learning. arXiv:1412.6614. 2014.\n\n[2] Zagoruyko S, Komodakis N. Wide residual networks. arXiv:1605.07146. 2016.", "With regards to the comments on empirical results:\n\n-- “(i) … regression on classification problems is very heuristic and unnecessary.”\n\nWe do make clear that these experiments using regression for classification are less principled, in the main text. However, we’d like to note that least-squares classification is widely used and effective [3]. Moreover, it allows us to compare exact inference via a GP to prediction by a trained neural network on well-studied tasks (e.g. MNIST and CIFAR-10).\n\n-- “3) Most importantly, in this derivation, we see that the kernels lose the interesting representations that come from depth in deep neural networks. Indeed, Neal himself says that in the multi-output settings, all the outputs become uncorrelated. Multi-layer representations are mostly interesting because each layer shares hidden basis functions. Here, the sharing is essentially meaningless, because the variance of the weights in this derivation shrinks to zero. \nIn Neal’s case, the method was explored for single output regression, where the fact that we lose this sharing of basis functions may not be so restrictive. 
However, these assumptions are very constraining for multi-output classification and also interesting multi-output regressions.”\n\n“(ii) Given the loss of dependence between neurons in this approach, it makes sense to first explore this method on single output regression, where we will likely get the best idea of its useful properties and advantages. ”\n\nThis is an excellent point, which applies to almost all GP work. Based on your recommendation, we are looking into single-output regression tasks.\n\nHowever, we would like to emphasize that despite the NNGP being unable to explicitly capture dependencies between classes it still could outperform neural networks on multi-class regression. We believe this provides stronger, rather than weaker, evidence for the utility of the NNGP formulation.\n\n-- “(iii) The results on CIFAR10 are very poor.”\n\nFirst we would like to emphasize that the purpose of experiments was to show that the NNGP is the limiting behaviour for a specified neural network architecture. Because the GP equivalence was derived for vanilla fully-connected networks, all experiments were performed using that architecture. Achieving state-of-the-art on CIFAR-10 typically involves using a convolutional architecture, as well as data augmentation, batch-norm, residual connection, dropout, etc. \n\nRestricting to vanilla multi-layer fully-connected networks with ReLU activation, the performance quoted in [4] is actually slightly lower than our GP results (53-55% accuracy, Figure 4 (a) of [4]). So our baseline and results are not poor for the class of models we examine. Our experiments show that, for the given class of neural network architecture, as width increases the behaviour more closely resembles that of the NNGP, which is competitive or better than that of the given neural network class. \n\nWe note that introducing linear bottleneck layer structure in [4] seem to achieve SOTA in permutation invariant (without convolutional layers) CIFAR-10 which is higher than ours. It is an interesting question how this type of model relates to the GP limit but it is outside the scope of this work.\n\nRegards to the other comments:\n\n (i) Exact GP computation in the large data regime can be costly. We used a machine with 150 GB of RAM (with some inefficiencies in memory usage, e.g. stemming from use of float64, and TensorFlow retaining intermediate state in memory), and 64 CPU cores, to run the full MNIST/CIFAR-10 experiments. We utilized parallel linear algebra computations available through TensorFlow to speed up computations. For a typical run, constructing the kernel per layer took 90-140 seconds, and solving the linear equations (via Cholesky decomposition) took 180-220 seconds for 1000 test points. \n\n (ii) We agree with the reviewer that one strength of Bayesian methods is providing marginal likelihood and using that for model selection. Although we propose this possibility for future work in the text, greater emphasis could have been made. With that said, we believe that providing uncertainty estimates is another important benefit of a Bayesian approach, that we explore experimentally in our text, and that the GP perspective on neural networks is beneficial in this regard as well.\n\n\n[3] Ryan Rifkin and Aldebaro Klautau. In defense of one-vs-all classification. Journal of machine learning research, 5(Jan):101–141, 2004.\nRyan Rifkin, Gene Yeo, Tomaso Poggio, et al. Regularized least-squares classification. 
Nato Science Series Sub Series III Computer and Systems Sciences, 190:131–154, 2003.\n\n[4] Zhouhan Lin, Roland Memisevic, Kishore Konda, How far can we go without convolution: Improving fully-connected networks, arXiv 1511.02580.\n", "We thank the reviewer for their time and constructive feedback on the submission. \n\n-- Usefulness of the regime.\n\nAs noted, the best performing depth for the NNGP for full datasets was shallow (depth 1 in MNIST and depth 3 in CIFAR-10). A few points about this:\n i) For these datasets, the best performing neural network is also shallow (depth 2 in MNIST and depth 3 or 2 in CIFAR-10). As our NNGP construction is the limiting behaviour of wide neural networks, we believe that the GP performing best with a shallow depth is consistent with this equivalence. \n ii) For the small data-regime, we note that there were benefits from increased depth, both for the NN and the NNGP. \n iii) We also note that when the dataset became more complex (MNIST to CIFAR-10) the GP and NN both benefited from additional depth.\n iv) All experiments in our paper were performed in the fully connected case, where the evidence for the benefits of hierarchy+depth is weaker than for convolutional networks.\n v) Lastly, although the best performing depth are shallow, the deep NNGPs perform competitively with the shallow ones. For example, with RELU the depth 10 NNGP for MNIST-50k has test accuracy of 0.987, and for CIFAR-45k with RELU has test accuracy 0.5573. The best performing accuracy for those cases was 0.9875 and 0.5566 respectively. (Note that for CIFAR depth 10 test accuracy is actually *higher* than depth-3, this is due to model selection based on the validation set rather than the test set.) The performance loss from depth in NNs is much larger, possibly due to harder optimization. \n\n-- “In [Bengio, Delalleau, and Le Roux], it is argued that kernel machines with fixed kernels do not learn a hierarchical representation. And such representation is generally regarded as essential for the success of deep learning. \n\nMy impression is that the present line of work will not be relevant for deep learning and will not beat state-of-the-art results because of the lack of a structured prior. In that sense, to me this work is more of a negative result informing that to be successful, deep Bayesian NNs should not be wide and should have more structure to avoid reaching the GP regime.”\n\nFirst, we would like to note that finite width network performance often increases monotonically with increasing network width, as the networks become closer to a GP limit. For example, see [1], [2]. In fact in our Figure 1, we found that the performance of finite width networks increases, and more closely resembles that of the NNGP, as the network is made wider.\n\nTo more thoroughly address this concern and support this observation, we performed an additional experiment where we trained 5 layer fully connected networks with tanh and ReLU nonlinearities on CIFAR10, with random optimization and initialization hyperparameters. We then filtered for training runs which achieved 100% classification accuracy on the training set, resulting in 125 Tanh and 55 ReLU networks. We then examined the performance of these networks vs. network width. We found that the best performing networks are in fact the widest. 
See the following figures, where each point shows the width and corresponding generalization gap of a single trained 5 layer network, with 100% training accuracy:\nhttps://www.dropbox.com/s/np4myfzy1a3ts46/relu_depth_5_gap_to_width_cifar10.pdf\nhttps://www.dropbox.com/s/f1cd73hvpesm8n2/tanh_depth_5_gap_to_width_cifar10.pdf\n\nSecond, we would like to address the concerns about kernel methods which the reviewer cites from Bengio, Delalleau, and Le Roux (BDL). The analysis in BDL assumes a local kernel (e.g. an RBF kernel). The NNGP kernel is non-local and heavy tailed, as can be seen in the Figure showing its angular structure in Appendix B of our paper. Specifically, Equation 10 in BDL, which demands that the kernel approaches a constant with increasing distance between points, does not hold for the NNGP kernel: As discussed in our paper, inputs are scaled to have constant norm -- i.e. all inputs live on the surface of a hypersphere. There is no angular separation between points on the hypersphere after which the NNGP kernel goes to a constant (again see Appendix B figure).\n\nFinally, we are not sure if we have fully understood your concern about the lack of a structured prior. If the above responses do not address your concern, could you be more specific about what structure is required in a prior of functions which a GP is unable to capture?\n\n[1] Neyshabur B, Tomioka R, Srebro N. In search of the real inductive bias: On the role of implicit regularization in deep learning. arXiv:1412.6614. 2014.\n\n[2] Zagoruyko S, Komodakis N. Wide residual networks. arXiv:1605.07146. 2016.", "-- Fixed Kernel machines vs representation learning of deep neural networks\n\nWhile the functional form of our GP kernel is fixed, and no kernel learning is happening in the sense of Deep Kernel Learning [3], we do learn hyper-parameters (induced by neural network architecture) for kernels in our experiments by grid search. Using GP marginal likelihood, one could learn hyper-parameters for the equivalent neural network by end-to-end gradient descent as well. \n\nAlthough our NNGP does not admit explicit hierarchical representation learning, we note that our experiments showing that an NNGP can perform better than its finite width counterpart suggest interesting scientific question on the role of learned representations. Exploring ways to sample intermediate representations from the posterior implied by the NNGP would be a fascinating direction for future work.\n\nRegards to the other comments:\n\n-- In Fig. 5, use a consistent naming for the axes (bias and variances).\n\nThank you for noticing this. We will update the figures in the revised version.\n\n-- In Fig. 1, I didn't find the meaning of the acronym NN with no specified width.\n\nWe will include the description in the revised version. The acronym NN in the figure denotes the best performing (on the validation set) neural network across all width and trials. Often this is the neural network with the largest width. \n\n-- “Does the unit norm normalization used to construct the covariance disallow ARD input selection?”\n\nThank you for bringing up the point about ARD. With extra computational and memory cost, unit normalization for inputs could be avoided by separately tiling the variance of each input when constructing the lookup table in Section 2.5. Also note, input pre-processing in general can change ARD scores, and scaling inputs to have a constant norm is not an uncommon form of pre-processing.\n\n\nThank you again for your careful review! 
We believe we have effectively addressed your primary concern about the relevance of the wide network limit, and we hope you will consider raising your score as a result.\n\n[3] Andrew Gordon Wilson, Zhiting Hu, Ruslan Salakhutdinov, Eric P. Xing, Deep Kernel Learning. AISTATS 2016.", "Thank you for your time and constructive suggestions on the submission. \n\n-- “Although the computational complexity of computing the covariance matrix is given, no actual computational times are reported in the article.”\n\nWe are grateful for the suggestion. In the revised version we will add computation time for full MNIST for reference. As one datapoint, when constructing the 50k x 50k covariance matrix, the amortized computations for each layer take 90-140s (depending on CPU generation and network depth), running on 64 CPUs. \n\n-- “I suggest using the same axis limits for all subplots in Figure 3.”\n\nWe will update the figures accordingly in the revised version.\n", "Availability of Requisite Information: The appendix of this paper does a good job outlining most of the necessary information for reproducing the results. The computation of the Gaussian process is thorough and well described, with the exception of some information regarding the lookup table. The values n_g, n_v, and n_c were all provided, however a value for u_max was not provided, nor was it explained how it was determined. With regards to testing, the authors provide the neural network and Gaussian process configurations used to obtain their best results, as well as their random search ranges for various hyper-parameters. Furthermore, the data sets on which they trained and tested their algorithms are publicly available and readily accessible.\n\nComputational Resources Required: The Gaussian process algorithms were run on a laptop with 8 GB of RAM, and a dual core 2.9GHz processor. The kernel matrix for the full MNIST training set is over 20 GB. Furthermore, this matrix needs to be inverted for prediction. Due to memory requirements, the results for 20,000 and 50,000 training points could not be computed. With matrix multiplication and parallel computing, the lookup table, which theoretically is O((n_g)^2 n_v n_c), took just under an hour to calculate for the values n_g = 500, n_v = 501, and n_c = 500. The accuracies for the Gaussian processes in Table 2 were all computed (with the exception of 20,000, and 50,000 sizes) overnight. Finally we estimated that reproducing Figure 7, MNIST=5,000, d=50, tanh would have taken us approximately 60 hours.\n\nPrior Knowledge Required: The paper does a good job giving an overview of the theoretical results at the beginning of the paper. They outline how to construct the kernel, as well as how to use the kernel to do prediction. However, the value for u_max perhaps required a deeper theoretical understanding to determine. Further, the lookup table algorithm is not well-defined. For values of c = ±1 and s = 0, the lookup table is defined with the inverse of a singular matrix. The case of c = 1 is addressed, however the solution to these singularities is not explicitly given.\n\nResults: The Gaussian process calculations were reproducible, although the accuracies we obtained did not perfectly match those provided in the paper. Notably, the LUT only gave good accuracy on MNIST with training data size larger than 2000. The replicated baseline accuracies overall were quite close to those stated in the paper. They typically were slightly lower, likely due to the differences in hyperparameter optimization. 
The authors of the paper had access to Google’s Vizier technology, which is not universally available. Despite this, using the same model depths and widths, values within ~5% were obtainable.\n\nConclusions: The time for reproducing Table 2 was reasonable, though we would not have been able to reproduce the optimization of the GP hyper-parameters given our computational resources. We were also unable to reproduce the deep-signal propagation heat maps given our resources and time constraints. There is nothing to suggest these results would not have been reproducible, however, given additional resources. The numerical kernel implementation is arguably the central contribution of this paper. Given the information provided in the paper, we could not reproduce the results from this numerical algorithm. It would have helped to have the original source code to guide us, especially with respect to handling the singular cases of the lookup table. The source code does not seem to be available online. We have contacted the authors to ask whether it was or could be made available, but we had not received an answer by the time of writing. In conclusion, this paper was fairly reproducible, however requires a high level of computational power and theoretical knowledge. Specifying u_max and numerically stable solutions to their lookup table would have aided in the paper’s reproducibility.\n\nOur full report can be found at: https://github.com/niklasbrake/COMP-551-Project-4/blob/master/Final%20Report.pdf" ]
[ 4, 6, 7, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 3, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_B1EA-M-0Z", "iclr_2018_B1EA-M-0Z", "iclr_2018_B1EA-M-0Z", "iclr_2018_B1EA-M-0Z", "ryNSpTbGG", "S1_Zyk9xG", "S1_Zyk9xG", "BJGq_QclM", "BJGq_QclM", "SJDCe5jeM", "iclr_2018_B1EA-M-0Z" ]
iclr_2018_SyqShMZRb
Syntax-Directed Variational Autoencoder for Structured Data
Deep generative models have been enjoying success in modeling continuous data. However it remains challenging to capture the representations for discrete structures with formal grammars and semantics, e.g., computer programs and molecular structures. How to generate both syntactically and semantically correct data still remains largely an open problem. Inspired by the theory of compiler where syntax and semantics check is done via syntax-directed translation (SDT), we propose a novel syntax-directed variational autoencoder (SD-VAE) by introducing stochastic lazy attributes. This approach converts the offline SDT check into on-the-fly generated guidance for constraining the decoder. Comparing to the state-of-the-art methods, our approach enforces constraints on the output space so that the output will be not only syntactically valid, but also semantically reasonable. We evaluate the proposed model with applications in programming language and molecules, including reconstruction and program/molecule optimization. The results demonstrate the effectiveness in incorporating syntactic and semantic constraints in discrete generative models, which is significantly better than current state-of-the-art approaches.
accepted-poster-papers
This paper presents a more complex version of the grammar-VAE, which can be used to generate structured discrete objects for which a grammar is known, by adding a second 'attribute grammar', inspired by Knuth. Overall, the idea is a bit incremental, but the space is wide open and I think that structured encoder/decoders is an important direction. The experiments seem to have been done carefully (with some help from the reviewers) and the results are convincing.
train
[ "SJy4ZMU4G", "rJ7ZTaYxf", "ByHD_eqxf", "SkUs6e5lG", "BJVJ4_6XG", "H1KTirvZf", "ryFoAfUWz", "S1EYCMIbz", "B1396zI-f", "HJMcxIJWf", "BkQglHagz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author" ]
[ "The presentation of the paper has definately improved, but I find the language used in the paper still below the quality needed for publication. There are still way too many grammatical and syntactical errors. ", "The paper presents an approach for improving variational autoencoders for structured data that provide an output that is both syntactically valid and semantically reasonable. The idea presented seems to have merit , however, I found the presentation lacking. Many sentences are poorly written making the paper hard to read, especially when not familiar with the presented methods. The experimental section could be organized better. I didn't like that two types of experiment are now presented in parallel. Finally, the paper stops abruptly without any final discussion and/or conclusion. ", "Let me first note that I am not very familiar with the literature on program generation, \nmolecule design or compiler theory, which this paper draws heavily from, so my review is an educated guess. \n\nThis paper proposes to include additional constraints into a VAE which generates discrete sequences, \nnamely constraints enforcing both semantic and syntactic validity. \nThis is an extension to the Grammar VAE of Kusner et. al, which includes syntactic constraints but not semantic ones.\nThese semantic constraints are formalized in the form of an attribute grammar, which is provided in addition to the context-free grammar.\nThe authors evaluate their methods on two tasks, program generation and molecule generation. \n\nTheir method makes use of additional prior knowledge of semantics, which seems task-specific and limits the generality of their model. \nThey report that their method outperforms the Character VAE (CVAE) and Grammar VAE (GVAE) of Kusner et. al. \nHowever, it isn't clear whether the comparison is appropriate: the authors report in the appendix that they use the kekulised version of the Zinc dataset of Kusner et. al, whereas Kusner et. al do not make any mention of this. \nThe baselines they compare against for CVAE and GVAE in Table 1 are taken directly from Kusner et. al though. \nCan the authors clarify whether the different methods they compare in Table 1 are all run on the same dataset format?\n\nTypos:\n- Page 5: \"while in sampling procedure\" -> \"while in the sampling procedure\"\n- Page 6: \"a deep convolution neural networks\" -> \"a deep convolutional neural network\"\n- Page 6: \"KL-divergence that proposed in\" -> \"KL-divergence that was proposed in\" \n- Page 6: \"since in training time\" -> \"since at training time\"\n- Page 6: \"can effectively computed\" -> \"can effectively be computed\"\n- Page 7: \"reset for training\" -> \"rest for training\" ", "NOTE: \n\nWould the authors kindly respond to the comment below regarding Kekulisation of the Zinc dataset? Fair comparison of the data is a serious concern. I have listed this review as a good for publication due to the novelty of ideas presented, but the accusation of misrepresentation below is a serious one and I would like to know the author's response.\n\n*Overview*\n\nThis paper presents a method of generating both syntactically and semantically valid data from a variational autoencoder model using ideas inspired by compiler semantic checking. 
Instead of verifying the semantic correctness offline of a particular discrete structure, the authors propose “stochastic lazy attributes”, which amounts to loading semantic constraints into a CFG and using a tailored latent-space decoder algorithm that guarantees both syntactic semantic valid. Using Bayesian Optimization, search over this space can yield decodings with targeted properties.\n\nMany of the ideas presented are novel. The results presented are state-of-the art. As noted in the paper, the generation of syntactically and semantically valid data is still an open problem. This paper presents an interesting and valuable solution, and as such constitutes a large advance in this nascent area of machine learning.\n\n*Remarks on methodology*\n\nBy initializing a decoding by “guessing” a value, the decoder will focus on high-probability starting regions of the space of possible structures. It is not clear to me immediately how this will affect the output distribution. Since this process on average begins at high-probability region and makes further decoding decisions from that starting point, the output distribution may be biased since it is the output of cuts through high-probability regions of the possible outputs space. Does this sacrifice exploration for exploitation in some quantifiable way? Some exploration of this issue or commentary would be valuable. \n\n*Nitpicks*\n\nI found the notion of stochastic predetermination somewhat opaque, and section 3 in general introduces much terminology, like lazy linking, that was new to me coming from a machine learning background. In my opinion, this section could benefit from a little more expansion and conceptual definition.\n\nThe first 3 sections of the paper are very clearly written, but the remainder has many typos and grammatical errors (often word omission). The draft could use a few more passes before publication.\n", "In addition to our revision 1, in which we extensively revised all experiments involving ZINC dataset, we have made an updated revision 2 which mostly addresses the writing and presentation issues. Besides the refinement of wording and typos, this version includes the following modification:\n\n1) We added Figure 2, where we explicitly show how the modern compiler works through the example of two-stage check (i.e., CFG parsing and Attribute Grammar check). Section 2 is now augmented with more detailed explanations of background knowledge.\n\n2) We added Figure 3, which shows the proposed syntax-directed decoder step by step through an example. Through the examples we put more effort in explaining key concepts in our method, such as ‘inherited constraints’ and ‘lazy linking’. \n\n3) Experiment section is revised with more details included. \n\n4) We added a conclusion section as suggested by the reviewer. \n", "To avoid further possible misunderstandings we have update our paper, in which we have extensively revised all experiments involving ZINC dataset. This addresses concerns on use of ZINC data and comparison with previous methods. \n\nThe conclusion in each experiment **remains the same** though some differences are observed. 
Examples of differences are as following: Our reconstruction performance is boosted (76.2% vs 72.8%); And since we didn’t address semantics specific to aromaticity by the paper submission deadline, the valid prior fraction drops to 43.5%, but it is still much higher than baselines (7.2% GVAE, 0.7% CVAE).\n\nPlease find the updated paper for more details.", "We thank you for providing reviews.\n\nWe’ll refine the paper to include more introduction about background, and more detailed explanations about our method. \n\nWe’ll include final discussion/conclusion section. \n", "Thanks for your effort in providing this detailed and useful review! \n\nWe present our clarification in the following:\n\n>> Use of data and comparison with baselines:\n\nWe would first note that the anonymous accusation was set to “17 Nov 2017 (modified: 28 Nov 2017), readers: ICLR 2018 Conference Reviewers and Higher”. That’s why it was not visible to us until Nov 28, i.e., the original review release date. This gives us no chance to clarify anything before the review deadline. We have replied to it actively since Nov 28. \n**Note the thread is invisible to us again since Dec 2. **\n\n1) We have experimented both kekulization and non-kekulization for baselines, and have reported the best they can get in all experiments. For example, in Table 2 the GVAE baseline results are improved compared to what was reported in GVAE paper.\n\n2) The anonymous commenter is using different kekulization (RDKIT, rather than our used Marvin), different baseline implementation (custom implementation, rather than the public one in GVAE’s paper) and possibly different evaluation code (since there is no corresponding evaluation online). For a reproducible comparision, we released our implementation, data, pretrained model and evaluation code at: https://github.com/anonymous-author-80ee48b2f87/cvae-baseline\n\n3) To make further clarification, we ran our method on the vanilla (non-kekulised) data. Our performance is actually boosted (76.2% vs 72.8% reported in the paper).\nThe details of results from these experiments above can be seen in our public reply titled “We released baseline CVAE code, data and evaluation code for clarification” and “Our reconstruction performance without kekulization on Zinc dataset”. \n\nIn either setting still, our method outperforms all baselines on reconstruction. We are sorry that this may have led to some confusions. To avoid further possible misunderstandings, we have extensively rerun all experiments involving ZINC dataset. Though differences are observed, the conclusion in each experiment remains the same. For example, our reconstruction performance is boosted (76.2% vs 72.8%). Since we didn’t address aromaticity semantics by the paper submission deadline, the valid prior fraction drops to 43.5%, but it is still much higher than baselines (7.2% GVAE, 0.7% CVAE). Please find the updated paper for more details. \n\n>> prior knowledge and limitations \n\nWe are targeting on domains where strict syntax and semantics are required. For example, the syntax and semantics are needed to compile a program, or to parse a molecule structure. So such prior knowledge comes naturally with the application. Our contribution is to incorporate such existing syntax and semantics in those compilers, into an on-the-fly generation process of structures. \n\nIn general, when numerous amount of data is available, a general seq2seq model would be enough. 
However, obtaining the useful drug molecules is expensive, and thus data is quite limited. Using knowledges like syntax (e.g., in GVAE paper), or semantics (like in our paper) will greatly reduce the amount of data needed to obtain a good model.\n\nIn our paper, we only addressed 2-3 semantic constraints, where the improvement is significant. Similarly, in “Harnessing Deep Neural Networks with Logic Rules (Hu et.al, ACL 16)”, incorporating several intuitive rules can greatly improve the performance of sentiment analysis, NER, etc. So we believe that, incorporating the knowledge with powerful deep learning achieves a good trade-off between human efforts and model performance. \n\n>> Typos and other writing issue:\n\nWe thank you very much for your careful reading and pointing out the typos and writing issues in our manuscript! We have incorporated your suggested changes in the current revision, and are keeping conducting further detailed proofreading to fix as much as possible the writing issues in the future revisions.", "Thanks for your effort in providing this detailed and constructive review! \nWe present our clarification in the following:\n\n>>NOTE:\n\nWe would first note that the anonymous accusation was set to “17 Nov 2017 (modified: 28 Nov 2017), readers: ICLR 2018 Conference Reviewers and Higher”. That’s why it was not visible to us until Nov 28, i.e., the original review release date. This gives us no chance to clarify anything before the review deadline. We have replied to it actively since Nov 28. \n**Note the thread is invisible to us again since Dec 2. **\n\nTo summarize our clarification: \n\n>> Use of data\n\n1) We have experimented both kekulization and non-kekulization for baselines, and have reported the best they can get in all experiments. For example, in Table 2 the GVAE baseline results are improved compared to what was reported in GVAE paper.\n\n2) The anonymous commenter is using different kekulization (RDKIT, rather than our used Marvin), different baseline implementation (custom implementation, rather than the public one in GVAE’s paper) and possibly different evaluation code (since there is no corresponding evaluation online). For a reproducible comparision, we released our implementation, data, pretrained model and evaluation code at: https://github.com/anonymous-author-80ee48b2f87/cvae-baseline\n\n3) To make further clarification, we ran our method on the vanilla (non-kekulised) data. Our performance is actually boosted (76.2% vs 72.8% reported in the paper).\nThe details of results from these experiments above can be seen in our public reply titled “We released baseline CVAE code, data and evaluation code for clarification” and “Our reconstruction performance without kekulization on Zinc dataset”. \n\nIn either setting still, our method outperforms all baselines on reconstruction. We are sorry that this may have led to some confusions. To avoid further possible misunderstandings, we have extensively rerun all experiments involving ZINC dataset. Though differences are observed, the conclusion in each experiment remains the same. For example, our reconstruction performance is boosted (76.2% vs 72.8%). Since we didn’t address aromaticity semantics by the paper submission deadline, the valid prior fraction drops to 43.5%, but it is still much higher than baselines (7.2% GVAE, 0.7% CVAE). Please find the updated paper for more details. 
\n\n>>sacrifice of exploration\n\nCVAE, GVAE and our SD-VAE are all factorizing the joint probability of entire program / SMILES text in some way. CVAE factorizes in char level, GVAE in Context Free Grammar (CFG) tree, while ours factorizes both CFG and non-context free semantics. Since every method is factorizing the entire space, each structure in this space should have the possibility (despite its magnitude) of being sampled. \n\nBias is not always a bad thing. Some bias will help the model quickly concentrate to the correct mode. Definitely, different methods will bias the distribution in a different way. For example, CVAE is biased towards the beginning of the sequence. GVAE is biased by several initial non-terminals. \n\nOur experiments on diversity of generated molecules (table 3) demonstrate that, both GVAE and our method can generate quite diverse molecules. So we think both methods don’t have noticeable mode collapse problem on this dataset.\n\n>> writings:\n\nThanks for the suggestions. We are adding more effort in explaining our algorithm and improve writing in revisions. We have revised our experiments sections for clarifying the most important issue, and will keep improving the writing.\n\nTo briefly answer the “lazy linking”: We don’t sample the actual value of the attribute at the first encounter; Instead, later when the actual content is generated, we use bottom-up calculation to fill the value. For example, when generating ringbond attribute, we only sample its existence. The ringbond information (bond index and bond type) are filled later. \n\nAs a side note, this idea comes from “lazy evaluation” in compiler theory where a value is not calculated until it is needed.\n", "To further clarify the reconstruction accuracy, we here report performance (our model and baselines) without using the kekulization transformation on Zinc dataset, in supplement to numbers using kekulization already reported in our manuscript. We include baseline results from GVAE paper for direct comparison. \n\nSD-VAE (ours): 76.2%; GVAE: 53.7%; CVAE: 44.6%\n\nCompare to what reported for SD-VAE with kekulization in current revision (72.8%), our performance is slightly boosted without kekulization. This shows that kekulization itself doesn’t have positive impact for reconstruction in our method. Our conclusion that the reconstruction accuracy of our SD-VAE is much better than all baselines still holds. \n\nNevertheless, to avoid possible misunderstanding, we’ll refine the experiment section by including more experiments, once the open review system allows. \n", "To address the anonymous commenter’s concerns on the CVAE baseline, the initial release of CVAE’s code (training code based on GVAE’s authors’code), with two versions of kekule data and vanilla data and the reconstruction evaluation script, are available at \n\nhttps://github.com/anonymous-author-80ee48b2f87/cvae-baseline \n\nwhere we also uploaded our trained CVAE, together with pretrained model obtained from GVAE’s authors. \n\nHere we briefly summarize the current results:\n(1) - CVAE, vanilla setting, pretrained model : 44.854%\n(2) - CVAE, vanilla setting, our retraining: 43.218%\n(3) - CVAE, Marvin Suite kekulised **tried for all methods in our paper**: 11.6%\n(4) - CVAE, rdkit kekulised (provided by anonymous commenter, never been tried in our paper): 38.17% \n\nWe reported the best form of SMILES for CVAE in our paper. 
If you believe there’s any issue, please let us know asap and we are happy to investigate.\n\nFinally, we thank all the anonymous comments about the paper. If you have any concerns about the paper, please make the comments public while you specifying readers. Making such comments to reviewers only will not allow us to address the possible misunderstandings, or improve the paper timely when we make possible mistakes. \n" ]
[ -1, 3, 5, 7, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 2, 1, 3, -1, -1, -1, -1, -1, -1, -1 ]
[ "ryFoAfUWz", "iclr_2018_SyqShMZRb", "iclr_2018_SyqShMZRb", "iclr_2018_SyqShMZRb", "iclr_2018_SyqShMZRb", "iclr_2018_SyqShMZRb", "rJ7ZTaYxf", "ByHD_eqxf", "SkUs6e5lG", "iclr_2018_SyqShMZRb", "iclr_2018_SyqShMZRb" ]
iclr_2018_rywDjg-RW
Neural-Guided Deductive Search for Real-Time Program Synthesis from Examples
Synthesizing user-intended programs from a small number of input-output examples is a challenging problem with several important applications like spreadsheet manipulation, data wrangling and code refactoring. Existing synthesis systems either completely rely on deductive logic techniques that are extensively hand-engineered or on purely statistical models that need massive amounts of data, and in general fail to provide real-time synthesis on challenging benchmarks. In this work, we propose Neural Guided Deductive Search (NGDS), a hybrid synthesis technique that combines the best of both symbolic logic techniques and statistical models. Thus, it produces programs that satisfy the provided specifications by construction and generalize well on unseen examples, similar to data-driven systems. Our technique effectively utilizes the deductive search framework to reduce the learning problem of the neural component to a simple supervised learning setup. Further, this allows us to both train on sparingly available real-world data and still leverage powerful recurrent neural network encoders. We demonstrate the effectiveness of our method by evaluating on real-world customer scenarios by synthesizing accurate programs with up to 12× speed-up compared to state-of-the-art systems.
accepted-poster-papers
The pros and cons of this paper cited by the reviewers can be summarized below:
Pros:
* The method proposed here is highly technically sophisticated and appropriate for the problem of program synthesis from examples
* The results are convincing, demonstrating that the proposed method is able to greatly speed up search in an existing synthesis system
Cons:
* The contribution in terms of machine learning or representation learning is minimal (mainly adding an LSTM to an existing system)
* The overall system itself is quite complicated, which might raise the barrier of entry to other researchers who might want to follow the work, limiting impact
In our decision, the fact that the paper significantly moves forward the state of the art in this area outweighs the concerns about lack of machine learning contribution or barrier of entry.
test
[ "SkPNib9ez", "SyFsGdSlM", "S1qCIfJWz", "H12e4JcQz", "B1rMMpYMz", "Bkq9JykMG", "HJTR0CRbG", "rJT-R0RZM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "This paper extends and speeds up PROSE, a programming by example system, by posing the selection of the next production rule in the grammar as a supervised learning problem.\n\nThis paper requires a large amount of background knowledge as it depends on understanding program synthesis as it is done in the programming languages community. Moreover the work mentions a neurally-guided search, but little time is spent on that portion of their contribution. I am not even clear how their system is trained.\n\nThe experimental results do show the programs can be faster but only if the user is willing to suffer a loss in accuracy. It is difficult to conclude overall if the technique helps in synthesis.", "The paper presents a branch-and-bound approach to learn good programs\n(consistent with data, expected to generalise well), where an LSTM is\nused to predict which branches in the search tree should lead to good\nprograms (at the leaves of the search tree). The LSTM learns from\ninputs of program spec + candidate branch (given by a grammar\nproduction rule) and ouputs of quality scores for programms. The issue\nof how greedy to be in this search is addressed.\n\nIn the authors' set up we simply assume we are given a 'ranking\nfunction' h as an input (which we treat as black-box). In practice\nthis will simply be a guess (perhaps a good educated one) on which\nprograms will perform correctly on future data. As the authors\nindicate, a more ambitious paper would consider learning h, rather\nthan assuming it as a given.\n\nThe paper has a number of positive features. It is clearly written\n(without typo or grammatical problems). The empirical evaluation\nagainst PROSE is properly done and shows the presented method working\nas hoped. This was a competent approach to an interesting (real)\nproblem. However, the 'deep learning' aspect of the paper is not\nprominent: an LSTM is used as a plug-in and that is about it. Also,\nalthough the search method chosen was reasonable, the only real\ninnovation here is to use the LSTM to learn a search heuristic.\n\n\nThe authors do not explain what \"without attention\" means.\n\n\nI think the authors should mention the existence of (logic) program\nsynthesis using inductive logic programming. There are also (closely\nrelated) methods developed by the LOPSTR (logic-based program\nsynthesis and transformation) community. Many of the ideas here are\nreminiscent of methods existing in those communities (e.g. top-down search\nwith heuristics). The use of a grammar to define the space of programs\nis similar to the \"DLAB\" formalism developed by researchers at KU\nLeuven.\n\nADDED AFTER REVISIONS/DISCUSSIONS\n\nThe revised paper has a number of improvements which had led me to give it slightly higher rating.\n\n", "This is a strong paper. It focuses on an important problem (speeding up program synthesis), it’s generally very well-written, and it features thorough evaluation. The results are impressive: the proposed system synthesizes programs from a single example that generalize better than prior state-of-the-art, and it does so ~50% faster on average.\n\nIn Appendix C, for over half of the tasks, NGDS is slower than PROSE (by up to a factor of 20, in the worst case). What types of tasks are these? In the results, you highlight a couple of specific cases where NGDS is significantly *faster* than PROSE—I would like to see some analysis of the cases were it is slower, as well. 
I do recognize that in all of these cases, PROSE is already quite fast (less than 1 second, often much less) so these large relative slowdowns likely don’t lead to a noticeable absolute difference in speed. Still, it would be nice to know what is going on here.\n\nOverall, this is a strong paper, and I would advocate for accepting it.\n\n\nA few more specific comments:\n\n\nPage 2, “Neural-Guided Deductive Search” paragraph: use of the word “imbibes” - while technically accurate, this use doesn’t reflect the most common usage of the word (“to drink”). I found it very jarring.\n\nThe paper is very well-written overall, but I found the introduction to be unsatisfyingly vague—it was hard for me to evaluate your “key observations” when I couldn’t quite yet tell what the system you’re proposing actually does. The paragraph about “key observation III” finally reveals some of these details—I would suggest moving this much earlier in the introduction.\n\nPage 4, “Appendix A shows the resulting search DAG” - As this is a figure accompanying a specific illustrative example, it belongs in this section, rather than forcing the reader to hunt for it in the Appendix.\n\n", "Following reviewers' feedback, we have updated the draft (Appendix C) with experiments that employ an ML-based ranking function as against the state-of-the-art ranker of PROSE that involves hand engineering. We observe that NGDS achieves ~2X speed-ups on average while still achieving highly comparable generalization accuracy as compared to PROSE with the ML-based ranker. ", "We have uploaded a new paper revision, as per the reviewers' feedback. Here's a summary of the changes:\n\n- Restructured the introduction, making NGDS details clearer and moving them earlier.\n- Added analysis of some erroneous scenarios in the Evaluation.\n- Expanded related work overview with more symbolic methods such as ILP and LOPSTR.\n- Added details of the training process, including all the hyperparameters.\n- Moved Appendix A into the main text.\n- Replaced the table in Appendix B (earlier C): we found that we selected a wrong model to generate the table in the previous submission. The summary results in Tables 1-2 and their analysis were on the correct best model (so no change was needed in the Evaluation), but the spreadsheet for detailed results in the appendix was not. We apologize for this confusion. The distribution of the speed-ups did not change substantially, although the correct spread is now from 12x to 0.2x.\n\nWe will upload one more revision later this month, in which we'll include experiments we're currently performing with an ML-learned ranking function (as opposed to the state-of-the-art PROSE ranking function, used in the current submission).", "Thank you for the related work suggestions -- we will update this discussion in the next draft. We address your concerns below: \n\n> Q: Limited innovation in terms of deep learning:\n\nRather than being a pure contribution to deep learning, this work applies deep learning to the important field of program synthesis, where statistical approaches are still underexplored. Our main contribution is a hybrid approach to program synthesis that utilizes the best of both neural and symbolic synthesis techniques. 
Combining insights from both worlds in this way achieves a new milestone in program synthesis performance: from a single example it generates programs that generalize better than prior state-of-the-art (including neural RobustFill, symbolic PROSE, and hybrid DeepCoder), the generated program is provably correct, and the generation is 50% faster on average\n\nDeepCoder (Balog et al., ICLR 2017) first explored a hybrid approach last year by first predicting the likelihood of various operators and then using it to guide an external symbolic synthesis engine. Since deep networks are data-hungry, Balog et al. obtain training data by randomly sampling programs from the DSL and generating satisfying random strings as input-output examples. As noted in Section 1 and as evidenced by its inferior performance against our method, the generated programs tend to be unnatural leading to poor generalization. In contrast, NGDS closely integrates neural models at each step of the synthesis and so, it is possible to obtain large amounts of training data while utilizing a relatively small number of real-world examples. \n\n> Q: Learning the ranking function instead of taking it as a given: \n\nWhile related, this problem is orthogonal to our work: a ranking function evaluates whether a given full program generalizes well, whereas we aim to predict the generalization of the best program produced from a given partial search state.\n\nImportantly, the proposed technique, NGDS is independent of the ranking function and can be trivially integrated with any high-quality ranking function. For instance, the manually written ranking function of FlashFill in PROSE that we use is a result of 7 years of engineering and heavy fine-tuning for industrial applications. An even better-quality learned ranking function would only improve the accuracy of predictions, which are already on par with baseline PROSE (68.49% vs 67.12%).\n\nIn fact, a lot of recent prior work focuses on learning a ranking function for program induction, see (Singh & Gulwani, CAV 2015) and (Ellis & Gulwani, IJCAI 2017). For comparison, we are currently performing a set of experiments with an ML-learned ranking function; we'll update with the new results once it's done.\n\n> Q: What does \"without attention\" mean?\n\nAll the models we explore encode input and output examples using (possibly multi-layered, bi-directional) LSTMs with or without an attention mechanism (Bahdanau et al., ICLR 2015). As mentioned in Section 8, the most accurate predictions arise when we attend to the input string while encoding the output string similar to the attention-based models proposed by Devlin et al., 2017. We will make this clearer in the next version of the paper. \n\nSuch an attention mechanism allows the network to learn complex features like \"whether the output is a substring of the input\". Unfortunately, such accuracy comes at a cost of increasing the network evaluation time to quadratic instead of linear. As a result, prediction time at every node of the search tree dominates the search time, and NGDS is slower than PROSE even when its predictions are accurate. Therefore, we only use LSTM models without any attention mechanism in our evaluations. \n", "> Q: Please clarify how the system is trained.\n\n1) We use the industrially collected set of 375 string transformation tasks. Each task is a single input-output examples and 2-10 unseen inputs for evaluating generalization. 
Further, we split the 375 tasks into 65% train, 15% validation, and 20% test ones.\n2) We run PROSE on each of those tasks and collect the (symbol, production, spec input, spec output -> best program score after learning) information on all nodes of the search tree. As mentioned in the introduction, such traces provide a rich description of the synthesis problem thanks to the Markovian nature of deductive search in PROSE and enabling the creation of large datasets required for learning deep models. As a result, we obtain a dataset of ~450,000 search outcomes from mere 375 tasks.\n3) We further split all the search outcomes by the used symbol or its depth in the grammar. In our final evaluation, we present the results for the models trained on the decisions on the `transform` (depth=1), `pp`, `pos` symbols. We have also trained other symbol models as well as a single common model for all symbols/depths, but they didn’t perform as well.\n4) We employ Adam (Kingma and Ba, 2014) to optimize the objective. We use a batch size of 32 and a learning rate of 0.01 and use early stopping to pick the final model. The model architecture and the corresponding loss function (squared error) are discussed in Section 3.1. We will add the specific training details in the next revision of the paper. \n5) As discussed in Section 3.3, the learned models are integrated in the corresponding PROSE controller when the current search tree node matches the model's conditions (i.e. it is on the same respective symbol or depth).\n\n> Q: Is the approach useful for synthesis when there is a loss in program accuracy?\n\nIn fact, NGDS achieves higher average test accuracy than baseline PROSE (68.49% vs. 67.12%), although with slightly lower validation accuracy (63.83% vs. 70.21%) which effectively corresponds to 4 tasks.\n\nHowever, this is not the most important factor: PBE is bound to often fail in synthesizing the _intended_ program from a single input-output example. Even a machine-learned ranking function picks the wrong program 20% of the time (Ellis & Gulwani, IJCAI 2017).\n\nThus, the main goal of this work is speeding up the synthesis process on difficult scenarios without sacrificing the generalization accuracy too much. As a result, we achieve on average 50% faster synthesis time, with 10x speed-ups for many difficult tasks that require multiple seconds while still retaining competitive accuracy. Appendix C shows the breakdown of time and accuracy: out of 120 validation/test tasks, there are:\n- 76 tasks where both systems are correct,\n- 7 tasks where PROSE learns a correct program and NGDS learns a wrong one,\n- 4 tasks where PROSE learns a wrong program and NGDS learns a correct one,\n- 33 tasks where both systems are wrong.", "Thank you for the constructive feedback! We’ll add more details and clarify the introduction in the next revision.\n\nQ: Which factors lead to NGDS being slower than PROSE on some tasks?\nOur method is slower than PROSE when the predictions do not satisfy the requirements of the controller i.e. all the predicted scores are within the threshold or they violate the actual scores in branch and bound exploration. This leads to NGDS evaluating the LSTM for branches that were previously pruned. This can be especially harmful when branches that got pruned out at the very beginning of the search need to be reconsidered -- as it could lead to evaluating the network many times. 
While evaluating the network leads to minor additions in run-time, there are many such additions, and since PROSE performance is already << 1s for such cases, this results in considerable relative slowdown.\n\nWhy do the predictions violate the controller's requirements? This happens when the neural network is either indecisive (its predicted scores for all branches are too close) or wrong (its predicted scores have exactly the opposite order of the actual program scores). \nWe will update the draft with this discussion and present some examples below\n\nSome examples:\nA) \"41.711483001709,-91.4123382568359,41.6076278686523,-91.6373901367188\" ==> \"41.711483001709\"\n\tThe intended program is a simple substring extraction. However, at depth 1, the predicted score of Concat is much higher than the predicted score of Atom, and thus we end up exploring only the Concat branch. The found Concat program is incorrect because it uses absolute position indexes and does not generalize to other similar extraction tasks with different floating-point values in the input strings.\nWe found this scenario relatively common when the output string contains punctuation - the model considers it a strong signal for Concat.\nB) \"type size = 36: Bartok.Analysis.CallGraphNode type size = 32: Bartok.Analysis.CallGraphNode CallGraphNode\" ==> \"36->32\"\n\tWe correctly explore only the Concat branch, but the slowdown happens at the level of the `pos` symbol. There are many different logics to extract the “36” and “32” substrings. NGDS explores RelativePosition branch first, but the score of the resulting program is less then the prediction for RegexPositionRelative. Thus, the B&B controller explores both branches anyway and we end up with a relative slowdown caused by the network inference time." ]
[ 6, 6, 8, -1, -1, -1, -1, -1 ]
[ 3, 4, 3, -1, -1, -1, -1, -1 ]
[ "iclr_2018_rywDjg-RW", "iclr_2018_rywDjg-RW", "iclr_2018_rywDjg-RW", "B1rMMpYMz", "iclr_2018_rywDjg-RW", "SyFsGdSlM", "SkPNib9ez", "S1qCIfJWz" ]
iclr_2018_rJl3yM-Ab
Evidence Aggregation for Answer Re-Ranking in Open-Domain Question Answering
Very recently, it has become popular to answer open-domain questions by first retrieving question-related passages and then applying reading comprehension models to extract answers. Existing works usually extract answers from single passages independently, and thus do not fully make use of the multiple retrieved passages, especially for questions that require several pieces of evidence, which can appear in different passages, to be answered. The above observations raise the problem of evidence aggregation from multiple passages. In this paper, we treat this problem as answer re-ranking. Specifically, based on the answer candidates generated by an existing state-of-the-art QA model, we propose two different re-ranking methods, strength-based and coverage-based re-rankers, which make use of the evidence aggregated from different passages to help entail the ground-truth answer for the question. Our model achieves state-of-the-art results on three public open-domain QA datasets, Quasar-T, SearchQA and the open-domain version of TriviaQA, with about 8\% improvement on the former two datasets.
accepted-poster-papers
The pros and cons of this paper cited by the reviewers can be summarized below:
Pros:
* Solid experimental results against strong baselines on a task of great interest
* Method presented is appropriate for the task
* Paper is presented relatively clearly, especially after revision
Cons:
* The paper is somewhat incremental. The basic idea of aggregating across multiple examples was presented in Kadlec et al. 2016, but the methodology here is different.
train
[ "H1pRH5def", "rJZQa3YgG", "S1OAdY3eG", "SyM0gXa7G", "rJZ1JmaQz", "S1YOCf67M" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The paper is clear, although there are many English mistakes (that should be corrected).\nThe proposed method aggregates answers from multiple passages in the context of QA. The new method is motivated well and departs from prior work. Experiments on three datasets show the proposed method to be notably better than several baselines (although two of the baselines, GA and BiDAF, appear tremendously weak). The analysis of the results is interesting and largely convincing, although a more dedicated error analysis or discussion of the limitation of the proposed approach would be welcome.\n\nMinor point: in the description of Quasar-T, the IR model is described as lucene index. An index is not an IR model. Lucene is an IR system that implements various IR models. The terminology should be corrected here. \n", "The authors propose an approach where they aggregate, for each candidate answer, text from supporting passages. They make use of two ranking components. A strength-based re-ranker captures how often a candidate answer would be selected while a coverage-based re-ranker aims to estimate the coverage of the question by the supporting passages. Potential answers are extracted using a machine comprehension model. A bi-LSTM model is used to estimate the coverage of the question. A weighted combination of the outputs of both components generates the final ranking (using softmax). \nThis article is really well written and clearly describes the proposed scheme. Their experiments clearly indicate that the combination of the two re-ranking components outperforms raw machine comprehension approaches. The paper also provides an interesting analysis of various design issues. Finally they situate the contribution with respect to some related work pertaining to open domain QA. This paper seems to me like an interesting and significant contribution.\n", "Traditional open-domain QA systems typically have two steps: passage retrieval and aggregating answers extracted from the retrieved passages. This paper essentially follows the same paradigm, but leverages the state-of-the-art reading comprehension models for answer extraction, and develops the neural network models for the aggregating component. Although the idea seems incremental, the experimental results do seem solid. The paper is generally easy to follow, but in several places the presentation can be further improved.\n\nDetailed comments/questions:\n 1. In Sec. 2.2, the justification for adding H^{aq} and \\bar{H}^{aq} is to downweigh the impact of stop word matching. I feel this is a somewhat indirect and less effective design, if avoiding stop words is really the reason. A standard preprocessing step may be better.\n 2. In Sec. 2.3, it seems that the final score is just the sum of three individual normalized scores. It's not truly a \"weighted\" combination, where the weights are typically assumed to be tuned.\n 3. Figure 3: Connecting the dots in the two subfigures on the right does not make sense. Bar charts should be used instead.\n 4. The end of Sec. 4.2: I feel it's a bad example, as the passage does not really support the answer. The fact that \"Sesame Street\" got picked is probably just because it's more famous.\n 5. It'd be interesting to see how traditional IR answer aggregation methods perform, such as simple classifiers or heuristics by word matching (or weighted by TFIDF) and counting. This will demonstrates the true advantages of leveraging modern NN models.\n\nPros:\n 1. 
Updating a traditional open-domain QA approach with neural models\n 2. Experiments demonstrate solid positive results\n\nCons:\n 1. The idea seems incremental\n 2. Presentation could be improved\n", "Thank you for your feedback and thorough review. We have revised the paper to address the issues you raised and fixed the presentation issues.\n\nABOUT THE NOVELTY: \n\nAlthough traditional QA systems also have the answer re-ranking component, this paper focuses on a novel problem of ``text evidence aggregation'': Here the problem is essentially modeling the relationship between the question and multiple passages (i.e., text evidence), where different passages could enhance or complement each other. For example, the proposed neural re-ranker models the complementary scenario, i.e., whether the union of different passages could cover different facts in a question, thus the attention-based model is a natural fit.\n\nIn contrast, previous answer re-ranking research did not address the above problem: (1) traditional QA systems like (Ferrucci et al., 2010) used similar passage retrieval approach with answer candidates added to the queries. However they usually consider each passage individually for extracting features of answers, whereas we utilize the information of union/co-occurrence of multiple passages by composing them with neural networks. (2) KB-QA systems (Bast and Haussmann, 2015; Yih et al., 2015; Xu et al., 2016) sometimes use text evidence to help answer re-ranking, where the features are also extracted on the pair of a question and a single-passage but ignored the union information among multiple passages.\n\nWe have added the above discussion to our paper (Page 11).\n\nRESPONSE TO THE DETAILED QUESTIONS:\n\nQ1: In Sec. 2.2, the justification for adding H^{aq} and \\bar{H}^{aq} is to downweigh the impact of stop word matching. I feel this is a somewhat indirect and less effective design, if avoiding stop words is really the reason. A standard preprocessing step may be better.\n \nA1: We follow the model design in (Wang and Jiang 2017). The reason for adding H^{aq} and \\bar{H}^{aq} is not only to downweigh the stop word matching, but also to take into consideration the semantic information at each position. Therefore, the sentence-level matching model (Eq. (5) in the next paragraph) could potentially learn to distinguish the effects of the element-wise comparison vectors with the original lexical information. We’ve clarified this on Page 5.\n \nQ2: In Sec. 2.3, it seems that the final score is just the sum of three individual normalized scores. It's not truly a \"weighted\" combination, where the weights are typically assumed to be tuned.\n\nA2: We did tune the assigned weights for the three types of normalized scores on the dev set. The tuned version gives some improvement on dev and results in slightly better test scores, compared to simply summing up the three scores.\n \nQ3: Figure 3: Connecting the dots in the two subfigures on the right does not make sense. Bar charts should be used instead.\n \nA3: We have changed the subfigures to bar charts in the updated version.\n \nQ4: The end of Sec. 4.2: I feel it's a bad example, as the passage does not really support the answer. The fact that \"Sesame Street\" got picked is probably just because it's more famous.\n \nA4: We agree that the passages in Table 6 do not provide full evidence to the question (unlike the example in Figure 1b where the passages fully support all the facts in the question). 
However, the “Sesame Street” got picked not because it is more famous, but because it has supporting evidence in the form of the \"award-winning\" and \"children's TV show\" facts, while the candidate \"Great Dane\" only covers \"1969\".\n\nWe selected this example in order to show another common case of realistic problems in Open-Domain QA, where the question is complex and the top-K retrieved passages cannot provide full evidence. In this case, our model is able to select the candidate with evidence covering more facts in the question (i.e. the candidate that is more likely to be approximately correct).\n\n \nQ5: It'd be interesting to see how traditional IR answer aggregation methods perform, such as simple classifiers or heuristics by word matching (or weighted by TFIDF) and counting. This will demonstrate the true advantages of leveraging modern NN models.\n \nA5: Thank you for the valuable advice! We’ve added a baseline method with BM25 value to rerank the answers based on the aggregated passages, together with the analysis about it in the current version. In summary, the BM25 model improved the F1 scores but sometimes caused a decrease in the EM scores. This is mainly for two reasons: (1) BM25 relies on bag-of-word representation, so context information is not taken into consideration. Also it does not model the phrase similarities. (2) shorter answers are preferred by BM25. For example when answer candidate A is a subsequence of B, then according to our way of collecting pseudo passages, the pseudo passage of A is always a superset of the pseudo passage of B. Therefore F1 scores are often improved while EM declines.\n", "Thank you for your kind review. We have improved the presentation and added new discussions which we hope will further improve. ", "Thank you for your valuable comments! We corrected the grammar and spelling issues and revised the Lucene description on Page 6.\n\nWe provided additional discussion in the conclusion section. Our analysis shows that the instances which were incorrectly predicted require complex reasoning and sometimes commonsense knowledge to get right. We believe that further improvement in these areas has the potential to greatly improve performance in these difficult multi-passage reasoning scenarios. \n\nAbout baselines:\nThe two baselines, GA and BiDAF, came from the dataset papers. Besides these two, we also compared with the R^3 baseline. This method is from the recent work (Wang et al, 2017), which improves previous state-of-the-art neural-based open-domain QA system (Chen et al., 2017) on 4 out of 5 public datasets. As a result, we believe that this baseline reflects the state-of-the-art, thus our experimental comparison is reasonable.\n" ]
[ 6, 8, 6, -1, -1, -1 ]
[ 2, 3, 4, -1, -1, -1 ]
[ "iclr_2018_rJl3yM-Ab", "iclr_2018_rJl3yM-Ab", "iclr_2018_rJl3yM-Ab", "S1OAdY3eG", "rJZQa3YgG", "H1pRH5def" ]
iclr_2018_B1ZvaaeAZ
WRPN: Wide Reduced-Precision Networks
For computer vision applications, prior works have shown the efficacy of reducing numeric precision of model parameters (network weights) in deep neural networks. Activation maps, however, occupy a large memory footprint during both the training and inference step when using mini-batches of inputs. One way to reduce this large memory footprint is to reduce the precision of activations. However, past works have shown that reducing the precision of activations hurts model accuracy. We study schemes to train networks from scratch using reduced-precision activations without hurting accuracy. We reduce the precision of activation maps (along with model parameters) and increase the number of filter maps in a layer, and find that this scheme matches or surpasses the accuracy of the baseline full-precision network. As a result, one can significantly improve the execution efficiency (e.g. reduce dynamic memory footprint, memory bandwidth and computational energy) and speed up the training and inference process with appropriate hardware support. We call our scheme WRPN -- wide reduced-precision networks. We report results and show that the WRPN scheme achieves better accuracy on the ILSVRC-12 dataset than previously reported reduced-precision networks, while being computationally less expensive.
accepted-poster-papers
This paper explores the training of CNNs which have reduced-precision activations. By widening layers, it shows less of an accuracy hit on ILSVRC-12 compared to other recent reduced-precision networks. R1 was extremely positive about the paper, impressed by its readability and the quality of comparison to previous approaches (noting that results with 2-bit activations and 4-bit weights matched FP baselines). This seems very significant to me. R1 also pointed out that the technique used the same hyperparameters as the original training scheme, improving reproducibility/accessibility. R1 asked about application to MobileNets, and the authors reported some early results showing that the technique also worked with smaller networks/architectures designed for low-memory hardware. R2 was less positive about the paper, with the main criticism being that the overall technical contribution of the paper was limited. They were also concerned that the paper seemed to be motivated by reducing memory footprint, but the results were focused on reducing computation. R3 liked the simplicity of the idea and the comprehensiveness of the results. Like R2, they thought the paper had limited novelty. In their response to R3, the authors defended the novelty of the paper. I tend to side with the authors that very few papers target quantization at no accuracy loss. Moreover, the paper targets training, which also receives much less attention in the model compression / reduced precision literature. Is the architecture really novel? No. But does the experimental work investigate an important tradeoff? Yes.
train
[ "S1L25x5xz", "rJ6IEcsgM", "HJiml81-z", "HJO6d1Jzz", "H1DasMA-z", "S1dpuBMWM", "BJeu11EbG", "rk7t20mWG", "rkJX68XbM", "H1ayhNMZz", "rkuEBHfWM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "public", "author", "author", "author", "official_reviewer", "author", "author" ]
[ "The paper studies the effect of reduced precision weights and activations on the performance, memory and computation cost of deep networks and proposes a quantization scheme and wide filters to offset the accuracy lost due to the reduced precision. The study is performed on AlexNet, ResNet and Inception on the Imagenet datasets and results show that accuracy matching the full precision baselines can be obtained by widening the filters on the networks. \n\nPositives\n- Using lower precision activations to save memory and compute seems new and widening the filter sizes seems to recover the accuracy lost due to the lower precision.\n\nNegatives\n- While the exhaustive analysis is extremely useful the overall technical contribution of the paper that of widening the networks is fairly small. \n- The paper motivates the need for reduced precision weights from the perspective of saving memory footprint when using large batches. However, the results are more focused on compute cost. Also large batches are used mainly during training where memory is generally not a huge issue. Memory critical situations such as inference on mobile phones can be largely mitigated by using smaller batch sizes. It might help to emphasize the speed-up in compute more in the contributions. ", "This is a well-written paper with good comparisons to a number of earlier approaches. It focuses on an approach to get similar accuracy at lower precision, in addition to cutting down the compute costs. Results with 2-bit activations and 4-bit weights seem to match baseline accuracy across the models listed in the paper.\n\nOriginality\nThis seems to be first paper that consistently matches baseline results below int-8 accuracy, and shows a promising future direction.\n\nSignificance\nGoing down to below 8-bits and potentially all the way down to binary (1-bit weights and activations) is a promising direction for future hardware design. It has the potential to give good results at lower compute and more significantly in providing a lower power option, which is the biggest constraint for higher compute today. \n\nPros:\n- Positive results with low precision (4-bit, 2-bit and even 1-bit)\n- Moving the state of the art in low precision forward\n- Strong potential impact, especially on constrained power environments (but not limited to them)\n- Uses same hyperparameters as original training, making the process of using this much simpler.\n\nCons/Questions\n- They mention not quantizing the first and last layer of every network. How much does that impact the overall compute? \n- Is there a certain width where 1-bit activation and weights would match the accuracy of the baseline model? This could be interesting for low power case, even if the \"effective compute\" is larger than the baseline.\n", "This paper presents an simple and interesting idea to improve the performance for neural nets. The idea is we can reduce the precision for activations and increase the number of filters, and is able to achieve better memory usage (reduced). The paper is aiming to solve a practical problem, and has done some solid research work to validate that. In particular, this paper has also presented a indepth study on AlexNet with very comprehensive results and has validated the usefulness of this approach. \n\nIn addition, in their experiments, they have demonstrated pretty solid experimental results, on AlexNet and even deeper nets such as the state of the art Resnet. The results are convincing to me. 
\n\nOn the other side, the idea of this paper does not seem extremely interesting to me, especially many decisions are quite natural to me, and it looks more like a very empirical practical study. So the novelty is limited.\n\nSo overall given limited novelty but the paper presents useful results, I would recommend borderline leaning towards reject.", "On page 6 our paper says (referring to Equation 2 for k), \"When k = 1, for binary weights we use the Binary Weighted Networks (BWN) approach (Courbariaux et al., 2015) where the binarized weight value is computed based on the sign of input value followed by scaling with the mean of absolute values. For binarized activations we use the formulation in Eq. 2.\"\n\nIn terms of TensorFlow code, this is implemented as: \n m = tf.reduce_mean(tf.abs(x))\n weights = tf.sign(x) * m\n\nThe hard-clipping and quantization you mentioned in the comment is used for other values of k.\n\nIf the text is not clear in the paper, let us know and we will fix it for the final version.\n\n", "In the paper, the authors first hard constrain the values to lie within the range [−1, 1]; then quantizing activation tensor values and constrain the values to lie within the range [0, 1]. So after binarization, there are [-1, 0, 1] for weights and activations together which amounts more than 1-bit. This is different from the papers in the following where both weights and activations are constrained to +1 or 1 and potential XNOR+popcount implementation is promising. \n\n[1] XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks \n[2] Binarized Neural Networks: Training Neural Networks with Weights and Activations Constrained to +1 or 1", "Thank you for the comments and reviews. They are useful to us.\n\nPlease see our response to AnonReviewer3 on the novelty aspect of our paper. Overall we believe ours is a simple technique that works and that is easier for programmers to adopt.\n\nWe will clearly articulate the speed-up in compute for the final version of the paper. Specializing the hardware (e.g. by adding compute components that implement 2-bits, 4-bits, binary, 8bits, etc.) would definitely speed-up inference times. Our ASIC and FPGA evaluations (Section-5.1) are an attempt to highlight this aspect. Current hardware platforms are not optimized for 2-bits and 4-bits. One other aspect of lowering memory footprint is that the working set size of the workload starts to fit on chip and by lowering accesses to DRAM memory, the compute core starts to see better performance and energy savings (DRAM accesses are expensive in latency and energy).\n", "This is a good hardware-software co-design problem. 2-bit operands require simple hardware circuitry whereas int8, fp16 and fp32 require hardware multipliers. So, with wider or narrower models there should be a pareto curve for accuracy vs. model size vs. precision vs. hardware efficiency (performance, area, power) of the compute units at the precision used.", "We have an implementation (which is somewhat, slightly, crappy) of MobileNet that reaches Top-1 accuracy of 64.87% with .46 Giga-FLOPs (compared to 67.4% Top-1 accuracy which the official model reaches: https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md). 
We dont know the reason for this accuracy discrepancy and are debugging our implementation; because of this accuracy discrepancy we did not report WRPN results on MobileNet.\n\nNevertheless, this model with 2-bits weight and 4-bits activation gives Top-1 accuracy of 54.24% (i.e. a loss of 10.6%). With 2-bits weight and 4-bits activation and making the model 2x-wider, we get 64.66% Top-1 accuracy (a loss of 0.21% from our baseline). This also required the first layer to be widened. We think increasing by 2.1x or so should completely recover the accuracy.\nSo, we think WRPN works for smaller networks.\n\nHowever, the more compact the model the harder it is to quantize to very low precision -- i.e. the accuracy loss is big with quantization to ternary or 4-bits precision. One interesting study we found is https://arxiv.org/pdf/1710.01878.pdf which shows a large sparse model is better than a small compact model. In the same theme, we believe a (somewhat) large quantized model is better than a small full-precision model. ", "Thanks for the responses!\n\nOne question that comes up in most compression situations is whether the technique works with smaller networks/architectures e.g. MobileNets (https://research.googleblog.com/2017/06/mobilenets-open-source-models-for.html) that have already been somewhat optimized for mobile like targets. Since MobileNets also trade-off compute vs accuracy with a focus on compute can they still be compressed past 8-bits?\n", "Thank you for the comments. We defend the novelty aspect of this paper in our response below.\n\nNovelty:\nOur paper targets quantization at no accuracy loss. We target network training on reduced precision hardware (H/W) considering a system-wide approach - system and on-chip memory footprint of activations is much more than weights.\nFor cloud-based inference deployments (where large batch-size is typical) and during training, reducing precision of activations speeds up end-to-end runtime much more than reducing the precision of weights (Fig. 1). \nHowever, as our paper shows, reducing activation precision hurts accuracy much more than reducing weight precision. No prior work targets this aspect.\n\nMost prior works on reduced precision DNNs sacrifice accuracy and many prior works target reducing precision of just the weights. We show that there is no tradeoff in reducing precision - even during training - one can get the same accuracy as baseline by making the networks wider (yes, more raw compute operations, but still the compute cost is lower than baseline).\n\n\n1. We believe lowering precision is one aspect (which is widely studied in literature) but it is important to lower precision without any loss in accuracy - no prior work has shown reduced-precision network (4-bits, 2-bits) training and inference without sacrificing accuracy. \nAlso, our results with binary networks are state-of-the art and close the gap significantly between binary and 32b precision (e.g. less than 1.2% for ResNet-34).\n\n\n2. We believe widening networks is a simple technique (which works) that is easy for programmers to experiment with for recovering accuracy with reduced precision. With WRPN: (a) model-size is smaller and, (b) run time and energy for end-to-end inference as well as training is lower than 32b networks.\n\nWith widening, the number of neurons in a layer increase. Yet with reduced precision, we control overfitting and regularization. 
We believe, this aspect has not been studied before.", "Thank you for the comments and review.\n\nEffect on compute of not quantizing first layer and last:\nThe total number of FMA operations in first and last layer is ~3% for ResNet-34 (and 1.5% for ResNet-50). So the effect on overall compute is smaller for these layers if not negligible. In our work, the first layer and last layer's weights and activations are not quantized and neither are these layers' width increased.\n\nFor the first and last layer, we find, we can quantize the weights to 8-bits (at most) without much loss in accuracy compared to keeping them at full-precision (~0.2% additional accuracy loss) while quantizing the other layers to 4bits activations and 2-bits weight. So, in theory we can use integer compute for these layers if not 2-bits and 4-bits precision to speed up compute.\n\nThe primary reason we did not quantize the first and last layer is because - we wanted to fairly compare against prior proposals - the works we compared against in the paper do not quantize these layers.\n\nAt what widening factor does 1-bit come at-par with baseline full-precision?\nOur very preliminary results tell us that this could probably happen at 3.5x-4x widening. \nWe run into experimental evaluation issues when doing these experiments -- making the layers wider blows up the device memory requirements (since we \"emulate\" the binary and other low-precision knobs with FP32 precision in GPUs). We are working on performing these experiments with distributed TensorFlow set-up. The other aspect is to lower the batch-size and still use a single node set-up but we have to change the learning rates then." ]
[ 5, 9, 5, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 3, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_B1ZvaaeAZ", "iclr_2018_B1ZvaaeAZ", "iclr_2018_B1ZvaaeAZ", "H1DasMA-z", "iclr_2018_B1ZvaaeAZ", "S1L25x5xz", "rk7t20mWG", "rkJX68XbM", "rkuEBHfWM", "HJiml81-z", "rJ6IEcsgM" ]
iclr_2018_rkmu5b0a-
MGAN: Training Generative Adversarial Nets with Multiple Generators
We propose in this paper a new approach to train the Generative Adversarial Nets (GANs) with a mixture of generators to overcome the mode collapsing problem. The main intuition is to employ multiple generators, instead of using a single one as in the original GAN. The idea is simple, yet proven to be extremely effective at covering diverse data modes, easily overcoming the mode collapsing problem and delivering state-of-the-art results. A minimax formulation was able to establish among a classifier, a discriminator, and a set of generators in a similar spirit with GAN. Generators create samples that are intended to come from the same distribution as the training data, whilst the discriminator determines whether samples are true data or generated by generators, and the classifier specifies which generator a sample comes from. The distinguishing feature is that internal samples are created from multiple generators, and then one of them will be randomly selected as final output similar to the mechanism of a probabilistic mixture model. We term our method Mixture Generative Adversarial Nets (MGAN). We develop theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon divergence (JSD) between the mixture of generators’ distributions and the empirical data distribution is minimal, whilst the JSD among generators’ distributions is maximal, hence effectively avoiding the mode collapsing problem. By utilizing parameter sharing, our proposed model adds minimal computational cost to the standard GAN, and thus can also efficiently scale to large-scale datasets. We conduct extensive experiments on synthetic 2D data and natural image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior performance of our MGAN in achieving state-of-the-art Inception scores over latest baselines, generating diverse and appealing recognizable objects at different resolutions, and specializing in capturing different types of objects by the generators.
accepted-poster-papers
This paper presents an analysis of using multiple generators in a GAN setup to address the mode-collapse problem. R1 was generally positive about the paper, raising the concern of how to choose the number of generators, and also whether parameter sharing was essential. The authors reported back on parameter sharing, showing its benefits, yet did not have any principled method of selecting the number of generators. R2 was less positive about the paper, pointing out that mixture GANs and multiple generators have been tried before. They also raised concerns about the (flawed) Inception score as the basis for comparison. R2 also pointed out that fixing the mixing proportions to uniform was an unrealistic assumption. The authors responded to these claims, clarifying the differences between this paper and the previous mixture GAN/multiple generator papers, and reporting FID scores. R3 was generally positive, also citing some novelty concerns similar to those of R2. I acknowledge the authors' detailed responses to the reviews (in particular in response to R2) and I believe that the majority of concerns expressed have now been addressed. I also encourage the authors to include the FID scores in the final version of the paper.
train
[ "Sy9Uo3Ygz", "rynmx_XHf", "ByiOfTVNz", "rJAgO6KlM", "Hkib3t2lz", "HkXND5MMM", "BkNAv9fMz", "HkTPt5zMM", "ByShYqMMM", "BJUxcqfzG", "HJGWrcGMG", "rycjHcGMz", "r117lsfGG" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author" ]
[ "The present manuscript attempts to address the problem of mode collapse in GANs using a constrained mixture distribution for the generator, and an auxiliary classifier which predicts the source mixture component, plus a loss term which encourages diversity amongst components.\n\nAll told the proposed method is quite incremental, as mixture GANs/multi-generators have been done before. The Inception scores are good but it's widely known now that Inception scores are a deeply flawed measure, and presenting it as the only quantitative measure in a manuscript which makes strong claims about mode collapse unfortunately will not suffice. If the generator were to generate one template per class for which the Inception network's p(y|x) had low entropy, the Inception score would be quite high even though the model had only memorized one image per class. For claims surrounding mode collapse in particular, evaluation against a parameter count matched baseline using the AIS log likelihood estimation procedure in Wu et al (2017) would be the gold standard. Frechet Inception distance has also been proposed which at least has some favourable properties relative to Inception score.\n\nThe mixing proportions are fixed to the uniform distribution, and therefore this method also makes the unrealistic assumption that modes are equiprobable and require an equal amount of modeling capacity. This seems quite dubious.\n\nFinally, their own qualitative results indicate that they've simply moved the problem, with clear evidence of mode collapse in one of their mixture components in figure 5c, 4th row from the bottom. Indeed, this does nothing to address the problem of mode collapse in general, as there is nothing preventing individual mixture component GANs from collapsing.\n\nUncited prior work includes Generative Adversarial Parallelization of Im et al (2016). Also, if I'm not mistaken this is quite similar to an AC-GAN, where the classes are instead randomly assigned and the generator conditioning is done in a certain way; namely the first layer activations are the sum of K embeddings which are gated by the active mixture component. More discussion of this would be warranted.\n\nOther notes:\n- The introduction contains no discussion of the ill-posedness of the GAN game as it is played in practice.\n- \"As a result, the optimization order in 1 can be reversed\" this does not accurately characterize the source of the issues, see, e.g. Goodfellow (2015) \"On distinguishability criteria...\".\n- Section 3: the second last sentence of the third paragraph is vague and doesn't really say anything. Of course parameter sharing leverages common information. How does this help to train the model effectively?\n- Section 3: Since JSD is defined between two distributions, it is not clear what JSD_pi(P_G1, P_G2, ...) refers to. The last line of the proof of theorem 2 leaps to calling this term a Jensen-Shannon divergence but it's not clear what the steps are; it looks like a regular KL divergence to me.\n- Section 3: Also, is the classifier being trained to maximize this divergence or just the generator? I assume the latter.\n- The proof of Theorem 3 makes unrealistic assumptions that we know the number of components a priori as well as their mixing proportions (pi).\n- \"... which further minimizes the objective value\" -- it minimizes a term that you introduced which is constant with respect to your learnable parameters. 
This is not a selling point, and I'm not sure why you bothered mentioning it.\n- There's no mention of the substitution of log (1 - D(x)) for -log(D(x)) and its effect on the interpretation as a Jensen-Shannon divergence (which I'm not sure was quite right in the first place)\n- Section 4: does the DAE introduced in DFM really introduce that much of a computational burden? \n- \"Symmetric Kullback Liebler divergence\" is not a well-known measure. The standard KL is asymmetric. Please define it.\n- Figure 2 is illegible in grayscale.\n- Improved-GAN score in Table 1 is misleading, as this was their no-label baseline. It's fine to include it but indicate it as such.\n\nUpdate: many of my concerns were adequately addressed, however I still feel that calling this an avenue to \"overcome mode collapse\" is misleading. This seems aimed at improving coverage of the support of the data distribution; test log likelihood bounds via AIS (there are GAN baselines for MNIST in the Wu et al manuscript I mentioned) would have been more compelling quantitative evidence. I've raised my score to a 5.", "We gratefully thank the reviewer for the insightful response!\n\nComment 1: If we view this as graphical model where hidden units and the output G(z) are random variables, then the implementation in the paper can be seen as a multimodal prior. However, hidden units and the output G(z) are learned deterministic functions of z, so each G_k(z) implies a different distribution. Therefore, it is still appropriate to see the implementation as a mixture.\n\nComment 4: We will follow your suggestion and add the experiment result to the paper.", "Thanks for your reply.\n\nComment 1: if we take z standard gaussian. Let (W_j,b_j) be the first Layer that is untied. we have W_jz remains gaussian with mean b_j and covariance W_jW_j^{\\top}. We can think of W_jz as a multimodal prior that feed to the shared generator. so the implementation given in the paper is indeed a multimodal prior (degenerate multivariate gaussians in R^{8192}). It is true that this not standard multimodal prior in low dimension, but since the gaussians in R^{8192} are degenerate, they are still supported on a low dimensional subspace. \n\nComment 4: The experiment on untying the weights and the effect of regularization of having smaller bottleneck is interesting, and maybe worth adding to the paper. \n\n", "Summary:\n\nThe paper proposes a mixture of generators to train GANs. The generators used have tied weights except the first layer that maps the random codes is generator specific, hence no extra computational cost is added.\n\n\nQuality/clarity:\n\nThe paper is well written and easy to follow.\n\nclarity: The appendix states how the weight tying is done , not the main paper, which might confuse the reader, would be better to state this weight tying that keeps the first layer free in the main text.\n\nOriginality:\n\n Using multiple generators for GAN training has been proposed in many previous work that are cited in the paper, the difference in this paper is in weight tying between generators of the mixture, the first layer is kept free for each generator.\n\nGeneral review:\n\n- when only the first layer is free between generators, I think it is not suitable to talk about multiple generators, but rather it is just a multimodal prior on the z, in this case z is a mixture of Gaussians with learned covariances (the weights of the first layer). 
This angle should be stressed in the paper, it is in fine, *one generator* with a multimodal learned prior on z!\n\n- Taking the multimodal z further , can you try adding a mean to be learned, together with the covariances also? see if this also helps? \n \n- in the tied weight case, in the synthetic example, can you show what each \"generator\" of the mixture learn? are they really learning modes of the data? \n\n- the theory is for general untied generators, can you comment on the tied case? I don't think the theory is any more valid, for this case, because again your implementation is one generator with a multimodal z prior. would be good to have some experiments and see how much we loose for example in term of inception scores, between tied and untied weights of generators.\n", "MGAN aims to overcome model collapsing problem by mixture generators. Compare to traditional GAN, there is a classifier added to minimax formulation. In training, MGAN is optimized towards minimizing the Jensen-Shannon Divergence between mixture distributions from generator and data distribution. The author also present that using MGAN to achive state-of-art results.\n\nThe paper is easy to follow.\n\nComment:\n\n1. Seems there still no principle to choose correct number of generators but try different setting. Although most parameters of generators are shared, the result various.\n2. Parameter sharing seems is a trick in MGAN model. Could you provide experiment results w/o parameter sharing.\n\n", "**** Note 1: The introduction contains no discussion of the ill-posedness of the GAN game as it is played in practice.\n\n==== Answer: We do not understand exactly what you meant by ill-posedness. Can please you further clarify this note? \n\n**** Note 2: \"As a result, the optimization order in 1 can be reversed\" this does not accurately characterize the source of the issues, see, e.g. Goodfellow (2015) \"On distinguishability criteria...\".\n\n==== Answer: Here, we simply mentioned the issue discussed in The GAN tutorial (Goodfellow, 2016): “Simultaneous gradient descent does not clearly privilege min max over max min or vice versa. We use it in the hope that it will behave like min max but it often behaves like max min.”\n\n**** Note 3: Section 3: the second last sentence of the third paragraph is vague and doesn't really say anything. Of course parameter sharing leverages common informaNtion. How does this help to train the model effectively?\n\n==== Answer: We discussed in Section 5.2, Model Architectures that our experiment showed that when the parameters are not tied between the classifier and discriminator, the model learns slowly and eventually yields lower performance.\n\n**** Note 4: Section 3: Since JSD is defined between two distributions, it is not clear what JSD_pi(P_G1, P_G2, ...) refers to. The last line of the proof of theorem 2 leaps to calling this term a Jensen-Shannon divergence but it's not clear what the steps are; it looks like a regular KL divergence to me.\n\n==== Answer: The general definition of JSD is:\nJSD_pi(P_1, P_2, …P_n) = H(sum_{i=1..n} (pi_i * P_i)) - sum_{i=1..n}(pi_i * H(P_i)\nWhere H(P) is the Shannon entropy for distribution P. Due to limited space, we showed more details of the derivation of L(G_1:K) in Appendix B.\n\n**** Note 5: Section 3: Also, is the classifier being trained to maximize this divergence or just the generator? I assume the latter.\n\n==== Answer: It is the latter. Based on Eq. 
2, the classifier is trained to minimize its softmax loss, and based on the optimal solution for the classifier, the generators, by minimizing their objective function, will maximize the JSD divergence.\n\n**** Note 6: The proof of Theorem 3 makes unrealistic assumptions that we know the number of components a priori as well as their mixing proportions (pi). - \"... which further minimizes the objective value\" – it minimizes a term that you introduced which is constant with respect to your learnable parameters. This is not a selling point, and I'm not sure why you bothered mentioning it.\n\n==== Answer: Please refer to our answer to comment 3.\n\n**** Note 7: There's no mention of the substitution of log (1 - D(x)) for -log(D(x)) and its effect on the interpretation as a Jensen-Shannon divergence (which I'm not sure was quite right in the first place)\n\n==== Answer: We said in the end of Section 3: “In addition, we adopt the non-saturating heuristic proposed in (Goodfellow et al., 2014) to train G_{1:K} by maximizing log D(G_k (z)) instead of minimizing log D(1 - G_k (z)).”\n\n**** Note 8: Section 4: does the DAE introduced in DFM really introduce that much of a computational burden?\n\n==== Answer: It was stated in Section 5.3, paragraph 2 in (Warde-Farley & Bengio, 2017) that: “we achieve a higher Inception score using denoising feature matching, using denoiser with 10 hidden layers of 2,048 rectified linear units each.” That means the DAE adds more than 40 million parameters.\n\n**** Note 9: “Symmetric Kullback Liebler divergence” is not a well-known measure. The standard KL is asymmetric. Please define it. - Figure 2 is illegible in grayscale.\n\n==== Answer: Symmetric Kullback Liebler is the average of the KL and reverse KL divergence. As per your suggestion, we will define it in the paper. Regarding Figure 2, we tried different shapes for the real and generated data points, but due the small size if figure, they are just clusters of red and blue points. We will try different approaches to make the figure more legible.\n\n**** Note 10: Improved-GAN score in Table 1 is misleading, as this was their no-label baseline. It's fine to include it but indicate it as such.\n\n==== Answer: We will take your advice and make it clear that Improve-GAN score in Table 1 is for the unsupervised version.", "**** Comment 4: Finally, their own qualitative results indicate that they've simply moved the problem, with clear evidence of mode collapse in one of their mixture components in figure 5c, 4th row from the bottom. Indeed, this does nothing to address the problem of mode collapse in general, as there is nothing preventing individual mixture component GANs from collapsing.\n\n==== Answer: If we look carefully at samples shown in previously published papers (such as Figure 4 of the Improved GAN paper that showed samples generated by semi-supervised GAN trained on CIFAR-10 with feature matching), there are often broken samples that look similar.\n\nSolving mode collapse for a single-generator GAN is out of scope of this paper. As discussed in Introduction, we acknowledged the challenges of training a single generator, and therefore we took the multi-generator approach. We did not seek to improve within-generator diversity but instead improve among-generator diversity. The intuition is that GAN can be pretty good for narrow-domain datasets, so if a group of generators learns to partition the data space, and each of them focuses on a region of the data space, then they together can do a good job too. 
Finally, the use of a classifier to enforce divergence among generators makes our method relatively easy to integrate with other single-generator models that achieved improvement regarding the mode collapsing problem.\n\n**** Comment 5: Uncited prior work includes Generative Adversarial Parallelization of Im et al (2016). Also, if I'm not mistaken this is quite similar to an AC-GAN, where the classes are instead randomly assigned and the generator conditioning is done in a certain way; namely the first layer activations are the sum of K embeddings which are gated by the active mixture component. More discussion of this would be warranted.\n\n==== Answer: Generative Adversarial Parallelization (GAP) trains many pairs of GAN, periodically swap the discriminators (generators) randomly, and finally selects the best GAN based on GAM evaluation. When we discussed methods in the multi-generator approach, we focused on mixture GAN and as a result neglected GAP. It is fair to discuss GAP as an approach to reduce the mode collapsing problem.\n\nIn AC-GAN, the label information and the noise are concatenated and then fed into the generator network. In our model, generators have different weights in the first layer, so they are mapped to the first hidden layer differently. MGAN and AC-GAN both add the log-likelihood of the correct class to the objective function, but the motivation is very different. Our idea started by asking how to force generators to generate different data, while AC-GAN's motivation is to leverage the label information from training data. So, the two works are totally independent and happens to share some similarities. Our paper focuses on unsupervised GAN, so we did not discuss semi-supervised methods.", "**** Comment 3: The mixing proportions are fixed to the uniform distribution, and therefore this method also makes the unrealistic assumption that modes are equiprobable and require an equal amount of modeling capacity. This seems quite dubious.\n\n**** Note 6: The proof of Theorem 3 makes unrealistic assumptions that we know the number of components a priori as well as their mixing proportions (pi). - \"... which further minimizes the objective value\" – it minimizes a term that you introduced which is constant with respect to your learnable parameters. This is not a selling point, and I'm not sure why you bothered mentioning it.\n\n==== Answer: Our theorem 3 shows that by means of maximizing the divergence among the generated distributions p_G_k ( ⋅ ) , in an ideal case, our proposed MGAN can recover the true data distribution wherein each p_G_k describes a mixture component in this data distribution. Although this theorem gives more insightful understanding of our MGAN as well as its behavior, it requires a strict setting wherein we need to specify the number of mixtures and the mixing proportions a priori. Stating this theorem, we want to emphasize that maximizing the divergence among the generated distributions p_G_k is an efficient way to encourage the generators to produce diverse data that can occupy multiple modes in the real data. Moreover, since GAN requires training a single generator that can cover multiple data modes, it is much harder to train, and always ends up with missing of data modes. In contrast, our MGAN aims at training each generator to cover one or a few data modes, hence being easier to train, and reducing the missed data modes. 
In addition, due to the fact that each generator can cover some data modes, the number of generators K can be less than the number of data modes as shown in Figure 6 wherein samples generated from 3 or 4 generators can well cover a mixture of 8 Gaussians.\n\nGiven the fact that we are learning from the empirical data distribution, we develop a further theorem to clarify that if we wish to learn the mixing proportion π, the optimal solution is the uniform distribution. The idea is that the optimal generators will learn to partition the empirical data into K disjoint sets of roughly equal size, and each generator approximates a set. In addition, due to the fact that the discrete distribution p_A_k is well-approximated by a continuous generator G_k, the data points in each A_k occupies several groups or clusters. Again, Figure 6 illustrates this point. In Figure 6b, each of the 2 generators (yellow and blue) covers 4 modes. In Figure 6c, one generator (dark green) covers 2 modes and the other two generators (yellow and blue) covers 3 modes. In Figure 6d, each of the four generators (yellow, blue, dark green and dodger blue) cover 2 modes.\n\nFor details of our theorem, please refer to this link: https://app.box.com/s/jjr5kt69uxbr0aikrm0d9cdp2jj95wa0", "**** Comment 2: The Inception scores are good but it's widely known now that Inception scores are a deeply flawed measure, and presenting it as the only quantitative measure in a manuscript which makes strong claims about mode collapse unfortunately will not suffice. If the generator were to generate one template per class for which the Inception network's p(y|x) had low entropy, the Inception score would be quite high even though the model had only memorized one image per class. For claims surrounding mode collapse in particular, evaluation against a parameter count matched baseline using the AIS log likelihood estimation procedure in Wu et al (2017) would be the gold standard. Frechet Inception distance has also been proposed which at least has some favourable properties relative to Inception score.\n\n==== Answer: We chose Inception Score because at the time we set up our experiment, it was the most widely accepted metrics, so it would be easier for us to compare with many baselines. We did acknowledge that any quantitative metric has its weakness and Inception Score is no exception. Therefore, we included a lot of samples in the paper and looked at them from different angle. It can be noticed that our samples, in terms of quality, are far better than those shown in previously published papers. In addition, we looked at samples generated by each of the generators to check whether they trap Inception Score by memorizing a few examples from each class. We saw no sign of trapping as samples generated by each generator were diverse, especially on diverse datasets such as STL-10 or ImageNet. Therefore, we believe that our method achieved higher Inception Score than single-GAN methods not because it trapped the score, but because each of the generators learned to model a different subset of the training data. As a result, our generated samples are more diverse and at the same time more visually appealing. For the mentioned reasons, we strongly believe the use of Inception Score in our experiment to evaluate our proposed method is valid and plausible.\n\nAs per your suggestion, we looked for GAN baselines using the AIS loglikelihood, but we found no GAN baseline. Regarding Frechet Inception (FID) distance, our model got an FID of 26.7 for Cifar-10. 
Some baselines we collected from (Heusel et al., 2017) are 37.7 for the original DCGAN, 36.9 for DCGAN using Two Time-scale Update rule (DCGAN + TTUR), 29.3 for WGAN-GP (Gulrajani, 2017) FID of 29.3, and 24.8 for WGAN-GP using TTUR. It is noteworthy that lower FID is better, and that the base model for MGAN is DCGAN. Therefore, in terms of FID, MGAN (26.7) is 28% better than DCGAN (37.7) and DCGAN using TTUR (36.9) and is 9% better than WGAN-GP (29.3), which uses ResNet architecture. This example further shows evidence that our proposed method helps to address the mode collapsing problem.", "We gratefully thank the reviewer for the detailed and valuable comments and notes. It took us a while to thoughtfully answer all the comments, and the following are our answers. Due to the limited number of characters per comment, we will answer in several posts:\n\n**** Comment 1: All told the proposed method is quite incremental, as mixture GANs/multi-generators have been done before.\n\n==== Answer: As discussed in related work, there are previous attempts following the multi-generators approach, but they are different from our proposed method. Mix+GAN is totally different as it's based on the min-max theorem and set up mixed strategies for both generators and discriminators. AdaGAN train generators sequentially in a manner similar to AdaBoost, thus having some disadvantages as we discussed. MAD-GAN, at a first glance, looks somewhat similar to our proposed method in terms of model design, but there are some key differences. First, it uses a multi-class discriminator, which outputs D_k(x) as the probability that x generated by G_k for k = 1, 2, … K, and D_{K+1}(x) as the probability that x came from the training data. The gradient signal for each generator k comes from the loss function E_{x~p_G_k}[log (1 - D_{k+1}(x)], which is similar to that in a standard GAN. So, it might be vulnerable to the issue discussed in the Improved GAN paper: “Because the discriminator processes each example independently, there is no coordination between its gradients, and thus no mechanism to tell the outputs of the generator to become more dissimilar to each other.” Our proposed method is distinguished in the use of a classifier to enforce JSD divergence among generators. In addition, the use of a separate classifier makes our method easier to integrate with other single-generator GAN models. There is also extension to our method that do not apply to MAD-GAN. We can use the classifier to cluster the train data, and then further train each generator in a different cluster.\n\nIn terms of performance, our method is far superior than Mix+GAN both in terms of Inception Scores and sample quality. The AdaGAN only presents experiment on MNIST. MAD-GAN mostly performed experiment on narrow-domain datasets, and they did not report any quantitative data on diverse datasets and did not release code as well.", "We gratefully thank reviewers for the insightful comments. We have endeavored to address as much as we can, including running additional experiments as suggested, thus it has taken us a while.\n\n**** Comment 1: Seems there still no principle to choose correct number of generators but try different setting. Although most parameters of generators are shared, the result various.\n\n==== Answer: We agree that we don’t have any principle to choose the correct number of generators for our proposed model, as choosing the correct number of clusters for Gaussian mixture model (GMM) and other clustering methods. 
If we wish to specify an appropriate number of generators automatically, we would need to go for a Bayesian nonparametric extension, similarly to going from GMM to Dirichlet Process Mixtures. Within the scope of this work, our motivation is that GAN works pretty well on narrow-domain dataset but poorly on diverse dataset; So, if we can efficiently train many generators while enforcing divergence among them, they can work well too. In general, more generators tend to work better.\n\n**** Comment 2: Parameter sharing seems is a trick in MGAN model. Could you provide experiment results w/o parameter sharing.\n\n==== Answer: We did experiment without parameters sharing among generators and found an interesting behavior. When we trained 4 generators without parameter sharing and each generator has 128 feature maps in the penultimate layer, the model failed to learn. The model even failed to learn when we set beta to 0. When we reduced the number of feature maps in the penultimate layer for each generator to 32, they managed to learn and achieved an Inception Score of 7.42. So, we hypothesize that added benefit of parameter sharing is to help balance the capacity of generators and that of the discriminator/classifier.\n", "We gratefully thank the reviewer for the thoughtful and insightful comments. It took us a while to answer all the reviews as well as to run additional experiments as suggested. Our answers are the following:\n\n**** Comment 1: when only the first layer is free between generators, I think it is not suitable to talk about multiple generators, but rather it is just a multimodal prior on the z, in this case z is a mixture of Gaussians with learned covariances (the weights of the first layer). This angle should be stressed in the paper, it is in fine, *one generator* with a multimodal learned prior on z!\n\n==== Answer: The first hidden layer actually has 4x4x512 = 8,192 dimensions (for Cifar-10). So, untying weights in the first layer effectively maps the noise prior to a different distribution in R^8192 (with a different mean and covariances) for each generator. So, our proposed method is different from a GAN with a multimodal prior.\n\n**** Comment 2: taking the multimodal z further , can you try adding a mean to be learned, together with the covariances also? see if this also helps?\n\n==== Answer: We tried to learn the mean and covariance of the prior for each generator, but the result was not much different from the standard GAN.\n\n**** Comment 3: in the tied weight case, in the synthetic example, can you show what each \"generator\" of the mixture learn? are they really learning modes of the data?\n\n==== Answer: Following your suggestion, we revised figure 6 so that data points generated by different generators have different colors. As you can see, generators learned different modes of the data.\n\n**** Comment 4: the theory is for general untied generators, can you comment on the tied case? I don't think the theory is any more valid, for this case, because again your implementation is one generator with a multimodal z prior. would be good to have some experiments and see how much we loose for example in term of inception scores, between tied and untied weights of generators.\n\n==== Answer: In theory, tying weights will add constraints to the optimization of the objective function for G_{1:K} in Eq. 4. 
For example, if we tie weights in all layers and generators differ only in the mean and variance of the noise prior, the result was similar to the standard GAN like we reported in comment 2. Untying weights in the first layer, however, achieved good results like we discussed in the paper. Finally, as per your request, we conducted experiments without parameter sharing. Surprisingly, when we trained 4 generators without parameter sharing and each generator has 128 feature maps in the penultimate layer, the model failed to learn. The model even failed to learn when we set beta to 0. When we reduced the number of feature maps in the penultimate layer for each generator to 32, they managed to learn and achieved an Inception Score of 7.42. So, we hypothesize that added benefit of our parameter sharing scheme is to balance the capacity of generators and that of the discriminator/classifier.", "A revision has been posted with some minor changes. We added the definition of symmetric Kullback-Leibler in Section 5.1, clarified in Table 1's caption that all models in the table are trained in a unsupervised manner, and changed the Figure 6 so that data generated by each generator have a different color." ]
[ 5, -1, -1, 7, 6, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, -1, -1, 5, 3, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_rkmu5b0a-", "ByiOfTVNz", "rycjHcGMz", "iclr_2018_rkmu5b0a-", "iclr_2018_rkmu5b0a-", "Sy9Uo3Ygz", "Sy9Uo3Ygz", "Sy9Uo3Ygz", "Sy9Uo3Ygz", "Sy9Uo3Ygz", "Hkib3t2lz", "rJAgO6KlM", "iclr_2018_rkmu5b0a-" ]
iclr_2018_rkHVZWZAZ
The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning
In this work we present a new agent architecture, called Reactor, which combines multiple algorithmic and architectural contributions to produce an agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al., 2016) and Categorical DQN (Bellemare et al., 2017), while giving better run-time performance than A3C (Mnih et al., 2016). Our first contribution is a new policy evaluation algorithm called Distributional Retrace, which brings multi-step off-policy updates to the distributional reinforcement learning setting. The same approach can be used to convert several classes of multi-step policy evaluation algorithms designed for expected value evaluation into distributional ones. Next, we introduce the β-leave-one-out policy gradient algorithm which improves the trade-off between variance and bias by using action values as a baseline. Our final algorithmic contribution is a new prioritized replay algorithm for sequences, which exploits the temporal locality of neighboring observations for more efficient replay prioritization. Using the Atari 2600 benchmarks, we show that each of these innovations contribute to both the sample efficiency and final agent performance. Finally, we demonstrate that Reactor reaches state-of-the-art performance after 200 million frames and less than a day of training.
accepted-poster-papers
This paper presents a nice set of results on a new RL algorithm. The main downside is the limitation to the Atari domain, but otherwise the ablation studies are nice and the results are strong.
test
[ "HJH_9xLNG", "SJRs56Ylz", "r1MU1AtlG", "rksMwz9xG", "SyQtte4NM", "B1W_WMpQz", "SJtRa_WMM", "HJgzROWzf", "SJqjT_ZGf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "Thanks to the authors for their response. As I mentioned in the initial review, I think the method is definitely promising and provides improvements. My comments were more on claims like \"Reactor significantly outperforms Rainbow\" which is not evident from the results in the paper (a point also noted by Reviewer 3). These claims could be made more specific, with appropriate caveats, or additional experiments could be performed to help substantiate the claims better. ", "This paper proposes a novel reinforcement learning algorithm (« The Reactor ») based on the combination of several improvements to DQN: a distributional version of Retrace, a policy gradient update rule called beta-LOO aiming at variance reduction, a variant of prioritized experience replay for sequences, and a parallel training architecture. Experiments on Atari games show a significant improvement over prioritized dueling networks in particular, and competitive performance compared to Rainbow, at a fraction of the training time.\n\nThere are definitely several interesting and meaningful contributions in this submission, and I like the motivations behind them. They are not groundbreaking (essentially extending existing techniques) but are still very relevant to current RL research.\n\nUnfortunately I also see it as a step back in terms of comparison to other algorithms. The recent Rainbow paper finally established a long overdue clear benchmark on Atari. We have seen with the « Deep Reinforcement Learning that Matters » paper how important (and difficult) it is to properly compare algorithms on deep RL problems. I assume that this submission was mostly written before Rainbow came out, and that comparisons to Rainbow were hastily added just before the ICLR deadline: this would explain why they are quite limited, but in my opinion it remains a major issue, which is the main reason why I am advocating for rejection.\n\nMore precisely, focusing on the comparison to Rainbow which is the main competitor here, my concerns are the following:\n- There is almost no discussion on the differences between Reactor and Rainbow (actually the paper lacks a « related work » section). In particular Rainbow also uses a version of distributional multi-step, which as far as I can tell may not be as well motivated (from a mathematical point of view) as the one in this submission (since it does not correct for the « off-policyness » of the replay data), but still seems to work well on Atari.\n- Rainbow is not distributed. This was a deliberate choice by its authors to focus on algorithmic comparisons. However, it seems to me that it could benefit from a parallel training scheme like Reactor’s. I believe a comparison between Reactor and Rainbow needs to either have them both parallelized or none of them (especially for a comparison on time efficiency like in Fig. 2)\n- Rainbow uses the traditional feedforward DQN architecture while Reactor uses a recurrent network. It is not clear to which extent this has an impact on the results.\n- Rainbow was stopped at 200M steps, at which point it seems to be overall superior to Reactor at 200M steps. The results as presented here emphasize the superiority of Reactor at 500M steps, but a proper comparison would require Rainbow results at 500M steps as well.\n\nIn addition, although I found most of the paper to be clear enough, some parts were confusing to me, in particular:\n- « multi-step distributional Bellman operator » in 3.2: not clear exactly what the target distribution is. 
If I understand correctly this is the same as the Rainbow extension, but this link is not mentioned.\n- 3.4.1 (network architecture): a simple diagram in the appendix would make it much easier to understand (Table 3 is still hard to read because it is not clear which layers are connected together)\n- 3.3 (prioritized sequence replay): again a visual illustration of the partitioning scheme would in my opinion help clarify the approach\n\nA few minor points to conclude:\n- In eq. 6, 7 and the rest of this section, A does not depend (directly) on theta so it should probably be removed to avoid confusion. Note also that using the letter A may not be best since A is used to denote an action in 3.1.\n- In 3.1: « Let us assume that for the chosen action A we have access to an estimate R(A) of Qπ(A) » => « unbiased estimate »\n- In last equation of p.5 it is not clear what q_i^n is\n- There is a lambda missing on p.6 in the equation showing that alphas are non-negative on average, just before the min\n- In the equation above eq. 12 there is a sum over « i=1 »\n- That same equation ends with some h_z_i that are not defined\n- In Fig. 2 (left) for Reactor we see one worker using large batches and another one using many threads. This is confusing.\n- 3.3 mentions sequences of length 32 but 3.4 says length 33.\n- 3.3 says tree operations are in O(n ln(n)) but it should be O(ln(n))\n- At very end of 3.3 it is not clear what « total variation » is.\n- In 3.4 please specify the frequency at which the learner thread downloads shared parameters and uploads updates\n- Caption of Fig. 3 talks about « changing the number of workers » for the left plot while it is in the right plot\n- The explanation on what the variants of Reactor (ND and 500M) mean comes after results are shown in Fig. 2.\n- Section 4 starts with Fig. 3 without explaining what the task is, how performance is measured, etc. It also claims that Distributional Retrace helps while this is not the case in Fig. 3 (I realize it is explained afterwards, but it is confusing when reading the sentence « We can also see... »). Finally it says priorization is the most important component while the beta-LOO ablation seems to perform just the same.\n- Footnote 3 should say it is 200M observations except for Reactor 500M\n- End of 4.1: « The algorithms that we compare Reactor against are » => missing ACER, A3C and Rainbow\n- There are two references for « Sample efficient actor-critic with experience replay »\n- I do not see the added benefit of the Elo computation. 
It seems to convey essentially the same information as average rank.\n\nAnd a few typos:\n- Just above 2.1.3: « increasing » => increasingly\n- In 3.1: « where V is a baseline that depend » => depends\n- p.7: « hight » => high, and « to all other sequences » => of all other sequences\n- Double parentheses in Bellemare citation at beginning of section 4\n- Several typos in appendix (too many to list)\n\nNote: I did not have time to carefully read Appendix 6.3 (contextual priority tree)\n\nEdit after revision: bumped score from 5 to 7 because (1) authors did many improvements to the paper, and (2) their explanations shed light on some of my concerns", "This paper proposes a novel reinforcement learning algorithm containing several contributions made by the authors: 1) a policy gradient algorithm that uses value function estimates to improve the policy gradient, 2) a distributed multi-step off-policy algorithm to estimate the value function, 3) an experience replay buffer mechanism that can handle sequences and (4) a distributed architecture, where threads are dedicated to either learning or interracting with the environment. Most contributions consist in improvements to handle multi-step trajectories instead of single step transitions. The resulting algorithm is evaluated on the ATARI domain and shown to outperform other similar algorithms, both in terms of score and training time. Ablation studies are also performed to study the interest of the 4 contributions. \n\nI find the paper interesting. It is also well written and reasonably clear. The experiments are large, although I was disappointed that PPO was not included in the evaluation, as this algorithm also trains much faster than other algorithms.\n\nquality\n+ several contributions\n+ impressive experiments\n\nclarity\n- I found the replay buffer not as clear as the other parts of the paper.\n. run time comparison: source of the code for the baseline methods?\n+ ablation study showing the merits of the different contributions\n- Methods not clearly labeled. For example, what is the difference between Reactor and Reactor 500M?\n\noriginality\n+ 4 contributions\n\nsignificance\n+ important problem, very active area of research\n+ comparison to very recent algorithms\n- but no PPO in the evaluation", "This paper presents a new reinforcement learning architecture called Reactor by combining various improvements in\ndeep reinforcement learning algorithms and architectures into a single model. The main contributions of the paper\nare to achieve a better bias-variance trade-off in policy gradient updates, multi-step off-policy updates with\ndistributional RL, and prioritized experience replay for transition sequences. The different modules are integrated\nwell and the empirical results are very promising. The experiments (though limited to Atari) are well carried out and\nthe evaluation is performed on both sample efficiency and training time.\n\nPros:\n1. Nice integration of several recent improvements in deep RL, along with a few novel tricks to improve training.\n2. The empirical results on 57 Atari games are impressive, in terms of final scores as well as real-time training speed.\n\nCons:\n1. Reactor is still less sample-efficient than Rainbow, with significantly lower scores after 200M frames. While the\nreactor trains much faster, it does use more parallel compute, so the comparison with Rainbow on wall clock time is\n not entirely fair. Would a distributed version of Rainbow perform better in this respect?\n2. 
Empirical comparisons are restricted to the Atari domain. The conclusions of the paper will be much stronger if\nresults are also shown on other environments like Mujoco/Vizdoom/Deepmind Lab.\n3. Since the paper introduces a few new ideas like prioritized sequence replay, it would help if a more detailed analysis\n was performed on the impact of these individual schemes, even if in a model simpler than the Reactor. For instance, one could investigate the impact of prioritized sequence replay in models like multi-step DQN or recurrent DQN. This will help us understand the impact of each of these ideas in a more comprehensive fashion.\n\n\n", "Thanks!\n\nI can definitely imagine it was hard to make a proper comparison to Rainbow within such a short timeframe. I still think such a comparison would be quite valuable, to better evaluate the impact of their respective unique components. I'm afraid we are back to a situation where it's not clear what works best -- I guess that's the curse of the Atari benchmark.\n\nI appreciate the many improvements to the paper (though I lack time to look at them thoroughly), in particular the Appendix section on the comparisons with Rainbow. I admit I had read your paper as a DQN extension, while it makes more sense to see it as an A3C extension. I'll change my score to acceptance.\n\nNB: I disagree with the statement that \"In the human-starts evaluation Reactor significantly outperforms Rainbow at 200M steps\". It has slightly higher median normalized score, but lower Elo score. I don't think we can draw a solid conclusion from this (like claiming that \"Reactor generalizes better to these unseen starting states\").\n\nAlso if you can fix this typo in a final version, it looks like you added a \"i=1\" in eq. 12's sum, but forgot its upper bound.", "We have just added a new revision addressing the reviewer comments, which we much appreciate.", "Thank you very much for your review and recognising novelty of our contributions.\n\n>> I found the replay buffer not as clear as the other parts of the paper.\n\nWe will do our best to clarify the description, most likely in the appendix given space limitations.\n\n>> Methods not clearly labeled. For example, what is the difference between Reactor and Reactor 500M?\n\nWe will clarify the labels. `Reactor 500M` denotes the performance of Reactor at 500 million training steps. \n\n>> but no PPO in the evaluation\n\nThe PPO paper did not present results at 200M frames but at 40M frames, and their results seem to be weaker than ACER on 40M frames: ACER was better than PPO on 28/49 games tested. For the purpose of comparison to other algorithms, we chose to evaluate all algorithms at (at least) 200M frames, and Reactor is much better than ACER on 200M frames. Unfortunately, we don’t know how PPO perform at 200M frames, so a direct comparison is impossible.\n", " Thank you very much for your helpful review.\n\n>> There is almost no discussion on the differences between Reactor and Rainbow\n>> I assume that this submission was mostly written before Rainbow came out, and that comparisons to Rainbow were hastily added just before the ICLR deadline\n\nAdmittedly, the comparisons with Rainbow were less detailed than we would have liked. Please note that Rainbow was put on Arxiv only three weeks before the ICLR submission deadline. However we have already included experimental comparisons with Rainbow, both in the form of presenting the learning curves and final evaluations. 
We will add a more in-depth comparison with Rainbow and discussion of related work in the appendix.\n\n>> I believe a comparison between Reactor and Rainbow needs to either have them both parallelized or none of them.\n\nRainbow works on GPUs, Reactor works on CPUs. A single GPU is not equivalent to a single CPU. Parallelizing Rainbow is out of the scope of this work. First, because this was not the focus of our work. Second, because it would be a non-trivial task potentially worth publication on its own. More generally, the same parallelization argument would also apply to comparisons between A3C and DQN.\n\n>> Rainbow uses the traditional feedforward DQN architecture while Reactor uses a recurrent network. It is not clear to which extent this has an impact on the results.\n\n\nThere are many differences between Rainbow and Reactor: 1) LSTM vs frame stacking, 2) actor-critic vs value-based algorithm 3) beta-LOO vs Q-learning, 4) Retrace vs n-step learning, 5) sequence prioritization vs transition prioritization, 6) entropy bonus vs noisy networks. Reactor is not an incremental improvement of Rainbow and is a completely different algorithm. This makes it impractical to compare on a component-by-component basis. For the most important contributions we performed an ablation study within Reactor’s framework, but naturally we can not ablate every architectural choice that we have made.\n\n>> Rainbow was stopped at 200M steps, at which point it seems to be overall superior to Reactor at 200M steps.\n\nThis is not correct. In the human-starts evaluation Reactor significantly outperforms Rainbow at 200M steps. In the no-op-starts evaluation Rainbow significantly outperforms Reactor at 200M steps. Both Reactor and Rainbow were trained with 30 random no-op-starts. Their evaluation with 30 random human starts shows how well each algorithm generalizes to new initial conditions. We would argue that the issues of generalization here are similar to those seen between training and testing error in supervised learning. We thus show that Reactor generalizes better to these unseen starting states.\n\n>> (network architecture): a simple diagram in the appendix would make it much easier to understand\n\nWe will add the diagram to the supplementary material.\n\n>> again a visual illustration of the partitioning scheme would in my opinion help clarify the approach\n\nWe will add an illustration to the supplementary material. We will also correct all other typos mentioned in the review. Thank you for taking note of them.\n", "We were happy to see that the reviewer recognised the novelty both the introduced ideas (prioritization, distributional Retrace and the beta-LOO policy gradient algorithm) and integration of the ideas into a single agent architecture.\n\n>> Reactor is still less sample-efficient than Rainbow, with significantly lower scores after 200M frames\n\nThis is not correct. In the human-starts evaluation Reactor significantly outperforms Rainbow at 200M steps. In the no-op-starts evaluation Rainbow significantly outperforms Reactor at 200M steps. Both Reactor and Rainbow were trained with 30 random no-op-starts. Their evaluation with 30 random human starts shows how well each algorithm generalizes to new initial conditions. We would argue that the issues of generalization here are similar to those seen between training and testing error in supervised learning. 
We thus show that Reactor generalizes better to these unseen starting states.\n\n>> While the Reactor trains much faster, it does use more parallel compute, so the comparison with Rainbow on wall clock time is not entirely fair.\n\nThe reviewer is right in the sense that Reactor executes more floating-point operations per second, but it trains for a much shorter wall time, resulting in an overall similar number of computations executed. We make no claim that Reactor uses fewer computational operations overall to train an agent. Nevertheless, we believe that having a fast algorithm in terms of wall time is important because of the potential to shorten experimentation time. The measure is still informative, as one may choose Reactor over Rainbow when multiple CPU machines are available (as opposed to a single GPU machine).\n\n>> Empirical comparisons are restricted to the Atari domain.\n\nWe focused on the Atari domain to facilitate the comparison to prior work.\n\n>> Since the paper introduces a few new ideas like prioritized sequence replay, it would help if a more detailed analysis was performed on the impact of these individual schemes\n\nThe paper already contains the ablation study comparing the relative importance of individual components. Since the number of novel contributions is large (beta-LOO, distributional retrace, prioritized sequence replay), it is difficult to explore all possible configurations of the components.\n" ]
[ -1, 7, 7, 7, -1, -1, -1, -1, -1 ]
[ -1, 4, 2, 4, -1, -1, -1, -1, -1 ]
[ "SJqjT_ZGf", "iclr_2018_rkHVZWZAZ", "iclr_2018_rkHVZWZAZ", "iclr_2018_rkHVZWZAZ", "HJgzROWzf", "iclr_2018_rkHVZWZAZ", "r1MU1AtlG", "SJRs56Ylz", "rksMwz9xG" ]
iclr_2018_HkUR_y-RZ
SEARNN: Training RNNs with global-local losses
We propose SEARNN, a novel training algorithm for recurrent neural networks (RNNs) inspired by the "learning to search" (L2S) approach to structured prediction. RNNs have been widely successful in structured prediction applications such as machine translation or parsing, and are commonly trained using maximum likelihood estimation (MLE). Unfortunately, this training loss is not always an appropriate surrogate for the test error: by only maximizing the ground truth probability, it fails to exploit the wealth of information offered by structured losses. Further, it introduces discrepancies between training and predicting (such as exposure bias) that may hurt test performance. Instead, SEARNN leverages test-alike search space exploration to introduce global-local losses that are closer to the test error. We first demonstrate improved performance over MLE on two different tasks: OCR and spelling correction. Then, we propose a subsampling strategy to enable SEARNN to scale to large vocabulary sizes. This allows us to validate the benefits of our approach on a machine translation task.
accepted-poster-papers
This paper generally presents a nice idea, and some of the modifications to searn/lols that the authors had to make to work with neural networks are possibly useful to others. Some weaknesses exist in the evaluation that everyone seems to agree on, but there is disagreement about their importance (in particular, comparison to things like BLS and Mixer on problems other than MT). A few side-comments (not really part of meta-review, but included here anyway): - Treating rollin/out as a hyperparameter is not unique to this paper; this was also done by Chang et al., NIPS 2016, "A credit assignment compiler..." - One big question that goes unanswered in this paper is "why does learned rollin (or mixed rollin) not work in the MT setting." If the authors could add anything to explain this, it would be very helpful! - Goldberg & Nivre didn't really introduce the _idea_ of dynamic oracles, they simply gave it that name (e.g., in the original Searn paper, and in most of the imitation learning literature, what G&N call a "dynamic oracle" everyone else just calls an "oracle" or "expert")
train
[ "H1_0NDUEG", "S1KZ3x5ef", "S1rKPVcgz", "SJEBLCSZM", "HyM2dLTmG", "rJPavIa7G", "ry80LL6Qf", "Syk3BIa7M", "HkMLxA5Mf", "ByamFTczz", "SyoNr6cMG", "BktgJQZMz", "B1hvAzWMf", "S1YZ0GbzG", "BJWspGbzz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "While the paper has been improved, my main concern \"lack of comparison against previous work and unclear experiments\" remains. As the authors acknowedge, the experiments I have argued are missing are sensible and they would provide the evidence to support the claims about the suitability of the proposed IL-based method to RNN training. However they are not there, and thus, while the idea is good, I don't believe it is ready for publication and hence I stand by my original rating. Also, I still believe that a paper introducing a new algorithm doesn't help itself by putting the algorithm in the appendix. Also note, that previous work like SEARN and LOLS is explicit about the choices of rollins and rollouts, they are not \"hyper-parameters\".", "This paper extends the concept of global rather than local optimization from the learning to search (L2S) literature to RNNs, specifically in the formation and implementation of SEARNN. Their work takes steps to consider and resolve issues that arise from restricting optimization to only local ground truth choices, which traditionally results in label / transition bias from the teacher forced model.\n\nThe underlying issue (MLE training of RNNs) is well founded and referenced, their introduction and extension to the L2S techniques that may help resolve the issue are promising, and their experiments, both small and large, show the efficacy of their technique.\n\nI am also glad to see the exploration of scaling SEARNN to the IWSLT'14 de-en machine translation dataset. As noted by the authors, it is a dataset that has been tackled by related papers and importantly a well scaled dataset. For SEARNN and related techniques to see widespread adoption, the scaling analysis this paper provides is a fundamental component.\n\nThis reviewer, whilst not having read all of the appendix in detail, also appreciates the additional insights provided by it, such as including losses that were attempted but did not result in appreciable gains.\n\nOverall I believe this is a paper that tackles an important topic area and provides a novel and persuasive potential solution to many of the issues it highlights.\n\n(extremely minor typo: \"One popular possibility from L2S is go the full reduction route down to binary classification\")", "This paper proposes an adaptation of the SEARN algorithm to RNNs for generating text. In order to do so, they discuss various issues on how to scale the approach to large output vocabularies by sampling which actions the algorithm to explore.\n\nPros:\n- Good literature review. But the future work on bandits is already happening:\nPaper accepted at ACL 2017: Bandit Structured Prediction for Neural Sequence-to-Sequence Learning. Julia Kreutzer, Artem Sokolov, Stefan Riezler.\n\n\nCons:\n- The key argument of the paper is that SEARNN is a better IL-inspired algorithm than the previously proposed ones. However there is no direct comparison either theoretical or empirical against them. In the examples on spelling using the dataset of Bahdanau et al. 2017, no comparison is made against their actor-critic method. Furthermore, given its simplicity, I would expect a comparison against scheduled sampling.\n\n- A lot of important experimental details are in the appendices and they differ among experiments. For example, while mixed rollins are used in most experiments, reference rollins are used in MT, which is odd since it is a bad option theoretically. Also, no details are given on how the mixing in the rollouts was tuned. 
Finally, in the NMT comparison, while it is stated that a similar architecture is used in order to compare fairly against previous work, this ultimately turns out not to be the case, as is acknowledged at least in the case of MIXER. I would have expected the same encoder-decoder architecture to have been used for all the methods considered.\n \n- the two losses introduced are not really new. The log-loss is just MLE, only assuming that instead of a fixed expert that always returns the same target, we have a dynamic one. Note that the notion of a dynamic expert is present in the SEARN paper too; Goldberg and Nivre just adapted it to transition-based dependency parsing. Similarly, since the KL loss is the same as XENT, why give it a new name?\n\n- the top-k sampling method is essentially the same as the targeted exploration of Goodman et al. (2016), which the authors cite. Thus it is not a novel contribution.\n \n- Not sure I see the difference between the stochastic nature of SEARNN and the online one of LOLS mentioned in section 7. They both could be mini-batched similarly. Also, not sure I see why SEARNN can be used on any task, in comparison to other methods. They all seem to be equally capable.\n\nMinor comments:\n- Figure 1: what is the difference between \"cost-sensitive loss\" and just \"loss\"?\n- local vs sequence-level losses: the point in Ranzato et al and Wiseman & Rush is that the loss they optimize (BLEU/ROUGE) does not decompose over the predictions of the RNNs.\n- Can't see why SEARNN can help with the vanishing gradient problem. They seem to be rather orthogonal.\n", "The paper proposes a new RNN training method, named SeaRnn, based on the SEARN learning to search (L2S) algorithm. It proposes a way of overcoming the limitation of local optimization through the exploitation of the structured losses by L2S. It can consider different classifiers and loss functions, and a sampling strategy for making the optimization problem scalable is proposed. SeaRnn improves the results obtained by MLE training in three different problems, including a large-vocabulary machine translation task. In summary, a very nice paper.\n\nQuality: SeaRnn is a well-rooted and successful application of the L2S strategy to RNN training that combines global optimization and scalable complexity at the same time. \n\nClarity: The paper is well structured and written, with a nice and well-founded literature review.\n\nOriginality: the paper presents a new algorithm for training RNNs based on the L2S methodology, and it has been shown to be competitive in both toy and real-world problems.\n\nSignificance: although the application of L2S to RNN training is not new, the contribution to overcoming the limitations due to error propagation and MLE training of RNNs is substantial.\n", "Loss name: \"But as you say, no novel losses are introduced, hence no new names are warranted.\"\n\nWe really apologize, but we still don't understand why you are saying that we use a \"new name\" for our cost-sensitive loss. When naming the loss \"Kullback-Leibler divergence (KL)\", we are simply using the standard statistical term without any intention of using a new name for the sake of sounding more novel. We simply prefer the term 'KL' to the term 'XENT' for the reasons given in our previous reply. We also believe that using the term ‘MLE’ instead of ‘logloss’ would be detrimental to the general understanding of the method, as the MLE training mode of RNNs refers to the traditional training mode. 
\n\nWe will be happy to revise our paper if you have an explicit recommendation on that point.\n\nPaper writing recommendation (see general answer).\n\nThanks again for all your feedback.", "Generality of SEARNN: \"But SEARNN is claimed to be widely applicable, thus I expect it to be consistently defined across tasks when compared to previous work\"\n\nYou are also concerned about the lack of generality of SEARNN due to the fact that the best rollin strategies are not always consistent within tasks. We believe this is not a problem as we consider SEARNN (as LOLS and SEARN) to be a meta algorithm, and the choice of rollin and rollout strategies to be hyperparameters of the method (similar to the mixing parameter in SEARN). We hope that this view addresses your concern.\n\nAbout the mixin rollout parameter: \"couldn't have known that this is the case for all experiments.\"\n\nWe have added that more explicitly (page 6, experiments paragraph). Sorry for the confusion. \n\n\"Similar architectures\": see general answer.\n", "Theoretical comparisons: We also agree that having theoretical results such as the one presented in Chang's paper would be a nice addition to our paper. We leave this interesting developments as future work.\n\nExperimental comparison & paper writing: see general answer.\n\nHypothesis: Concerning our hypothesis \"Another possibility is that the underlying optimization problem becomes harder when using a learned rather than a reference roll-in.\", we want to stress again that we don't claim this is what happens but we are making what we think is a plausible conjecture. The optimization problem changes with the task and hence we don't see why our point is not valid even if we don't observe the same behavior for two different tasks.\n\nAlgorithm: see general answer.", "To begin with, we would like to express our gratitude for your very detailed comments to our response, which helped us understand the points in your initial review a lot better.\n\nWe have proceeded to a revision of the paper according to your comments in order to clarify and correct some of our claims. First, we are now more specific on the kind of tasks SEARNN can tackle (see notably page 9 last sentence). Second, we have added a specific comment about the difference in architecture with MIXER, and made explicit that a direct comparison is not meaningful (see footnote 2 page 8). Third, we agree that there is a subtlety about the algorithm that can lead to a misunderstanding, especially in the context of reference roll-in. In order to improve the clarity of this point, we have been more explicit in the main paper (see Links to RNNs paragraph, section 3) and have added a description of how exactly a reference roll-in is achieved with an RNN (section A.3). The main point is that even for the reference roll-in strategy one still need to use the RNN in order to obtain the hidden states that will be used to initialize the roll-outs. The only difference is that the ground truth is passed to the next cell instead of the model's prediction (teacher forcing like). Finally, following your legitimate recommendation concerning the top-k strategy, we have added the citation to Goodman et al. and our statement at the moment we introduce it (Sampling strategies, page 7).\n\nFinally, we agree with you that the two additional experiments that you are requesting, namely running the actor critic method on our spelling dataset and running SEARNN on the MIXER architecture would be valuable additions to the paper. 
We will include them in a future revision as soon as we obtain results, but unfortunately we haven’t yet had time to finish these experiments due to the holiday period and the length of training. We apologize for these setbacks.\nHowever, although they would add to the quality of the paper, we still believe that in its current form the paper already contains enough material to deserve publication.\n\nThank you again for your valuable feedback, which enables us to improve the quality of our paper. Note that we have answered other more specific points below your answers.", "- \"In particular, we are not aware of any other RNN training techniques which uses cost-sensitive losses, besides SeaRNN.\"\n\nI don't think this is the case. REINFORCE as used by Ranzato et al. (2016) for RNN training does exactly that (see eq. 11 in their paper): the difference between the reward achieved and the average expected reward is used to scale the gradient of the loss which is propagated through the network.\n\n- \"However, to our knowledge this is the first algorithm which uses the scores obtained by roll-outs to determine the value of the dynamic expert. This is the aspect of the loss which we consider to be novel.\"\n\nA straightforward way to describe your approach is that the roll outs are used to obtain the costs, but then they are dropped and just the action has the minum action is kept. SEARN does exactly the same if one replaces the cost-sensitive learner with cost-insensitive one. The relation to Goldberg and Nivre (2012) explicitly is that they define a heuristic dynamic oracle for their task (which is very efficient to compute), while you do rollouts (which much slower, but not task specific) like SEARN and LOLS. In any case, the multiclass classification loss itself is not changed in any way, thus no new name is warranted. \n\n- \"The novelty in our loss resides in the application of the KL divergence (or equivalently cross-entropy) in a situation where one has access to a full probabilistic distribution over the tokens in the vocabulary instead of a single target output.\"\n\nIndeed. I don't object to your application of KL-divergence/XENT, I only object to having a new name for it. Giving a new name for a loss suggests a novel loss. But as you say, no novel losses are introduced, hence no new names are warranted.\n\n- \"Finally, the top-k strategy is a simplified version of targeted sampling. Indeed, none of the strategies we test (uniform, topk, policy sampling and biased policy sampling) are novel. We acknowledge this in the main text of the paper and we make no claims about novelty with respect to these strategies.\"\n\nNowhere in the paper the statement \"the top-k strategy is a simplified version of targeted sampling\". In the section introducing it no credit is given to previous work, and it is mentioned as a contribution of the paper in the introduction. Goodman et al is only mentioned much later in the conclusion. To avoid such misunderstandings, add this statement where you introduce the top-k strategy.\n\nI believe my review and comments have explicit recommendations for experiments and revisions to the text.", "-\"First off, let us point out that there are no mixed roll-ins in any of the experiments.\":\n\nIndeed, thanks for the clarification. But see my comment about the algorithmic description not really stating the option of a reference rollin policy. 
In any case, the mixed rollins in their extreme settings cover both reference and learned.\n\n- \"Second, while L2S theory indeed tells us that a learned roll-in should always be preferred to a reference one, on some datasets practitioners observe the reverse\"\n\nIndeed. If the paper was about an algorithm for a particular dataset/task, that would be OK. But SEARNN is claimed to be widely applicable, thus I expect it to be consistently defined across tasks when compared to previous work, but this is not the case.\n\n- \"Third, the value of the mix-in probability for our roll-outs (0.5) is reported in the caption underneath Table 1. It is the same for all datasets.\":\n\nThanks for the clarification, couldn't have known that this is the case for all experiments.\n\n- \"Finally, we do indeed use an architecture that is different from that of MIXER. This information is reported in the main text (see Key takeaways in Section 6)\":\n\nYes, but earlier it reads: \"For fair comparison to related methods, we use a similar architecture\"\n\nReplacing an RNN with a CNN is not similar in my opinion. As I wrote in the first part of my response, Bahdanau et al. (2017) run different experiments for this reason.", "I appreciate the long response to my review. Here are some comments to the response:\n\n- \"Theoretical comparisons:\nAs part of this exploration, we provide numerous theoretical points of comparison in the Discussion section (Section 7): ...\":\n\nI guess I wasn't clear in what I meant by theoretical comparisons. For an example for what I meant and think necessary, see section 3 in the paper by Chang et al. 2015 (cited in the paper). Such an analysis is not conducted in the paper. \n\nBesides that, the concluding point: \"we write that SeaRNN can be used on a wider amount of tasks, compared to some related methods.\" On page 9 it reads: \"In contrast, S EA R NN can be used on any task.\" which is a much stronger claim, and is not supported. You should be clear in the paper: which methods and which (kinds of) tasks.\n\n- \"we compare with schedule sampling (Bengio et al, 2015). They use a mixed roll-in, while we use either a reference or a learned roll-in.\":\n\nI don't think this is correct; mixed roll-ins depending on the parameterization span the spectrum from reference to learned, and everything in between.\n\n- \"As we explain in the caption of Table 1, we cannot directly compare SeaRNN to Actor-Critic on the Spelling dataset, because the authors of this paper used a random test dataset and some key hyper parameters are missing from the open source implementation (we obtained this information through private communication with them when first trying to compare our methods).\"\n\nIn this case you should run the open-source implementation on your data splits to obtain comparable results to yours.\n\n- \"We do provide a point of comparison with Actor-Critic (with the same architecture) on a larger scale dataset, namely IWSLT'14 de-en MT.\"\n\nIn the text of the paper it reads \"For fair comparison to related methods, we use a similar architecture\". In any case, you cannot be similar to both the RNN encoder of Bahdanau et al. (2017) and Wiseman and Rush (2016) and the CNN encoder of Ranzato et al. (2016) unless you try both, which is in fact what Bahdanau et al. (2017) did. I expect you to do the same here.\n\n- \"Finally, we conducted thorough experiments with scheduled sampling on the NMT dataset. 
Unfortunately, we could not obtain any significant improvement over MLE...\"\n\nIndeed, apologies for having missed this point; I was looking for it in the OCR experiments section. However, the explanation given is convincing: \"Another possibility is that the underlying optimization problem becomes harder when using a learned rather than a reference roll-in.\" If this is the case, then it should have been a problem for SEARNN which obtains its best results with learned roll-ins on OCR and spelling? \n\nAnd related point: you specify the algorithm in the appendix saying:\n\"Run the RNN until t th cell with φ(x b ) as initial state by following the roll-in policy\"\nUsing the RNN means learned, or at least mixed, but cannot be reference which wouldn't be using the RNN. It is confusing that your only experimental comparison with other methods doesn't use the rollins stated in the algorithmic description. \n\n\n", "3. Novelty\n\n\"The two losses introduced are not really new.\"\n\"The top-k sampling method is essentially the same as the targeted exploration of Goodman et al. (2016) which the authors cite. Thus it is not a novel contribution.\"\n\nWe show that these assessments are the result of misunderstandings (in some cases we simply do not make novelty claims, and in others what we propose is actually different from the referred techniques).\n\nFirst, we want to reiterate the difference between a classical classification loss and a cost-sensitive loss, as these notions are fundamental to the whole field of L2S research. In a cost-sensitive classification problem, rather than having access to a single ground-truth output, one has access to a vector of costs, with one cost associated with each possible token. This unusual setup requires adapted losses. In particular, we are not aware of any other RNN training techniques which uses cost-sensitive losses, besides SeaRNN.\n\nSecond, concerning the log-loss (LL), we explain that it indeed shares the structure of MLE, and replaces constant experts by dynamic ones (see ‘Log-loss’ in Section 4). We also point out that this technique is not new, even in the context of RNN training (see our reference to Ballesteros et al, (2016) in 'L2S-inspired approaches' in the Discussion section at the bottom of page 9). We do not make novelty claims in that respect.\nHowever, to our knowledge this is the first algorithm which uses the scores obtained by roll-outs to determine the value of the dynamic expert. This is the aspect of the loss which we consider to be novel.\nIf our claim is unclear we can definitely rephrase it in a way that the reviewer deems more satisfactory.\n\nThird, we are not sure we understand the remark of the reviewer concerning the KL loss. In our setting, the KL divergence and the cross-entropy are indeed equivalent since the additional entropy term in XENT is constant with respect to the parameters of the model. We decided to call it KL as we saw this loss term as a divergence between two probability distributions (and indeed we tried several other divergences, see Appendix C).\nMLE can be thought of as a cross-entropy term between the model output and a Dirac distribution centered on the ground truth target.\nHowever, the difference in our setup is that we have access to a richer, non-Dirac target distribution, which we derive from the cost vectors. 
The novelty in our loss resides in the application of the KL divergence (or equivalently cross-entropy) in a situation where one has access to a full probabilistic distribution over the tokens in the vocabulary instead of a single target output.\n\nFinally, the top-k strategy is a simplified version of targeted sampling. Indeed, none of the strategies we test (uniform, topk, policy sampling and biased policy sampling) are novel. We acknowledge this in the main text of the paper and we make no claims about novelty with respect to these strategies.\n\nConclusion\nWe believe we have alleviated a number of concerns and clarified some misunderstandings which lead to unfavorable assessments about the paper. In light of these clarifications, we hope the reviewer will consider adjusting their evaluation accordingly, and helping us improve the paper through suggestions.\n\n\nReferences:\nDzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, and Yoshua Bengio. An actor-critic algorithm for sequence prediction. In ICLR, 2017.\nMiguel Ballesteros, Yoav Goldberg, Chris Dyer, and Noah A Smith. Training with exploration improves a greedy stack-LSTM parser. In EMNLP, 2016.\nSamy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. Scheduled sampling for sequence prediction with recurrent neural networks. In NIPS, 2015.\nKai-Wei Chang, Akshay Krishnamurthy, Alekh Agarwal, Hal Daumé, III, and John Langford. Learning to search better than your teacher. In ICML, 2015.\nHal Daumé, III, John Langford, and Daniel Marcu. Search-based structured prediction. Machine Learning, 2009.\nMarc’Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. Sequence level training with recurrent neural networks. In ICLR, 2016.\nWen Sun, Arun Venkatraman, Geoffrey J. Gordon, Byron Boots, and J. Andrew Bagnell. Deeply aggrevated: Differentiable imitation learning for sequential prediction. In ICML, 2017.\nSam Wiseman and Alexander M Rush. Sequence-to-sequence learning as beam-search optimization. In EMNLP, 2016.", "2. Experimental details\n\n\"A lot of important experimental details are in the appendices and they differ among experiments. »\n\"For example, while mixed rollins are used in most experiments, reference rollins are used in MT, which is odd since it is a bad option theoretically.\"\n\"Also, no details are given on how the mixing in the rollouts was tuned.\"\n\"Finally, in the NMT comparison while it is stated that similar architecture is used in order to compare fairly against previous work, this is not the case eventually, as it is acknowledged at least in the case of MIXER.\"\n\nThe reviewer points out that our experimental setup is unclear. We disagree with that statement and show in the following that all of the relevant information can be found in the main text of the paper and that differences are underlined and analyzed in details. We will strive to present this information more clearly.\n\nFirst off, let us point out that there are no mixed roll-ins in any of the experiments. We compare reference and learned roll-ins for OCR and Spelling (see Table 1 and the caption of Table 2), and use reference roll-ins for NMT, as stated at the beginning of Section 6 (in the middle of page 8) and at the end of this section (see bottom of page 8).\n\nSecond, while L2S theory indeed tells us that a learned roll-in should always be preferred to a reference one, on some datasets practitioners observe the reverse. 
We confirmed this with the authors of the SEARN paper (Daumé et al, 2009) through private communication.\nWe provide potential explanations in the main text of the paper (see Key takeaways in Section 6, bottom of page 8), namely:\n\n- either our reference policy is too weak to provide good enough training signal\n- or the problem obtained with a learned roll-in might be harder to optimize for than its equivalent obtained with a reference roll-in -- an issue which is overlooked by classical L2S theory.\n\nWe also explain what choice of hyper parameter we advocate, including resorting to a reference roll-in when a learned roll-in does not lead to good performance (see 'Traditional L2S approches', Section 7, top of page 9).\nWe therefore argue that this choice in hyper parameter is made explicit and is motivated in the paper.\n\nThird, the value of the mix-in probability for our roll-outs (0.5) is reported in the caption underneath Table 1. It is the same for all datasets. We do not report any tuning of this value because we did not perform any. We followed Chang et al (2015), where the authors indicate that their algorithm is not sensitive to this value, so we did not feel the need to optimize for it. We will add this reasoning to the paper to explain the value we took.\n\nFinally, we do indeed use an architecture that is different from that of MIXER. This information is reported in the main text (see Key takeaways in Section 6), as we are explicit about the architectures of related methods. The reason for this difference is that we decided to reuse the architecture used both by BSO and by Actor-Critic. We have followed their setup as closely as possible, and are not aware of any meaningful difference with our own. If our presentation is not clear enough, we are happy to add this information at any place the reviewer sees fit.\n\nOnce again, we stress that all of this information is presented *in the main text*, and discussed at length. The only thing present in the appendix is an expanded version of the harder optimization problem hypothesis we make in 'Key takeaways' in Section 6.", "Reviewer2 provides an in-depth and thoughtful review. They express concerns about three potential issues: a lack of comparison to related methods, unclear experiments and erroneous novelty claims. We believe these criticisms stem for the most part from several key misunderstandings about the presented method and the claims made in the paper.\nIn the following, we make explicit these misunderstandings and we strive to clarify them.\nWe hope that reviewer 2 can help us improve the paper by pointing out the specific parts that they found confusing.\n\n1. How does SeaRNN relate to other IL-inspired algorithms?\n\n\"The key argument of the paper is that SEARNN is a better IL-inspired algorithm than the previously proposed ones. However there is no direct comparison either theoretical or empirical against them.\"\n\nWe disagree with this statement and show in the following that the paper does indeed contain both theoretical and empirical comparisons, including a section (Discussion, Section 7) about theoretical comparison to related methods and a large-scale experiment where the performance of various methods is compared.\n\nFirst off, the main aim of the paper is to introduce a novel IL-inspired method for training RNNs which alleviates the issues associated with traditional MLE training. We then contrast different methods and explore their pros and cons. 
These concrete elements of comparison, both theoretical and empirical, lead us to believe that SeaRNN is indeed well-positioned.\n\nTheoretical comparisons:\nAs part of this exploration, we provide numerous theoretical points of comparison in the Discussion section (Section 7):\n\n- we compare with schedule sampling (Bengio et al, 2015). They use a mixed roll-in, while we use either a reference or a learned roll-in. Furthermore, SeaRNN leverages roll-outs for estimation and custom losses, while schedule sampling simply uses the MLE loss.\n- we underline an important difference between SeaRNN and most related methods (be they RL-inspired e.g. MIXER (Ranzato et al, 2016) and Actor-Critic (Bahdanau et al, 2017) or IL-inspired e.g. BSO (Wiseman et al, 2016)): the fact that since the training signal from their loss is quite sparse, they have to use warm starting, whereas SeaRNN does not.\n- we remark that BSO requires being able to compute the evaluation metric on unfinished sequences (see the definition of the associated loss in (Wiseman and Rush, 2016, Section 4.1)). While this is technically possible for BLEU, the scores obtained this way are arguably not meaningful. In contrast, SeaRNN always computes scores on full sequences.\n- we explain that some IL-inspired methods (see Ballesteros et al, 2016 and Sun et al, 2017) require a free cost-to-go oracle, whereas SeaRNN uses roll-outs for exploration and is thus more widely applicable, albeit at a higher computational cost.\n- Incidentally, the last two points explain why we write that SeaRNN can be used on a wider amount of tasks, compared to some related methods.\n\nEmpirical comparisons:\n\"In the examples on spelling using the dataset of Bahdanau et al. 2017, no comparison is made against their actor-critic method. Furthermore, given its simplicity, I would expect a comparison against scheduled sampling. »\n\nAs we explain in the caption of Table 1, we cannot directly compare SeaRNN to Actor-Critic on the Spelling dataset, because the authors of this paper used a random test dataset and some key hyper parameters are missing from the open source implementation (we obtained this information through private communication with them when first trying to compare our methods).\nWe do provide a point of comparison with Actor-Critic (with the same architecture) on a larger scale dataset, namely IWSLT'14 de-en MT.\nFinally, we conducted thorough experiments with scheduled sampling on the NMT dataset. Unfortunately, we could not obtain any significant improvement over MLE, even with a careful schedule proposed by the authors of the scheduled sampling paper through private communication (note that no positive results on NMT were reported in the original paper either). This is reported in the main text of the paper (see Key takeaways in Section 6, at the bottom of page 8).\nIf the reviewer believes this would add to the paper, we will of course run this algorithm on the OCR and Spelling datasets and report the obtained results (we have not conducted these experiments yet).\n\nAll told, we believe our paper does present theoretical and empirical comparisons to related methods. We have already conducted and reported on some of the experiments the reviewer asks for.", "We thank the reviewers for their thorough and detailed evaluations. 
We are grateful for all the positive feedback given by the reviewers and their suggestions.\nReviewer 2 expresses some concerns about the paper which seem due to several misunderstandings; we clarify these in a specific response." ]
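To make the "KL vs. XENT" point debated in the exchange above concrete, here is an illustrative sketch (not the authors' code) of a cost-sensitive loss: the per-token roll-out costs are mapped to a target distribution, here via a softmax of negative costs, which is one plausible choice rather than the paper's exact mapping, and the model's predictive distribution is pulled toward it. Since the target does not depend on the model parameters, minimizing this KL term is equivalent to minimizing the corresponding cross-entropy.

```python
import torch
import torch.nn.functional as F

def cost_sensitive_kl_loss(logits, costs, temperature=1.0):
    """logits: model scores over candidate tokens, shape (vocab,).
    costs:  roll-out cost of each candidate token, shape (vocab,);
            lower cost means the token led to a better completed sequence."""
    target = F.softmax(-costs / temperature, dim=-1)  # assumed cost-to-distribution mapping
    log_probs = F.log_softmax(logits, dim=-1)
    # KL(target || model) differs from this cross-entropy only by the
    # (parameter-independent) entropy of the target distribution.
    return -(target * log_probs).sum()
```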
[ -1, 8, 5, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 4, 5, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "Syk3BIa7M", "iclr_2018_HkUR_y-RZ", "iclr_2018_HkUR_y-RZ", "iclr_2018_HkUR_y-RZ", "HkMLxA5Mf", "ByamFTczz", "SyoNr6cMG", "iclr_2018_HkUR_y-RZ", "BktgJQZMz", "B1hvAzWMf", "S1YZ0GbzG", "B1hvAzWMf", "S1YZ0GbzG", "S1rKPVcgz", "iclr_2018_HkUR_y-RZ" ]
iclr_2018_SyZipzbCb
Distributed Distributional Deterministic Policy Gradients
This work adopts the very successful distributional perspective on reinforcement learning and adapts it to the continuous control setting. We combine this within a distributed framework for off-policy learning in order to develop what we call the Distributed Distributional Deep Deterministic Policy Gradient algorithm, D4PG. We also combine this technique with a number of additional, simple improvements such as the use of N-step returns and prioritized experience replay. Experimentally we examine the contribution of each of these individual components, and show how they interact, as well as their combined contributions. Our results show that across a wide variety of simple control tasks, difficult manipulation tasks, and a set of hard obstacle-based locomotion tasks the D4PG algorithm achieves state of the art performance.
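Since the N-step return comes up repeatedly in the reviews and responses below, here is a minimal sketch of the uncorrected N-step target it refers to (illustrative only; the function and argument names are not from the paper): rewards stored with a sub-trajectory are discounted and summed, then bootstrapped with the target critic's value at the N-th state, with no off-policy correction such as Retrace.

```python
def n_step_target(rewards, bootstrap_value, gamma=0.99):
    """rewards: [r_0, ..., r_{N-1}] collected along the stored sub-trajectory.
    bootstrap_value: value of the target critic at the N-th state,
    e.g. Q_target(s_N, pi_target(s_N)) in a DDPG-style agent."""
    target = bootstrap_value
    for r in reversed(rewards):
        # unrolls to sum_i gamma**i * r_i + gamma**N * bootstrap_value
        target = r + gamma * target
    return target
```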
accepted-poster-papers
As identified by most reviewers, this paper does a very thorough empirical evaluation of a relatively straightforward combination of known techniques for distributed RL. The work also builds on "Distributed prioritized experience replay", which could be noted more prominently in the introduction.
train
[ "Byqj1QtlM", "r1Wcz1clz", "Bk3bXW5gM", "HJ9K0RXEf", "ryuoGzKMz", "r14lUMKMf", "HJ3eWMFzG", "SkfngGYMG", "ryjnMgWZz", "HJkIEozeM", "HJt1T-2R-" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "public", "author", "author", "author", "author", "public", "public", "public" ]
[ "A DeepRL algorithm is presented that represents distributions over Q values, as applied to DDPG,\nand in conjunction with distributed evaluation across multiple actors, prioritized experience replay, and \nN-step look-aheads. The algorithm is called Distributed Distributional Deep Deterministic Policy Gradient algorithm, D4PG.\nSOTA results are generated for a number of challenging continuous domain learning problems,\nas compared to benchmarks that include DDPG and PPO, in terms of wall-clock time, and also (most often) in terms\nof sample efficiency.\n\npros/cons\n+ the paper provides a thorough investigation of the distributional approach, as applied to difficult continuous\n action problems, and in conjunction with a set of other improvements (with ablation tests)\n- the story is a bit mixed in terms of the benefits, as compared to the non-distributional approach, D3PG\n- it is not clear which of the baselines are covered in detail in the cited paper:\n \"Anonymous. Distributed prioritized experience replay. In submission, 2017.\", \n i.e., should readers assume that D3PG already exists and is attributable to this other submission?\n\nOverall, I believe that the community will find this to be interesting work.\n\nIs a video of the results available?\n\nIt seems that the distributional model often does not make much of a difference, \nas compared to D3PG non-prioritized. However, sometimes it does make a big difference, i.e., 3D parkour; acrobot.\nDo the examples where it yields the largest payoff share a particular characteristic?\n\nThe benefit of the distributional models is quite different between the 1-step and 5-step versions. Any ideas why?\n\nOccasionally, D4PG with N=1 fails very badly, e.g., fish, manipulator (bring ball), swimmer.\nWhy would that be? Shouldn't it do at least as well as D3PG in general?\n\nHow many atoms are used for the categorical representation?\nAs many as [Bellemare et al.], i.e., 51 ?\nHow much \"resolution\" is necessary here in order to gain most of the benefits of the distributional representation?\n\nAs far as I understand, V_min and V_max are not the global values, but are specific to the current distribution.\nHence the need for the projection. Is that correct?\n\nWould increasing the exploration noise result in a larger benefit for the distributional approach?\n\nFigure 2: DDPG performs suprisingly poorly in most examples. Any comments on this,\nor is DDPG best avoided in normal circumstances for continuous problems? :-)\n\nIs the humanoid stand so easy because of large (or unlimited) torque limits?\n\nThe wall-clock times are for a cluster with K=32 cores for Figure 1?\n\n\"we utilize a network architecture as specified in Figure 1 which processes the terrain info in order to reduce its dimensionality\"\nFigure 1 provides no information about the reduced dimensionality of the terrain representation, unless I am somehow failing to see this.\n\n\"the full critic architecture is completed by attaching a critic head as defined in Section A\"\nI could find no further documenation in the paper with regard to the \"head\" or a separate critic for the \"head\".\nIt is not clear to me why multiple critics are needed.\n\nDo you have an intuition as to why prioritized replay might be reducing performance in many cases?\n", "The paper investigates a number of additions to DDPG algorithm and their effect on performance. 
The additions investigated are distributional Bellman updates, N-step returns, and prioritized experience replay.\n\nThe paper does a good job of analyzing these effects on a wide range of continuous control tasks, from the standard benchmark suite, to hand manipulation, to complex terrain locomotion, and I believe these results are valuable to the community.\n\nHowever, I have a concern about the soundness of using N-step returns in the DDPG setting. When a sequence of length N is sampled from the replay buffer and used to calculate the N-step return, this sequence was generated according to a particular policy. As a result, the experience is non-stationary - for the same state-action pair, early iterations of the algorithm will produce structurally different (not just due to stochasticity) N-step returns, because the policy used to generate those N steps has changed between algorithm iterations. So it seems to me the authors are using off-policy updates where strictly on-policy updates should be used. I would like some clarification from the authors on this point, and if it is indeed the case, to bring attention to this point in the final manuscript.\n\nIt would also be useful to evaluate the effect of N for values other than 1 and 5, especially given the significance this addition has on performance. I can believe N-step returns are useful, possibly due to effectively enlarging the simulation timestep, but it would be good to know at which point it becomes detrimental.\n\nI also believe \"Distributional Policy Gradients\" is an overly broad title for this submission, as this work still relies on off-policy updates and does not tackle the problem of marrying distributional updates with on-policy methods. \"Distributional DDPG\" or \"Distributional Actor-Critic\" or a variant could perhaps be fairer title choices?\n\nAside from these concerns, the lack of originality of the contributions makes it difficult to highly recommend the paper. Nonetheless, I do believe the experimental evaluation is well-conducted and would be of interest to the ICLR community. ", "\nComment: The paper proposes a simple extension to DDPG that uses a distributional Bellman operator for critic updates, and introduces two simple modifications, namely the use of N-step returns and parallelized evaluation. The method is evaluated on a wide variety of control and robotic tasks. \n\nIn general, the paper is well written and organised. However, I have the following major concerns regarding the quality of the paper:\n\n- The proposal, D4PG, is quite straightforward: it simply uses the idea of the distributional value function by Bellemare et al. (previously used in DQN). The two modifications are also simple and well-known techniques. It would be nicer if the description in Section 3 were less straightforward, giving more justification and analysis of why and how distributional updates are necessary in the context of policy search methods like DDPG. \n\n- A positive side of the paper is a large set of evaluations on many different control and robotic tasks. For many tasks, D4PG performs better than the variant that does not use distributional updates (D3PG), however not by much. There are some tasks showing no difference. On the other hand, the choice of N=5 in the comparisons is hard to understand and lacks further experimental justification. Different settings and new performance metrics (e.g. 
data efficiency, number of episodes in total) might also reveal more properties of the proposed methods.\n\n\n\n* Other minor comments:\n\n- Algorithm 1 consists of two parts but there are connections between them. It might be confusing for readers who are not familiar with the actor-critic framework.\n\n- It would be nicer if all expectation operators in Section 3 came with their corresponding distributions. \n\n- page 2, second paragraph: typos in \"hence my require less samples to learn\"\n\n- it might be better if the arXiv references were changed to the relevant publications in archival conference proceedings: the work by Marc G. Bellemare at ICML 2017", "Thanks for sharing a video! \n\nI have a few questions - the performance looks poorer compared to the original PPO results. Also I've noticed that the more challenging environments aren't present. Did you investigate the reason for the poorer behavior? Can PPO achieve a larger final reward, but be slower at the start? Or did you run more experiments with PPO and have a larger choice of the best runs?\n\nIt would be very good to have this analysis in the paper - to compare the maximum possible reward for PPO and D4PG and, in addition, the distribution of its final values for training with different seeds. How do the maximum rewards compare? Mean? Variance? Not only at the same number of steps as in the paper.", "Thank you!\n\nAs to the baselines, we use the same framework for distributing computation and prioritization as in the cited paper (Anonymous. Distributed prioritized experience replay, also submitted to ICLR). However this other work focuses primarily on discrete-action tasks.\n\nVideos of the parkour performance can be found at https://www.youtube.com/playlist?list=PLFU7BiIwAjPDqsIL9OLm1z7_RXZA1Jyfj.\n\nWe have also found that the distributional model helps most in harder, higher-dimensional tasks. The main characteristic these tasks seem to share is the time/data required to solve the task, potentially due to the complexity of learning the Q-function.\n\nWe found in general, for both distributional and non-distributional variants, that the 5-step version provided better results. Although this is not fully corrected for (see answers to the above reviewers), we found this to experimentally provide quite a bit of benefit to all variations of the algorithm.\n\nWe used 51 atoms across all tasks except for the humanoid parkour task, which used 101. The level of resolution necessary will depend on the problem under consideration, and is controlled by the combination of the number of atoms as well as the V_{min,max} values; however we found this to be relatively robust. Here we changed the number of atoms for the humanoid task in order to keep the resolution roughly consistent with the other tasks.\n\nThe V_min and V_max values are global values that bound the support of the distribution. However, you are correct that this is what requires the projection. When applying the Bellman operator to a distribution it will more than likely lie outside the bounds given by V_min/V_max, so we project in order to ensure that our distributions are always within these bounds. Again, we also found these values to be relatively robust, and we generally set them given knowledge of the maximum immediate reward of the system.\n\nWe did not extensively experiment with increasing the exploration noise, but from preliminary experiments we saw that the algorithm was fairly robust to this value. 
Deviating from the values we used did not significantly hurt nor hinder the algorithm’s performance.\n\nThe poor performance of DDPG in these experiments is primarily due to the fact that DDPG is quite slow to learn. For the easier control suite tasks DDPG is actually a feasible algorithm if given enough time. However for the harder tasks (any of the humanoid tasks, manipulator, and parkour tasks) DDPG would take much too long to work effectively. Finally, one of the bigger problems DDPG has is that it can exhibit quite unstable learning which is not exhibited by D4PG.\n\nThe easy-ness of the humanoid stand task is more due to the fact that it has less complicated motions to make than any of the other humanoid tasks.\n\nThe wall-clock times are for 32 cores on separate machines. We found communication across machines to be fast enough that having them all be on the same machine was not a requirement.\n\nWe apologize that the description of the network architecture was poorly explained and will correct it. The networks have two branches, one of which process the the terrain info to produce a lower-dimensional hidden state before combining it with the proprioceptive information. Utilizing this second branch to process the proprioceptive information and reduce it to a smaller number of hidden units is what we refer to as “reducing its dimensionality” however we will explain this better.\n\nWe will also explain critic architecture and what we refer to as “heads” further. Here we refer to the “distributional output” component of the network as a head. In this way we can replace the Categorical output with a Mixture of Gaussians output as described in section A. By “head” we only mean this final component which takes the last set of hidden units, passes them through a linear layer, and outputs the parameters of a distribution.\n", "We have found that D4PG was very stable and robust to its hyperparameter settings. We generally found that carefully tuning the learning rates was unnecessary, and this also allowed us to eliminate the Ornstein-Uhlenbeck noise.\n\nAs to the results on parkour: we have not yet re-run the experiments on the humanoid, however for the 2d-walker these results are approximately at their maximum. And we can see that D4PG outperforms PPO in this setting.\n\nWith regards to stability it is well known the DDPG can be quite unstable. As noted above we don’t really see any of these issues with D4PG and it is in fact very stable, both in terms of its behavior during a run, across different seeds, and across different settings of hyperparameters. We haven’t significantly experimented with scaling the number of actors for D4PG, however while we do tend to see performance improvements as we increase the number of workers, we kept this number fixed with the number used in PPO.\n\nFinally, videos of the parkour performance can be found at: https://www.youtube.com/playlist?list=PLFU7BiIwAjPDqsIL9OLm1z7_RXZA1Jyfj.\n", "Thanks for the helpful review!\n\nThe reason for our use of N-step returns is it allows us to compute the returns as soon as they are collected and insert into replay without storing full sequences. This is done for efficiency reasons. For N>1 this ignores the difference between the behavior and target policies. 
This could be corrected using an off-policy correction such as Retrace (Safe and Efficient Off-Policy Reinforcement Learning, Munos et al., 2016) but that would require storing full trajectories.\n\nHowever, for reasonably small N this difference is not great, which is what we show in our experiments. With N much larger than the value of 5, we see a degradation in performance for exactly this reason. We include further discussion of exactly this point.", "Thank you for the review!\n\nAs to the necessity of the distributional updates, the DPG algorithm relies heavily on the accuracy of the value function estimate due to the fact that the gradient computed under the DPG theorem is based only on gradients of the policy pi and gradients of the Q-function. By better estimating the Q-function we directly impact the accuracy of the policy gradient. We will include further discussion of this.\n\nIt is true that the distributional version (D4PG) does not always out-perform the non-distributional version (D3PG). However this is typically on easier tasks. In the control suite of tasks the distributional version significantly out-performs on the acrobot, humanoid, and swimmer set of tasks. For manipulation tasks this holds for the hardest pickup and orient task. And finally for all parkour tasks. So for tasks that are already somewhat easy to solve there are limited immediate gains, but for harder tasks this update tends to help (and help significantly for the parkour tasks).\n\nThe choice of a higher N is suggested by algorithms such as A3C and Rainbow, among others. Note that the Rainbow algorithm (Rainbow: Combining Improvements in Deep Reinforcement Learning, Hessel et al, 2017) utilizes an off-policy Q-learning update with uncorrected n-step returns, in a very similar way to that used by D4PG. In order to fully correct for this we should be using an off-policy correction, which we have not used for reasons of efficiency (see our response to the next reviewer). However, experimentally we have shown that this minor modification helps quite significantly and can be used directly in any off-policy algorithm. In all of our experiments and across both distributional and non-distributional updates it tends to be better to use the higher N. We did find that increasing N much higher than N>5 tended to degrade performance, which makes sense as this would be more off-policy. We will include further discussion of this aspect of the algorithm.\n", "I suppose it should have been questions not to my comment but to the authors of the paper?\n\nAlso I'd like to refresh my question to authors - do you plan to release a videos showing parkour and robotic hand training results? It's almost a standard for RL research and lack of them can cause some unnecessary suspects and questions. \n\nIn addition videos can show quality of trained policies. \n\n", "1) I had seen some convergence issues when I implemented something similar. Did you face anything similar? How important was the power of the neural approximator and the size of the distribution support set (in case of multinomial distribution)?\n\n2) Does the extra 'distributed' improve speed or quality of convergence?", "Hi,\n\nCan you clarify a few questions about experimental set-up and results? In section 4 Result you've describe a few important design choices:\n \n1) You've chosen a fixed Gaussian noise for exploration and made a statement that Ornstein-Uhlenbeck didn't add to performance in your experiments. which contradicts a known results for DDPG. 
Can you provide comparison plots and describe an experimental set-up supporting this statement for D4PG?\n2) You've chosen the same learning rate for the actor and the critic, which is a bit different from common practice for DDPG, where the critic usually has a learning rate an order of magnitude higher than the actor. What is the justification for this choice? Can you provide some experimental results showing the performance of a few different choices of learning rates for the actor and critic?\n\nIn Section 4.3 (Parkour) you showed results of the evaluation of different variants of D4PG and D3PG and a comparison vs PPO. But it's unclear what performance level they correspond to:\n\n3) Can you provide a few videos of the final performance of the best variants of D4PG and PPO for Walker2d and Humanoid?\n4) You compare with PPO in wall time and number of actor steps. But these are not the only possible metrics for comparison. Can you provide some insights on:\n a) Maximum rewards: What is the maximum performance of D4PG vs PPO? How do the maximum rewards achieved by these 2 algorithms compare if training is run for longer?\n b) Stability: PPO is known to be a very stable algorithm, while DDPG is much more sensitive to the choice of hyperparameters. How does the stability of D4PG training compare to PPO?\n c) Scalability: How well does D4PG scale with an increasing number of parallel workers in comparison with PPO?\n5) Do you plan to release your implementation of D4PG?" ]
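The projection step mentioned in the author responses above (keeping the categorical value distribution on a fixed support [V_min, V_max] after applying the Bellman operator) follows the same recipe as Bellemare et al.'s categorical projection; the sketch below is a generic reconstruction rather than the authors' implementation, and the support bounds and atom count are placeholders (the responses report 51 atoms, or 101 for humanoid parkour).

```python
import numpy as np

def project_to_support(target_probs, accumulated_reward, discount,
                       v_min=-150.0, v_max=150.0, n_atoms=51):
    """Project the shifted/scaled categorical distribution back onto the fixed atoms.

    target_probs: probabilities over the atoms from the target critic, shape (n_atoms,).
    accumulated_reward: the (possibly N-step) reward summed before bootstrapping.
    discount: discount applied to the bootstrapped distribution (gamma**N for N steps).
    """
    atoms = np.linspace(v_min, v_max, n_atoms)
    delta = (v_max - v_min) / (n_atoms - 1)
    shifted = np.clip(accumulated_reward + discount * atoms, v_min, v_max)
    b = (shifted - v_min) / delta              # fractional index of each shifted atom
    lower = np.floor(b).astype(int)
    upper = np.ceil(b).astype(int)
    projected = np.zeros(n_atoms)
    # split each shifted atom's probability mass between its two neighbouring atoms
    np.add.at(projected, lower, target_probs * (upper - b))
    np.add.at(projected, upper, target_probs * (b - lower))
    exact = lower == upper                      # shifted atom landed exactly on the grid
    np.add.at(projected, lower[exact], target_probs[exact])
    return projected
```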
[ 9, 6, 5, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 5, 4, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_SyZipzbCb", "iclr_2018_SyZipzbCb", "iclr_2018_SyZipzbCb", "r14lUMKMf", "Byqj1QtlM", "HJt1T-2R-", "r1Wcz1clz", "Bk3bXW5gM", "HJkIEozeM", "HJt1T-2R-", "iclr_2018_SyZipzbCb" ]
iclr_2018_ry80wMW0W
Hierarchical Subtask Discovery with Non-Negative Matrix Factorization
Hierarchical reinforcement learning methods offer a powerful means of planning flexible behavior in complicated domains. However, learning an appropriate hierarchical decomposition of a domain into subtasks remains a substantial challenge. We present a novel algorithm for subtask discovery, based on the recently introduced multitask linearly-solvable Markov decision process (MLMDP) framework. The MLMDP can perform never-before-seen tasks by representing them as a linear combination of a previously learned basis set of tasks. In this setting, the subtask discovery problem can naturally be posed as finding an optimal low-rank approximation of the set of tasks the agent will face in a domain. We use non-negative matrix factorization to discover this minimal basis set of tasks, and show that the technique learns intuitive decompositions in a variety of domains. Our method has several qualitatively desirable features: it is not limited to learning subtasks with single goal states, instead learning distributed patterns of preferred states; it learns qualitatively different hierarchical decompositions in the same domain depending on the ensemble of tasks the agent will face; and it may be straightforwardly iterated to obtain deeper hierarchical decompositions.
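The factorization step described in this abstract can be sketched in a few lines (illustrative only: the Z matrix here is random just so the snippet runs, and the rank k is a free choice; the paper's own optimization details may differ). Columns of Z hold the desirability functions of a basis set of tasks; a non-negative factorization Z ≈ D W with a KL-divergence objective yields candidate subtasks (columns of D) and per-task mixing weights (rows of W).

```python
import numpy as np
from sklearn.decomposition import NMF

# Z: (num_states, num_tasks) desirability matrix; random placeholder here,
# in the paper's setting it would come from solving the base LMDP tasks.
rng = np.random.default_rng(0)
Z = rng.random((100, 20)) + 1e-3

k = 4  # number of subtasks to discover (a modelling choice)
nmf = NMF(n_components=k, solver="mu", beta_loss="kullback-leibler",
          init="random", max_iter=500, random_state=0)
D = nmf.fit_transform(Z)   # (num_states, k): subtask desirability functions
W = nmf.components_        # (k, num_tasks): mixing weights per base task
Z_approx = D @ W           # low-rank, non-negative approximation of Z
```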
accepted-poster-papers
Overall this paper seems to make an interesting contribution to the problem of subtask discovery, but unfortunately this only works in a tabular setting, which is quite limiting.
val
[ "H16Hn-6lf", "HJo-rvwWz", "BkITkWpZM", "ry1tEdTmM", "HJbgmvpXf", "HJf6zwpmf", "HJi81v6mG", "HJ2ekDp7G" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "This paper proposes a formulation for discovering subtasks in Linearly-solvable MDPs. The idea is to decompose the optimal value function into a fixed set of sub value functions (each corresponding to a subtask) in a way that they best approximate (e.g. in a KL-divergence sense) the original value.\n\nAutomatically discovering hierarchies in planning/RL problems is an important problem that may provide important benefits especially in multi-task environments. In that sense, this paper makes a reasonable contribution to that goal for multitask LMDPs. The simulations also show that the discovered hierarchy can be interpreted. Although the contribution is a methodological one, from an empirical standpoint, it may be interesting to provide further evidence of the benefits of the proposed approach. Overall, it would also be useful to provide a short paragraph about similarities to the literature on discovering hierarchies in MDPs. \n\nA few other comments and questions: \n\n- This may be a fairly naive question but given your text I'm under the impression that the goal in LMDPs is to find z(s) for all states (and Z in the multitask formulation). Then, your formulation for discovery subtasks seems to assume that Z is given. Does that mean that the LMDPs must first be solved and only then can subtasks be discovered? (The first sentence in the introduction seems to imply that there's hope of faster learning by doing hierarchical decomposition).\n\n- You motivate your approach (Section 3) using a max-variance criterion (as in PCA), yet your formulation actually uses the KL-divergence. Are these equivalent objectives in this case?\n\n\nOther (minor) comments: \n\n- In Section it would be good to define V(s) as well as 'i' in q_i (it's easy to mistake it for an index). ", "The paper builds upon the work of Saxe et al on multitask LMDP and studies how to automatically discover useful subtasks. The key idea is to perform nonnegative matrix factorization on the desirability matrix Z to uncover the task basis.\n\nThe paper does a good job in illustrating step by step how the proposed algorithms work in simple problems. In my opinion, however, the paper falls short on two particular aspects that needs further development:\n\n(1) As far as I can tell, in all the experiments the matrix Z is computed from the MDP specification. If we adopt the proposed algorithm in an actual RL setting, however, we will need to estimate Z from data since the MDP specification is not available. I would like to see a detailed discussion on how this matrix can be estimated and also see some RL experiment results. \n\n(2) If I understand correctly, the row dimension of Z is equal to the size of the state space, so the algorithm can only be applied to tabular problem as-is. I think it is important to come up with variants of the algorithm that can scale to large state spaces.\n\nIn addition, I would encourage the authors to discuss connection to Machado et al. Despite the very different theoretical foundations, both papers deal with subtask discovery in HRL and appeal to matrix factorization techniques. I would also like to point out that this other paper is in a more complete form as it clears the issues (1) and (2) I raised above. I believe the current paper should also make further development in these two aspects before it is published.\n\nMinor problems:\n- Pg 2, \"... can simply be taken as the resulting Markov chain under a uniformly random policy\". This statement seems problematic. 
The LMDP framework requires that the agent can choose any next-state distribution that has finite KL divergence from the passive dynamics, while in a standard MDP, the possible next-state distribution is always a convex combination of the transition distribution of different actions.\n\nReferences\nMachado et al. ICML 2017. A Laplacian Framework for Option Discovery in Reinforcement Learning.", "The present paper extends a previous work by Saxe et al (2017) that considered multitask learning in RL and proposed a hierarchical learner based on concurrent execution of many actions in parallel. That framework made heavy use of the framework of linearly solvable Markov decision process (LMDP) proposed by Todorov, which allows for closed form solutions of the control due to the linearity of the Bellman optimality equations. The simple form of the solutions allow them to be composed naturally, and to form deep hierarchies through iteration. The framework is restricted to domains where the transitions are fixed but the rewards may change between tasks. A key role is played in the formalism by the so-called ‘free dynamics’ that serves to regularize the action selected. \n\nThe present paper goes beyond Saxe et al. in several ways. First, it renders the process of deep hierarchy formation automatic, by letting the algorithm determine the new passive dynamics at each stage, as well as the subtasks themselves. The process of subtask discovery is done via non-negative matrix factorization, whereby the matrix of desirability functions, determined by the solution of the LMDPs with exponentiated reward. Since the matrix is non-negative, the authors propose a non-negative factorization into a product of non-negative low rank matrices that capture its structure at a more abstract level of detail. A family of optimization criteria for this process are suggested, based on a subclass if Bregman divergences. Interestingly, the subtasks discovered correspond to distributions over states, rather than single states as in many previous approaches. The authors present several demonstrations of the intuitive decomposition achieved. A nice feature of the present framework is that a fully autonomous scheme (given some assumed parameter values) is demonstrated for constructing the full hierarchical decomposition. \n\nI found this to be an interesting approach to hierarchical multitask learning, augmenting a previous approach with several steps leading to increased autonomy, an essential agent for any learning agent. Both the intuition behind the construction and the application to test problem reveal novel insight. The utilization on the analytic framework of LMDP facilitates understanding and efficient algorithms. \n\nI would appreciate the authors’ clarification of several issues. First, the LMDP does not seem to be completely general, so I would appreciate a description of the limitations of this framework. The description of the elbow-joint behavior around eq. (4) was not clear to me, please expand. The authors do not state any direct or indirect extensions – please do so. Please specify how many free parameters the algorithm requires, and what is a reasonable way to select them. Finally, it would be instructive to understand where the algorithm may fail. \n\n", "We would like to thank the reviewer for their efforts and insightful comments. \n\nSimilarities to other hierarchical discovery methods:\nWhere most other approaches have been used to learn a single level of hierarchy. 
Our method is distinctive mainly in being able to be iterated repeatedly, forming deep hierarchies. We have extended our discussion of this point in the paper.\n\nThe paper assumes that the multitask Z matrix is given:\nWe assume that Z is given for a basis set of tasks, not for all possible tasks. There are a number reasons that we believe this is not a limiting assumption: \n1.\tThe basis set of tasks can be a tiny fraction of the set of possible tasks in the space. As an example, suppose we consider tasks with boundary rewards at any of two separate locations in an N dimensional world such that there are N-choose-2 possible tasks (corresponding to tasks like “navigate to point A or B”). We require only an N-dimensional Z matrix containing tasks to navigate to each point individually. The resulting subtasks we uncover will aid in solving all of these N-choose-2 tasks. More generally we might consider tasks in which boundary rewards are placed at three or more locations, etc. To know Z therefore means to know an optimal policy to achieve N of ~2^N tasks in a space.\n2.\tWhile we assume knowledge of Z in the paper, we needn’t have a full N-task Z matrix for the method to applied as is. Suppose we had a smaller Z_hat matrix corresponding to M<N tasks. The method would nevertheless find a compressed representation for those M tasks and in so doing uncover useful subtasks. We assume the full Z matrix in the paper so that uncovered subtasks are intuitive decompositions of the full state space. If we consider Z_hat with tasks drawn only from some subsection of the state space, our method would uncover a compressed representation of just the subspace (in this way our method can be said to be task dependent). Similarly if we consider Z_hat with tasks drawn uniformly over the state space we uncover similar decompositions to those presented in the paper.\n3.\tUltimately, in practice we would like to obtain estimates for Z online from experience in a domain. This could be done either directly through Z-iteration (an off-policy value iteration-like update), or by first building a state transition model. Methods to achieve this sort of estimate are well understood. We believe the results presented in this work are a necessary precursor: before tackling the joint estimation of Z and the hierarchy, we wanted to focus solely on inferring the hierarchy, which is in our view a critically challenging aspect of the problem. Just as LMDPs were first solved in batch mode as an eigenvalue problem before developing online z-iteration, we wanted to formulate and solve the computational problem in the batch setting before turning to online learning. Online learning thus is beyond the scope of this paper, though it is a focus of our current work, and we are excited to see this in the near future. \nWe have revised the paper to make this point more strongly.\n\nEquivalence of maximum-variance and kl-divergence in the matrix factorization:\nWe do not believe that the maximum-variance (\\beta=2) and the kl-divergence (\\beta=1) are mathematically equivalent. Instead the intuition for the decomposition scheme came from a maximum variance like argument, but the kl-divergence cost was ultimately chosen in practice to align with the RL objective cost for LMDPs. In practice the method does not appear to be overly sensitive to the choice of \\beta in the range [1,2]. 
For extreme values of \\beta outside this range results degrade.\n\nThe empirical value of our method:\nThe fact that a task hierarchy can yield efficiency improvements in the multitask setting was shown in (Saxe et al., 2017, ICML). In this instance the hierarchy was, however, hand crafted. More generally, when any ‘good’ hierarchy is provided (one in which new tasks case be represented well within the hierarchy), the learning jump-start is observed.\nA full investigation into the empirical value of the method in the online setting is very interesting, but it is beyond the scope of the present submission.", "Taking the passive dynamics to be the uniform random policy:\nA uniform random policy for the passive dynamics is a common choice in LMDPs which is suitable for a variety of tasks (spatial navigation, trajectory planning, Tower of Hanoi, etc). This is, however, a modeling assumption and there is no requirement that the passive dynamics be derived from the uniform random policy. Choosing some alternate reference policy is possible, and may be suitable for specific tasks. More generally, Todorov (2009) provides a general way of approximating MDPs using LMDPs. We have added citations to a variety of works which show how standard domains have been modeled in the LMDP framework. ", "We would like to thank the reviewer for their efforts and insightful comments. \n\nHow can we estimate Z from data:\nEstimating Z from data in an online RL setting is an important question and is the focus of our current research efforts. The simplest approach is to apply online z-iteration, an off-policy form of value iteration. Z-iteration is a state-based scheme, and because it is off-policy, all tasks in the Z matrix can be updated regardless of which task is currently being executed. An alternative approach is to obtain estimates for the transition model, and then solve for Z. However, we emphasize that the RL setting is not the only possible application of our method: even without online RL, our method allows the automatic discovery of hierarchy and its use in planning in a batch or offline setting. We note that other prior methods like the Bayesian estimation approach of Solway et al., 2014 (“Optimal behavioral hierarchy,” PLoS Comp Bio) or the information theoretic approach of McNamee et al., (“Efficient state-space modularization for planning,”\nNIPS 2016) do not operate in the online RL setting, require full knowledge of the state space and transition model, and operate only in tabular representations. These algorithms have still been fundamental in specifying the computational problem to be solved. Our method goes beyond this prior work most significantly by being able to learn multiple levels of hierarchy, and being more computationally efficient. We believe these features make this work significant for the offline setting. Given that the online RL setting introduces additional variables, we have elected to take a more gradual approach to the development of the method - first ensuring that the new subtask discovery concepts are robust in tabular batch settings before tackling the joint estimation of Z and the hierarchy in high dimensional problems. We are very keen to see an empirical demonstration of online learning in this new framework soon, but it is beyond the scope of the present submission.\n\nVariants of the algorithm to allow for problems with non-tabular state representations:\nThis will certainly be an important extension of the method, and again, is the focus of current work. 
As it stands, current approaches to deep RL use function approximation over the state space, but keep a tabular representation of tasks. Conversely, our approach is, at present, a tabular representation over states but function approximation over tasks. \nWhen the number of tasks one wishes to perform in some space is significantly smaller than the state space, current approaches seem sensible. On the other hand, when the number of tasks we wish to perform is much greater than the number of states, current approaches appear unlikely to scale well. We believe filling in this possibility—demonstrating how function approximation can be safely used to perform many tasks organized hierarchically—is an important contribution. Ultimately, we must find a way to combine the best of both worlds and do function approximation both in the state and task space, but this is beyond the scope of the present submission.\n\nDiscuss comparisons to Machado et al.:\nThank you for the important pointer, we now include a discussion comparing our approach with that in Machado et al (2017). As noted, while both papers are concerned with options discovery, and utilize matrix factorization tools to achieve this, they have different theoretical foundations and yield different results in practice. Their approach to extend the core concepts therein to a linear function approximation scheme is instructive, and will be useful to our current work.\nThere are several notable differences between these methods. Most importantly, our method can be recursively applied to generate arbitrarily deep control hierarchies, while it is not immediately clear (and there has been no empirical demonstration of) how the approach taken in Machado et al. might achieve a deeper hierarchical architecture, or enable immediate generalization to novel tasks.\nThe methods appear to some extent to be orthogonal (with one supporting function approximation techniques in the state space, and the other supporting deep hierarchies and function approximation in the task space), and thus could potentially be profitably combined. ", "Where does the algorithm fail:\nThis is a great question. We believe the method fails most obviously in domains in which there is no latent structure to abstract. For example, if the passive dynamics (at any level) are fully connected and uniform, then the decomposition delivers no value. While such a problem is degenerate in the base case, it is not yet clear to us under what conditions the recursive iteration of the hierarchical abstraction might at some point yield such a uniform structure (rendering further recursion useless).\n", "We would like to thank the reviewer for their efforts and insightful comments. \n\nLimitations of the LMDP framework:\nThe LMDP framework on the surface appears very different from the standard MDP setting, and the question of its limitations arises frequently. In our view the LMDP framework is in fact quite general, and can be used to solve non-navigational and conceptual tasks such as the TAXI domain, and the Towers of Hanoi problem. For more on the generality of the LMDP see (Saxe et al., ICML 2017 supplementary material), which describes ways in which a variety of tasks have been modeled as LMDPs. The initial work of Todorov, 2009, for instance, gives a method for approximating any MDP with an LMDP. 
The main limitation of the LMDP framework, so far as we understand it, is that actions must incur costs: the transition cost in LMDPs necessarily has a KL divergence term with respect to the passive dynamics, which is non-negative. Hence, for instance, it must be costly to move from one position in a grid to the next (more precisely, to deviate from the passive dynamics). The LMDP would struggle to model a situation in which actions have strong rewards, eg, where the goal is to take the most circuitous path to a destination. We do not view this as a strong limitation, however, since nearly all domains have a principle of efficient action and it is common to place costs on each action taken in a traditional MDP. Indeed, we would argue that the LMDP exploits this shared structure in nearly all real-world tasks to allow more efficient solutions.\n\nParsing the phrase “elbow-joint behavior”:\nOne of the hyper parameters in our method is the number of nodes/subtasks at each level of the hierarchy. This corresponds to the rank of the decomposition. This choice is akin to choosing the number of neurons at different layers of a NN. Nevertheless we make an observation that may provide a way to establish a good value for even this parameter choice from data. \nThe key idea is that by increasing the rank of the decomposition we monotonically improve the approximation to Z, as the error ||Z-DW|| tends to zero. For some domains there is an obvious inflection point at which increasing the rank of the decomposition only slightly improves the approximation. This suggests a natural trade-off between expressiveness of the hierarchy, and the additional computational effort required to support additional subtasks. When we plot the quality of the approximation, ||Z-DW||, against the decompositions factor k, the observed inflection point is described as exhibiting “elbow-joint” behavior. We have clarified this point in the text.\n\nExtensions and future work:\nWe view this work as being an important stepping stone on the path towards a method for fully online learning of a deep control hierarchy. In that vein, there are a number of natural extensions (a few of which were rightly called out by other reviewers). Some of the major items are: \n1.\tEstimating Z from data (either directly or by learning a transition model), so that the agent can operate completely online\n2.\tIntroducing standard notions of function approximation and compressed state representations to allow the method to scale to high dimensional state spaces\n3.\tIntroducing some concept of regularized nonlinear composition; allowing more complex behavior to be approximated by the hierarchyMany of these items are the focus of our current research efforts. \n\nFree parameters and how to specify them:\nThe number of hyper parameters introduced by our method is minimal. \n1.\tThe number of nodes/subtasks at each level of the hierarchy \n◦\tThis is a common set of hyper parameters for many deep learning applications\n◦\tThe elbow joint behavior provides one possible path to estimate efficient values here from data\n◦\tIn practice we choose the number of nodes at layer (l+1) to be approximately log(|S^l|), where |S^l| is the number of states at the preceding layer. This also determines the number of layers\n2.\tThe subtask transition matrix Pt contains a scaling parameter such that Pt = \\alpha W. Here \\alpha controls how frequently the agent will consult the hierarchy for guidance. In practice we chose \\alpha ~0.2. 
The intuition here is that it is important that the agent be able to consult the hierarchy sufficiently frequently that it influences its behavior; but overly frequent access wastes computational resources.\n3.\tThe choice of \\beta in the cost function for the matrix decomposition. In our experiments we have typically chosen \\beta = 1 (KL) or \\beta = 2 (Maximum Variance). All of our experiments suggest that the method is not overly sensitive to choices for \\beta in the range [1,2]. For extreme values of \\beta outside this range, results degrade.\n" ]
[ 6, 5, 7, -1, -1, -1, -1, -1 ]
[ 2, 2, 3, -1, -1, -1, -1, -1 ]
[ "iclr_2018_ry80wMW0W", "iclr_2018_ry80wMW0W", "iclr_2018_ry80wMW0W", "H16Hn-6lf", "HJf6zwpmf", "HJo-rvwWz", "HJ2ekDp7G", "BkITkWpZM" ]
iclr_2018_rJl63fZRb
Parametrized Hierarchical Procedures for Neural Programming
Neural programs are highly accurate and structured policies that perform algorithmic tasks by controlling the behavior of a computation mechanism. Despite the potential to increase the interpretability and the compositionality of the behavior of artificial agents, it remains difficult to learn from demonstrations neural networks that represent computer programs. The main challenges that set algorithmic domains apart from other imitation learning domains are the need for high accuracy, the involvement of specific structures of data, and the extremely limited observability. To address these challenges, we propose to model programs as Parametrized Hierarchical Procedures (PHPs). A PHP is a sequence of conditional operations, using a program counter along with the observation to select between taking an elementary action, invoking another PHP as a sub-procedure, and returning to the caller. We develop an algorithm for training PHPs from a set of supervisor demonstrations, only some of which are annotated with the internal call structure, and apply it to efficient level-wise training of multi-level PHPs. We show in two benchmarks, NanoCraft and long-hand addition, that PHPs can learn neural programs more accurately from smaller amounts of both annotated and unannotated demonstrations.
accepted-poster-papers
This paper is somewhat incremental on recent prior work in a hot area; it has some weaknesses but does move the needle somewhat on these problems.
train
[ "SylxFWcgG", "SkiyHjDlf", "HyXxfzsxM", "Hk7J-daXf", "SkoXZdamG", "BkMVlJ6mM", "H1mz-p2Xz", "B197SIKXf", "B15VNT1mz", "HJqP76Jmz", "Hy2_fTymG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author" ]
[ "I thank the authors for their updates and clarifications. I stand by my original review and score. I think their method and their evaluation has some major weaknesses, but I think that it still provides a good baseline to force work in this space towards tasks which can not be solved by simpler models like this. So while I'm not super excited about the paper I think it is above the accept threshold.\n--------------------------------------------------------------------------\nThis paper extends an existing thread of neural computation research focused on learning resuable subprocedures (or options in RL-speak). Instead of simply input and output examples, as in most of the work in neural computation, they follow in the vein of the Neural Programmer-Interpreter (Reed and de Freitas, 2016) and Li et. al., 2017, where the supervision contains the full sequence of elementary actions in the domain for all samples, and some samples also contain the hierarchy of subprocedure calls.\n\nThe main focus of their work is learning from fewer fully annotated samples than prior work. They introduce two new ideas in order to enable this:\n1. They limit the memory state of each level in the program heirarchy to simply a counter indicating the number of elementary actions/subprocedure calls taken so far (rather than a full RNN embedded hidden/cell state as in prior work). They also limit the subprocedures such that they do not accept any arguments.\n2. By considering this very limited set of possible hidden states, they can compute the gradients using a dynamic program that seems to be more accurate than the approximate dynamic program used in Li et. al., 2017. \n\nThe main limitation of the work is this extremely limited memory state, and the lack of arguments. Without arguments, everything that parameterizes the subprocedures must be in the visible world state. In both of their domains, this is true, but this places a significant limitation on the algorithms which can be modeled with this technique. Furthermore, the limited memory state means that the only way a subprocedure can remember anything about the current observation is to call a different subprocedure. Again, their two evalation tasks fit into this paradigm, but this places very significant limitations on the set of applicable domains. I would have like to see more discussion on how constraining these limitations would be in practice. For example, it seems it would be impossible for this architecture to perform the Nanocraft task if the parameters of the task (width, height, etc.) were only provided in the first observation, rather than every observation. \n\nNone-the-less I think this work is an important step in our understanding of the learning dynamics for neural programs. In particular, while the RNN hidden state memory used by the prior work enables the learning of more complicted programs *in theory*, this has not been shown in practice. So, it's possible that all the prior work is doing is learning to approixmate a much simpler architecture of this form. Specifically, I think this work can act as a great base-line by forcing future work to focus on domains which cannot be easily solved by a simpler architecture of this form. This limitation will also force the community to think about which tasks require a more complicated form of memory, and which can be solved with a very simple memory of this form.\n\n\nI also have the following additional concerns about the paper:\n\n1. 
I found the current explanation of the algorithm to be very difficult to understand. It's extremely difficult to understand the core method without reading the appendix, and even with the appendix I found the explanation of the level-by-level decomposition to be too terse.\n\n2. It's not clear how their gradient approximation compares to the technique used by Li et. al. They obviously get better results on the addition and Nanocraft domains, but I would have liked a more clear explanation and/or some experiments providing insights into what enables these improvements (or at least an admission by the authors that they don't really understand what enabled the performance improvements).\n", "In the paper titled \"Parameterized Hierarchical Procedures for Neural Programming\", the authors proposed \"Parametrized Hierarchical Procedure (PHP)\", which is a representation of a hierarchical procedure by differentiable parametrization. Each PHP is represented with two multi-layer perceptrons with ReLU activation, one for its operation statement and one for its termination statement. With two benchmark tasks (NanoCraft and long-hand addition), the authors demonstrated that PHPs are able to learn neural programs accurately from smaller amounts of strong/weak supervision. \n\nOverall the paper is well-written with clear logic and accurate narratives. The methodology within the paper appears to be reasonable to me. Because this is not my research area, I cannot judge its technical contribution. ", "Summary of paper: The goal of this paper is to be able to construct programs given data consisting of program input and program output pairs. Previous works by Reed & Freitas (2016) (using the paper's references) and Cai et al. (2017) used fully supervised trace data. Li et al. (2017) used a mixture of fully supervised and weakly supervised trace data. The supervision helps with discovering the hierarchical structure in the program which helps generalization to other program inputs. The method is heavily based on the \"Discovery of Deep Options\" (DDO) algorithm by Fox et al. (2017).\n\n---\n\nQuality: The experiments are chosen to compare the method that the paper is proposing directly with the method from Li et al. (2017).\nClarity: The connection between learning a POMDP policy and program induction could be made more explicit. In particular, section 3 describes the problem statement but in terms of learning a POMDP policy. The only sentence with some connection to learning programs is the first one.\nOriginality: This line of work is very recent (as far as I know), with Li et al. (2017) being the other work tackling program learning from a mixture of supervised and weakly supervised program trace data.\nSignificance: The problem that the paper is solving is significant. The paper makes good progress in demonstrating this on toy tasks.\n\n---\n\nSome questions/comments:\n- Is the Expectation-Gradient trick also known as the reinforce/score function trick?\n- This paper could benefit from being rewritten so that it is in one language instead of mixing POMDP language used by Fox et al. (2017) and program learning language. It is not exactly clear, for example, how are memory states m_t and states s_t related to the program traces.\n- It would be nice if the experiments in Figure 2 could compare PHP and NPL on exactly the same total number of demonstrations.\n\n---\n\nSummary: The problem under consideration is important and experiments suggest good progress. 
However, the clarity of the paper could be made better by making the connection between POMDPs and program learning more explicit or if the algorithm was introduced with one language.", "We agree that the evidence for (1) does not disambiguate the two causes well enough to support (1) with high confidence, and updated the paper to make this even clearer.\n\nWe would also like to clarify the contributions of the paper:\n1) We introduce the PHP model, which is simpler to optimize than the NPI model. That is, for a given dataset, the PHP model induces an optimization landscape in which a good solution is easier to find.\n2) We propose an EG algorithm that computes exact gradients in this optimization landscape, allowing efficient optimization.\n3) We show empirically that our model and algorithm outperform baseline models and algorithms.\n\nThis work does not show that the approximate gradients of NPL are worse than exact gradients in optimizing the NPI model (with weak supervision), although this may well be the case when the NPL execution-path grouping loses useful path information. This work also does not show that the exact gradients of the full-batch EG algorithm are always better than approximate gradients; in fact, using SGD via minibatch EG may well be better (see e.g. [Keskar et al. 2017]). Finally, this work does not fully tease apart whether the gains are due to the PHP model inducing simpler optimization landscapes, or due to the EG algorithm utilizing them better, or both, although there is evidence that the PHP model is easier to optimize.\n\nThese are all exciting research questions, and we thank the reviewer again for raising them. We believe that the simple and interpretable PHP model, the useful EG method, and the compelling empirical results presented here would be valuable to the community as a stepping stone towards such future research.\n\n\n[Keskar et al. 2017] On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima, ICLR 2017", "This is an overview of the main changes we introduced in revisions of the paper:\n\n- In Sections 1 and 2.1, we removed the somewhat irrelevant discussion of the challenges of general RNNs.\n- We reorganized Table 1.\n- Based on feedback from Reviewer #3, in Section 3 we clarified the connection between the POMDP formulation and program learning.\n- Based on feedback from Reviewers #2 and #3, in Section 4.1.1 we clarified the connection between the call stack and the agent memory, and noted basic properties of the PHP model complexity.\n- Based on feedback from Reviewer #2, in Section 4.2.2 (formerly 4.2) we clarified the EG algorithm; in Section 4.2.3 (formerly 4.2.1) we clarified the level-wise training algorithm and updated the notation of layer indexes for consistency.\n- Based on feedback from Reviewer #2, in Section 5.1 (Results) we addressed the causes of PHP gains; we removed the discussion of weak supervision which is repeated in Section 6.\n- We removed Figure 3, which was somewhat redundant with Figure 1.\n- Based on feedback from Reviewer #2, in Section 6 we addressed limitations of the PHP model and the need for more complicated benchmarks in this field.\n- We made multiple minor clarifications and style improvements.", "I agree with your evidence for point (2). However I don't see how your evidence for point (1) disambiguates between the two causes. 
Couldn't it just as well be the case that the 70% increase of PHP over NPL is due to the fact that PHP is using a simpler model that is easier to optimize?\n", "Thank you for making this excellent point.\n\nOur experiments indicate that the gains of PHP are due to both (1) the ability to compute exact gradients for weakly supervised demonstrations (via the EG algorithm), and (2) the PHP model being easier to optimize than the NPI model. We added this observation to Section 5.1.\n\nAs evidence for (2), consider the case of strongly supervised demonstrations, where NPL coincides with NPI and takes exact gradients. As shown in Figure 2 (blue curves at the 64 mark), with 64 strongly supervised demonstrations in the NanoCraft domain, the accuracy is 1.0 for PHP; 0.724 for NPL/NPI. In this case, PHP has lower sample complexity with comparable optimization algorithms, suggesting that this domain is solvable with a PHP model of lower complexity than the NPI model. We note, however, that Li et al. (2017) used batch size 1, whereas we used full batches and made no attempt to optimize the batch size.\n\nAs evidence for (1), consider the case where 48 of the 64 demonstrations are weakly supervised (Figure 2, blue curves at the 16 mark). Here the success rate is 0.969 for PHP; 0.502 for NPL. Compared to the strongly supervised case above, this 70% increase in the gain of PHP over NPL is likely due to the exact gradients used to optimize the PHP model, in contrast to the approximate gradients of NPL.\n\nWe are excited to present these results as they suggest a number of new research question, such as the effect of optimizing PHP with stochastic gradients, and we thank the reviewer for inspiring this direction for future research.", "Thanks for your response. The clarifications to Section 4.2 make the level-wise training algorithm more clear.\n\nThe additional information in section 2 makes it clear how the gradient compuation differs, but it does not clarify where the gains come from. Specifically, from the current results, it's not clear whether the gains come from (1) the ability to compute exact gradients rather than the approximate gradient computation used by Li et. al, or (2) the simpler PHP model is just easier to optimize in general, so it will work better regardless of the technique used for gradient computation.\n", "Thank you for these constructive comments.\n\nWe added to Section 3 clarification of the connection between the POMDP formulation and program learning. In particular, the state s_t of the POMDP models the configuration of the computer (e.g., the tapes and heads of a Turing Machine, or the RAM of a register machine), whereas the memory m_t of the agent models the internal state of the machine itself (e.g. the state of a Turing Machine's Finite State Machine, or the registers of a register machine).\n\nThe Expectation–Gradient method is somewhat similar to but distinct from the REINFORCE trick, which uses the so-called “log-gradient” identity \\nabla_\\theta{p_\\theta(x)} = p(x) \\nabla_\\theta{\\log p(x)} to compute \\nabla_\\theta{E_p[f(x)]}. 
In fact, we use that same identity twice to compute \\nabla_\\theta{\\log P(\\xi | \\theta)}: once to express the gradient of log P(xi | theta) using the gradient of P(xi | theta); then after introducing the sum over zeta, we use the identity again in the other direction to express this using the gradient of log P(zeta, xi | theta).\n\nWe added to Section 5.1 clarification that we did use the same total number of demonstrations for PHP as was used for NPL. The results for 64 demonstrations are shown in Figure 2, and the results for PHP with 128 and 256 demonstrations were essentially the same as with 64, and were omitted for figure clarity.", "Thank you for this valuable and detailed feedback.\n\nYou are correct in pointing out that PHPs impose a constraining memory structure, and we added to Sections 1 and 6 notes on their limitations. In principle, any finite memory structure can be implemented with sufficiently many PHPs, by having a distinct procedure for each memory state. Specifically in NanoCraft, PHPs can remember task parameters by calling a distinct sub-procedure for each building location and size. This lacks generalization, which was also not shown for NanoCraft by Li et al. (2017). We expect the generalization achieved by limiting the number of procedures to be further enhanced by allowing them to depend on a program counter.\n\nThis paper thus makes an important first step towards neural programming with structural constraints that are both useful as an inductive bias that improves sample complexity, and computationally tractable. We agree that more expressive structures will be needed as the field moves beyond the current simple benchmarks, which we hope this work promotes. We agree that passing arguments to hierarchical procedures is an important extension to explore in future work.\n\nWe clarified in Section 4.2 and in the Appendix the explanations of the algorithm and of the level-wise training procedure. Specifically, in Section 4.2 we elaborated on the structure of the full likelihood P(zeta, xi | theta) as a product of the relevant PHP operations, and how this leads to the given gradient expression; and clarified the expression for sampling from the posterior P(zeta | xi, theta) in level-wise training.\n\nWe added in Section 2 a short comparison of our method to that of Li et al. (2017). The main difference is that their method computes approximate gradients by averaging selectively over computation paths, whereas our method computes exact gradients using dynamic programming, enabled by having small discrete latent variables in each time step.", "Thank you for your time and for your assessment. We are very excited about these results and are making updates to improve the paper." ]
[ 6, 6, 6, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 3, 1, 2, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_rJl63fZRb", "iclr_2018_rJl63fZRb", "iclr_2018_rJl63fZRb", "BkMVlJ6mM", "iclr_2018_rJl63fZRb", "H1mz-p2Xz", "B197SIKXf", "HJqP76Jmz", "HyXxfzsxM", "SylxFWcgG", "SkiyHjDlf" ]
iclr_2018_S1D8MPxA-
Viterbi-based Pruning for Sparse Matrix with Fixed and High Index Compression Ratio
Weight pruning has proven to be an effective method in reducing the model size and computation cost while not sacrificing the model accuracy. Conventional sparse matrix formats, however, involve irregular index structures with large storage requirement and sequential reconstruction process, resulting in inefficient use of highly parallel computing resources. Hence, pruning is usually restricted to inference with a batch size of one, for which an efficient parallel matrix-vector multiplication method exists. In this paper, a new class of sparse matrix representation utilizing Viterbi algorithm that has a high, and more importantly, fixed index compression ratio regardless of the pruning rate, is proposed. In this approach, numerous sparse matrix candidates are first generated by the Viterbi encoder, and then the one that aims to minimize the model accuracy degradation is selected by the Viterbi algorithm. The model pruning process based on the proposed Viterbi encoder and Viterbi algorithm is highly parallelizable, and can be implemented efficiently in hardware to achieve low-energy, high-performance index decoding process. Compared with the existing magnitude-based pruning methods, index data storage requirement can be further compressed by 85.2% in MNIST and 83.9% in AlexNet while achieving similar pruning rate. Even compared with the relative index compression technique, our method can still reduce the index storage requirement by 52.7% in MNIST and 35.5% in AlexNet.
accepted-poster-papers
The paper proposes a new sparse matrix representation based on the Viterbi algorithm with a high and fixed index compression ratio regardless of the pruning rate. The method allows for faster parallel decoding and achieves improved compression of the index data storage requirement over existing methods (e.g., magnitude-based pruning) while maintaining the pruning rate. The quality of the paper seems solid and of interest to a subset of the ICLR audience.
train
[ "BJle65dxG", "Hkiuu4PlM", "SktHYC_xM", "rJJvOXZ7M", "S1SOzm-mM", "S1_WfmZQM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The paper proposes VCM, a novel way to store sparse matrices that is based on the Viterbi Decompressor. Only a subset of sparse matrices can be represented in the VCM format, however, unlike CSR format, it allows for faster parallel decoding and requires much less index space. The authors also propose a novel method of pruning of neural network that constructs an (sub)optimal (w.r.t. a weight magnitude based loss) Viterbi-compressed matrix given the weights of a pretrained DNN.\nVCM is an interesting analog to the conventional CSR format that may be more computationally efficient given particular software and/or hardware implementations of the Viterbi Decompressor. However, the empirical study of possible acceleration remains as an open question.\nHowever, I have a major concern regarding the efficiency of the pruning procedure. Authors report practically the same level of sparsity, as the pruning procedure from the Deep Compression paper. Both the proposed Viterbi-based pruning, and Deep Compression pruning belong to the previous era of pruning methods. They separate the pruning procedure and the training procedure, so that the model is not trained end-to-end. However, during the last two years a lot of new adaptive pruning methods have been developed, e.g. Dynamic Network Surgery, Soft Weight Sharing, and Sparse Variational DropOut. All of them in some sense incorporate the pruning procedure into the training procedure and achieve a much higher level of sparsity (e.g. DC achieves ~13x compression of LeNet5, and SVDO achieves ~280x compression of the same network). Therefore the reported 35-50% compression of the index storage is not very significant.\nIt is not clear whether it is possible to take a very sparse matrix and transform it into the VCM format without a high accuracy degradation. It is also not clear whether the VCM format would be efficient for storage of extremely sparse matrices, as they would likely be more sensitive to the mismatch of the original sparsity mask, and the best possible VCM sparsity mask. Therefore I’m concerned whether it would be possible to achieve a close-to-SotA level of compression using this method, and it is not yet clear whether this method can be used for practical acceleration or not.\nThe paper presents an interesting idea that potentially has useful applications, however the experiments are not convincing enough.", "It seems like the authors have carefully thought about this problem, and have come up with some elegant solutions, but I am not sold on whether it's an appropriate match for this conference, mainly because it's not clear how many machine learning people will be interested in this approach.\n\nThere was a time about 2 or 3 years ago when sparse-matrix approaches seemed to have a lot of promise, but I get the impression that a lot of people have moved on. The issue is that it's hard to construct a scenario where it makes sense from a speed or memory standpoint to do this. The authors seem to have found a way to substantially compress the indexes, but it's not clear to me that this really ends up solving any practical problem. Towards the end of the paper I see mention of a 38.1% reduction in matrix size. That is way too little to make sense in any practical application, especially when you consider the overhead of decompression. 
It seems to me that you could easily get a factor of 4 to 8 of compression just by finding a suitable way to encode the floating-point numbers in many fewer bits (since the weight parameters are quite Gaussian-distributed and don't need to be that accurate).\n", "quality: this paper is of good quality\nclarity: this paper is very clear but contains a few minor typos/grammatical mistakes (missing -s for plurals, etc.)\noriginality: this paper is original\nsignificance: this paper is significant\n\nPROS\n- Using ECC theory for reducing the memory footprint of a neural network seems both intuitive and innovative, while being grounded in well-understood theory.\n- The authors address a consequence of current approaches to neural network pruning, i.e., the high cost of sparse matrix index storage.\n- The results are extensive and convincing.\n\nCONS\n- The authors mention in the introduction that this encoding can speed up inference by allowing efficient parallel sparse-to-dense matrix conversion, and hence batch inference, but do not provide any experimental confirmation.\n\nMain questions\n- It is not immediately clear to me why the objective function (2) correlates to a good accuracy of the pruned network. Did you try out other functions before settling on this one, or is there a larger reason for which (2) is a logical choice? \n- On a related note, I would find a plot of the final objective value assigned to a pruning scheme compared to the true network accuracy very helpful in understanding how these two correlate.\n- Could this approach be generalized to RNNs?\n- How long does the Viterbi pruning algorithm take, as it explores all 2^p possible prunings?\n- How difficult is it to tune the pruning algorithm hyper-parameters?", "We thank you for your feedback and comments. We address your concerns and questions below.\n\n“It is not immediately clear to me why the objective function (2) correlates to a good accuracy of the pruned network. Did you try out other functions before settling on this one, or is there a larger reason for which (2) is a logical choice?”\n \nWe tried various objective functions such as x, x^2, exp(x), tanh(x) and sigmoid(x) as shown in Fig. 9. While all of the objective functions gave comparable accuracy, we chose tanh(x) because, in the context of magnitude-based pruning, selecting a few parameters with large magnitude is more desirable than choosing many parameters with medium magnitude, even though the sum of the (survived parameters’) magnitude can be the same in both cases. Hence, in order to obtain a highly skewed distribution of survived parameters, ‘tanh’ is considered in our experiments. We added Appendix A.3 to show how ‘tanh’ and ‘x’ can produce different branch metric values, given a list of parameters and comparator outputs. Please note that there can be many other objective functions that potentially result in similar/better skewed distributions.\n\n“I would find a plot of the final objective value assigned to a pruning scheme compared to the true network accuracy very helpful in in understanding how these two correlate.”\n\nWe hope our response above addresses this concern as well. Since the Viterbi algorithm finds an optimal sequence maximizing the path metric, there is no immediate relationship between the branch metric (i.e., Eq.(2)) and the network accuracy. 
However, producing high branch metrics would increase the chance to improve the final path metric and to select parameters with large magnitude after a sequence-level optimization.\nAt the final time index, all path metrics share a long optimal sequence through the survivor selection procedure (as a result, it is not necessary to investigate 2^p paths). Choosing any path metric at the final time index, hence, would produce almost the same accuracy. Please note that comparing absolute values of the final path metrics with different branch metric equations would be meaningless due to the path metric normalization.\n\n“Could this approach be generalized to RNNs?”\n\nYes, our proposed approach can be generalized to RNN for which a pruning is applicable. For instance, we performed an experiment with a language model on PTB dataset (medium size, https://github.com/tensorflow/models/tree/master/tutorials/rnn/ptb). The magnitude-based pruning and the Viterbi-based pruning obtain 81.4% and 81.5% pruning rates, respectively, while VCM has a storage reduction of 43.9% compared with CSR and the perplexity degradation is comparable between the two cases. Since there are no widely accepted benchmarks for RNN pruning experiments, we did not include this experimental result in the manuscript. \n\n“How long does the Viterbi algorithm take, as it explores all 2^p possible prunings?”\n\nDue to the dynamic programming property of the Viterbi algorithm, it is not necessary to compute all 2^p possible prunings. The time complexity of the Viterbi algorithm is linearly proportional to the number of states in the trellis diagram and p (not 2^p). There are many well-known implementation techniques to reduce the time complexity of the Viterbi algorithm, such as a sliding window technique (e.g., the time complexity becomes independent of the length of the input sequence), which could not be included in our manuscript due to the space limit.\n\n“How difficult is it to tune the pruning algorithm hyper-parameters?”\n\nIn our manuscript, where a magnitude-based pruning is used as a baseline, the difficulty of tuning hyper-parameters using our proposed methodology is almost the same as that of the magnitude-based pruning, because other than VH_p, selecting hyper-parameters becomes trivial. For example, increasing NUM_c, XOR taps, and the Hamming distance always improves the pruning rate and the compression rate (while typical numbers for those hyper-parameters introduced in the paper are good enough). The comparator threshold (TH_c) is automatically determined by the target pruning rate, which is dominated by TH_p. Finding an appropriate VH_p value follows the way of magnitude-based pruning methods. \n", "Thank you very much for the constructive comments. We added Appendix A.4 to show that SVDO can be combined with our proposed VCM format.\n\nWe believe that our proposed technique is a general one and can be combined with SVDO since our proposed Viterbi encoder/algorithm is basically not dedicated to specific pruning methods (rather, we wanted to suggest a new sparse matrix format which is better suited to DNN). As shown in the Table 6 and 7 in Appendix A.4, for MNIST dataset, we could achieve a competitive pruning rate and memory footprint reduction by applying our proposed Viterbi algorithm to the sparse matrix produced from SVDO scheme after removing fully disconnected neurons. 
We hope Appendix A.4 addresses your concern that VCM may not be able to handle such highly sparse matrices.\n \nIn the original draft, one of our main experimental studies was to show pruning results for a large benchmark such as the ImageNet database (or something of a similar scale), and please note that we still compare the pruning results with S. Han et al.’s method for ImageNet, as SVDO did not report ImageNet results.\n", "We thank the AnonReviewer#1 for taking the time to review this paper.\n\nWe agree with the reviewer that quantization is another effective scheme to reduce memory footprint. We, however, believe that pruning and quantization are two orthogonal approaches that both aim to help reduce the memory/computation overhead, as demonstrated in S. Han et al.’s Deep Compression paper, for example. To answer the reviewer’s question in more detail, we also experimented with quantization of weight values for the Viterbi-pruned LeNet-5 on the MNIST dataset. Similar to S. Han et al.’s case, we observed that 4-bit weight quantization could maintain the inference accuracy, which confirms that pruning and quantization can be applied together and further reduce the memory footprint. \n\nIn addition, recent works, such as variational dropout (as discussed by reviewer #2), show that it is possible to prune and compress a large model (i.e., VGG-like) by more than 50x without loss of accuracy. The results indicate that sparse-matrix approaches still have merits for further investigation. As we have shown in response to reviewer #2, the VCM format can also work with this approach (in fact, we believe VCM can work with any underlying pruning approach).\n\nRegarding the reviewer’s concern about VCM decompression overhead, we believe that the overhead is very small because it requires a small number of FFs and XOR gates as shown in Figure 1 and Table 4. Please note that VCM decompression corresponds to the Viterbi encoding in error correction code applications, which is regarded as a much lighter process compared to Viterbi decoding.\n" ]
[ 6, 6, 7, -1, -1, -1 ]
[ 4, 4, 3, -1, -1, -1 ]
[ "iclr_2018_S1D8MPxA-", "iclr_2018_S1D8MPxA-", "iclr_2018_S1D8MPxA-", "SktHYC_xM", "BJle65dxG", "Hkiuu4PlM" ]
iclr_2018_ByS1VpgRZ
cGANs with Projection Discriminator
We propose a novel, projection-based way to incorporate the conditional information into the discriminator of GANs that respects the role of the conditional information in the underlying probabilistic model. This approach is in contrast with most frameworks of conditional GANs used in application today, which use the conditional information by concatenating the (embedded) conditional vector to the feature vectors. With this modification, we were able to significantly improve the quality of the class conditional image generation on the ILSVRC2012 (ImageNet) dataset from the current state-of-the-art result, and we achieved this with a single pair of a discriminator and a generator. We were also able to extend the application to super-resolution and succeeded in producing highly discriminative super-resolution images. This new structure also enabled high quality category transformation based on parametric functional transformation of conditional batch normalization layers in the generator.
accepted-poster-papers
The paper proposes a simple modification to conditional GANs, where the discriminator involves an inner product term between the condition vector y and the feature vector of x. This formulation is reasonable and well motivated by popular models (e.g., log-linear, Gaussians). Experimentally, the proposed method is evaluated on conditional image generation and super-resolution tasks, demonstrating improved qualitative and quantitative performance over the existing state-of-the-art (AC-GAN).
val
[ "BJSIkW61f", "rkyTeFweM", "SynfBlcgf", "rk3jNQPGf", "HJ-udwIGf", "BJERLD8fz", "BJYUqi--G", "Byeml2ZWM", "HJd7sibbz", "ByeD_ibZz", "HJP0Zr3ef", "r1-ygQixf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "public" ]
[ "\nI thank the authors for the thoughtful response and updated manuscript. After reading through both, my review score remains unchanged.\n\n=================\n\nThe authors describe a new variant of a generative adversarial network (GAN) for generating images. This model employs a 'projection discriminator' in order to incorporate image labels and demonstrate that the resulting model outperforms state-of-the-art GAN models.\n\nMajor comments:\n1) Spatial resolution. What spatial resolution is the model generating images at? The AC-GAN work performed an analysis to assess how information is being introduced at each spatial resolution by assessing the gains in the Inception score versus naively resizing the image. It is not clear how much the gains of this model is due to generating better lower resolution images and performing simple upscaling. It would be great to see the authors address this issue in a serious manner.\n\n2) FID in real data. The numbers in Table 1 appear favorable to the projection model. Please add error bars (based on Figure 4, I would imagine they are quite large). Additionally, would it be possible to compute this statistic for *real* images? I would be curious to know what the FID looks like as a 'gold standard'.\n\n3) Conditional batch normalization. I am not clear how much of the gains arose from employing conditional batch normalization versus the proposed method for incorporating the projection based discriminator. The former has been seen to be quite powerful in accomodating multi-modal tasks (e.g. https://arxiv.org/abs/1709.07871, https://arxiv.org/abs/1610.07629\n). If the authors could provide some evidence highlighting the marginal gains of one technique, that would be extremely helpful.\n\nMinor comments:\n- I believe you have the incorrect reference for conditional batch normalization on Page 5.\nA Learned Representation For Artistic Style\nDumoulin, Shlens and Kudlur (2017)\nhttps://arxiv.org/abs/1610.07629\n\n- Please enlarge images in Figure 5-8. Hard to see the detail of 128x128 images.\n\n- Please add citations for Figures 1a-1b. Do these correspond with some known models?\n\nDepending on how the authors respond to the reviews, I would consider upgrading the score of my review.", "The paper proposes a simple modification to conditional GANs, obtaining impressive results on both the quality and diversity of samples on ImageNet dataset. Instead of concatenating the condition vector y to the input image x or hidden layers of the discriminator D as in the literature, the authors propose to project the condition y onto a penultimate feature space V of D (by simply taking an inner product between y and V) . This implementation basically restricts the conditional distribution p(y|x) to be really simple and seems to be posing a good prior leading to great empirical results.\n\n+ Quality:\n- Simple method leading to great results on ImageNet!\n- While the paper admittedly leaves theoretical work for future work, the paper would be much stronger if the authors could perform an ablation study to provide readers with more intuition on why this work. 
One experiment could be: sticking y to every hidden layer of D before the current projection layer, and removing these y's increasingly and seeing how performance changes.\n- Appropriate comparison with existing conditional models: AC-GANs and PPGNs.\n- Appropriate (extensive) metrics were used (Inception score/accuracy, MS-SSIM, FID)\n\n+ Clarity:\n- Should explicitly define p, q, r upfront before Equation 1 (or between Eq1 and Eq2).\n- PPG should be PPGNs.\n\n+ Originality:\nThis work proposes a simple method that is original compared to existing GANs.\n\n+ Significance:\nWhile the contribution is significant, more experiments providing more intuition into why this projection works so well would make the paper much stronger.\n\nOverall, I really enjoy reading this paper and recommend it for acceptance!\n\n\n\n\n", "This manuscript makes the case for a particular parameterization of conditional GANs, specifically how to add conditioning information into the network. It motivates the method by examining the form of the log density ratio in the continuous and discrete cases.\n\nThis paper's empirical work is quite strong, bringing to bear nearly all of the established tools we currently have for evaluating implicit image models (MS-SSIM, FID, Inception scores). \n\nWhat bothers me is mostly that, while hyperparameters are stated (and thank you for that), they seem to be optimized for the candidate method rather than the baseline. In particular, Beta1 = 0 for the Adam momentum coefficient seems like a bold choice based on my experience. It would be an easier sell if hyperparameter search details were included and a separate hyperparameter search were conducted for the candidate and control, allowing the baseline to put its best foot forward.\n\nThe sentence containing \"assume that the network model can be shared\" had me puzzled for a few minutes. I think what is meant here is just that we can parameterize the log density ratio directly (including some terms that belong to the data distribution to which we do not have explicit access). This could be clearer.", "We are sorry but we forgot to note the reference information of \"Gulrajani et al. (2017)\" in the previous comment.\n\nReference:\nImproved training of Wasserstein GANs\nIshaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin and Aaron Courville\nIn NIPS2017", "We reflected your suggestion in our revision and conducted the hyper-parameter search on CIFAR-100 over the Adam hyper-parameters (learning rate $\\alpha$ and 1st order momentum $\\beta_1$).\nNamely, we varied each one of these parameters while keeping the other constant, and reported the inception scores for all methods including several versions of “concat” architectures to compare. \nMore specifically, we tested with the concatenation module introduced at (a) the input layer, (b) a hidden layer, and (c) the output layer. The results of this complementary experiment are now provided in Section A of the appendix of the revised paper.\n\nAs we can see in Figure 11, our “projection” architecture excelled over all other architectures for all choices of the hyper-parameters, and achieved an inception score of 9.53. Meanwhile, the concat architectures all achieved at most 8.82. 
The best concat model in terms of the inception score on CIFAR-100 was the hidden concat with $\\alpha$=0.0002 and $\\beta_1$ = 0, which turns out to be the very choice of parameters we picked for the original ImageNet experiment. 
Unfortunately, we were not able to secure the time for the parameter search for the ImageNet experiment. However, from the way the outcomes look for CIFAR-100, we expect the same to hold. \n", "We owe great thanks to all reviewers for their helpful comments to improve our manuscript. \nWe revised our manuscript based on the reviewers’ comments and uploaded the revision. \nAs an important note, we re-calculated the inception scores and FID with the original evaluation code written in TensorFlow, because the results slightly differed from our rendition written in Chainer. \nPlease rest assured, however, that the newly computed results do not affect any claims we have made in the original version of our paper.\n\nAlso, to show the efficacy of our method on smaller benchmark datasets, we added results on the CIFAR-10 and CIFAR-100 datasets. \nOur projection model was able to eclipse the comparative models (concat discriminator models and AC-GANs) on these datasets as well. \nPlease see Appendix A for details. \n", "\n>What bothers me is mostly that, while hyperparameters are stated (and thank you for that), they seem to be optimized for the candidate method rather than the baseline. In particular, Beta1 = 0 for the Adam momentum coefficient seems like a bold choice based on my experience. \n\nWe did not perform any hyper-parameter optimization for the Adam optimizer, the number of critic updates, etc. \nWe simply used the same hyper-parameters as Gulrajani et al. (2017, https://github.com/igul222/improved_wgan_training/blob/master/gan_cifar_resnet.py ), because we adopted practically the same architecture used in that paper. \nWe must admit that we simply could not spare enough time for the parameter search for the ImageNet experiments. However, we plan to do the search for (beta1, alpha of Adam) on CIFAR-10 or CIFAR-100 and compare the performance of \"AC-GANs\", \"concat\" and \"projection\".\n\n\n>The sentence containing \"assume that the network model can be shared\" had me puzzled for a few minutes. I think what is meant here is just that we can parameterize the log density ratio directly (including some terms that belong to the data distribution to which we do not have explicit access). This could be clearer.\n\nThank you very much! We concur with your view and will reflect the suggestion in this part of the revision. \n \n\n", "\n> 1) Spatial resolution. What spatial resolution is the model generating images at?\n \nWe are sorry for the lack of the spatial resolution information. The model generates at \"128x128\" spatial resolution. \n\n\n> 1) Spatial resolution, 3) Conditional batch normalization,\n\nThe goal of our paper is to show the efficacy of our \"projection\" model for the discriminator, so all our experiments use the same architecture for the generator. In all our experiments, we equip the generator with conditional BN. This includes our experiments with \"AC-GANs\", as well as the \"concat\" and \"projection\" models. \nWe can indeed explore the same result with generators equipped with a different way to introduce the conditional information (such as label concatenation); however, we intend not to make this (generator structure) the focus of our paper. On the basis of our theoretical motivations (Section 3), we also believe that our approach will perform well even with a different way of label conditionalization of the generator. 
\nWe would also like to emphasize that our projection model also prevails on the super-resolution task, suggesting that our success is not model- or task-specific. \nLastly, as interesting as it is, we could not find a way to include the dependence of the performance on the image resolution within the scope of our paper. \n\n\n>2) \n>FID in real data. The numbers in Table 1 appear favorable to the projection model. Please add error bars (based on Figure 4, I would imagine they are quite large).\n\nWe are sorry, but we are a little confused about this suggestion. First of all, we are dealing with images from different classes in our experiments. The difficulty of image generation differs across each class, and the intra-class FID will depend on the dataset of each class. We therefore found no particular need for showing the size of its variance (error bar) in this experiment. The goal of our Figure 4 here is to simply show that our projection method outperforms \"concat\" and \"AC-GANs\" on \"most of the classes\", and we felt it more appropriate to visualize our claim with a scatter plot. \n\n\n>Additionally, would it be possible to compute this statistic for *real* images? I would be curious to know what the FID looks like as a 'gold standard.'\n\nPlease take a look at the definition of FID (p. 5, Heusel et al., 2017). FID is a measure of the difference between two distributions. If there are infinitely many 'real' images, the FID between 'real' images and 'real' images is trivially 0. In our paper, we are comparing the empirical distribution of generated samples over 5000 samples against that of the training 'real' images. If we compute the empirical distribution of 'real' images against another empirical distribution of the 'real' images, we are bound to observe some nonzero FID value. However, we find no particular importance in computing such a value. \n\n\n> Minor comments:\n>- I believe you have the incorrect reference for conditional batch normalization on Page 5.\n>- Please enlarge images in Figure 5-8. Hard to see the detail of 128x128 images.\n>- Please add citations for Figures 1a-1b. Do these correspond with some known models?\n\nThanks for pointing out the incorrect references! We will revise the designated citations accordingly. We will also modify the figure images to improve their visibility.\n", "We are very glad to hear that you enjoy our manuscript!\n\n> While the paper admittedly leaves theoretical work for future work, the paper would be much stronger if the authors could perform an ablation study to provide readers with more intuition on why this works. One experiment could be: sticking y to every hidden layer of D before the current projection layer, and removing these y's increasingly and seeing how performance changes.\n>While the contribution is significant, more experiments providing more intuition into why this projection works so well would make the paper much stronger.\n\nThe ablation study was in fact a vexing issue in our paper, and we are still unsure of a way to theoretically back up our results. We may attempt your suggestion, and will meanwhile continue looking for other convincing experiments.\n\n\n> Should explicitly define p, q, r upfront before Equation 1 (or between Eq1 and Eq2).\n> PPG should be PPGNs.\n\nThanks for pointing out the mistakes! 
We will make the changes accordingly in the revised version.\n \n", "We thank all three reviewers for their thorough reading of our manuscript and their comments and suggestions.\nWe responded to all the suggestions and made corrections for each reviewer’s comments separately.", "We clearly made typos in the reference. \nThe reference you mentioned is the very reference we intended to cite. \nAs for the use of the word “FiLM”, we would like to stick for now with “conditional batch normalization” to make it easy for readers to readily grasp the framework of our algorithm. ", "In the paper, the authors use Conditional Batch Normalization and refer to the following paper:\nVincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropietro, and Aaron Courville. Adversarially learned inference. In ICLR, 2017.\nAlthough this paper is related to adversarial learning, it is not related to Conditional Batch Normalization.\n\nI believe there may be some confusion in the references. The following papers may be more relevant, as they both introduce Conditional Normalization in different contexts:\nDumoulin, V., Shlens, J., and Kudlur, M. A learned representation for artistic style. In ICLR, 2017.\nde Vries, H., Strub, F., Mary, J., Larochelle, H., Pietquin, O., and Courville, A. Modulating early visual processing by language. In NIPS, 2017.\n\nInterestingly, subsequent work has shown that the effect of this form of conditioning can be decorrelated from normalization layers, thus referring to the method as Feature-wise Linear Modulation, or FiLM:\nPerez E., Strub F., de Vries H., Dumoulin V., Courville A. FiLM: Visual Reasoning with a General Conditioning Layer. In AAAI, 2018.\n\nIt may also be worthwhile to consider updating the name used in the paper from Conditional Batch Normalization to FiLM, to follow the latest literature on this method.\n" ]
[ 6, 7, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 5, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_ByS1VpgRZ", "iclr_2018_ByS1VpgRZ", "iclr_2018_ByS1VpgRZ", "BJYUqi--G", "SynfBlcgf", "iclr_2018_ByS1VpgRZ", "SynfBlcgf", "BJSIkW61f", "rkyTeFweM", "iclr_2018_ByS1VpgRZ", "r1-ygQixf", "iclr_2018_ByS1VpgRZ" ]
iclr_2018_S1v4N2l0-
Unsupervised Representation Learning by Predicting Image Rotations
Over the last years, deep convolutional neural networks (ConvNets) have transformed the field of computer vision thanks to their unparalleled capacity to learn high level semantic image features. However, in order to successfully learn those features, they usually require massive amounts of manually labeled data, which is both expensive and impractical to scale. Therefore, unsupervised semantic feature learning, i.e., learning without requiring manual annotation effort, is of crucial importance in order to successfully harvest the vast amount of visual data that are available today. In our work we propose to learn image features by training ConvNets to recognize the 2d rotation that is applied to the image that they get as input. We demonstrate both qualitatively and quantitatively that this apparently simple task actually provides a very powerful supervisory signal for semantic feature learning. We exhaustively evaluate our method in various unsupervised feature learning benchmarks and we exhibit in all of them state-of-the-art performance. Specifically, our results on those benchmarks demonstrate dramatic improvements w.r.t. prior state-of-the-art approaches in unsupervised representation learning and thus significantly close the gap with supervised feature learning. For instance, in the PASCAL VOC 2007 detection task our unsupervised pre-trained AlexNet model achieves the state-of-the-art (among unsupervised methods) mAP of 54.4%, which is only 2.4 points lower than the supervised case. We get similarly striking results when we transfer our unsupervised learned features to various other tasks, such as ImageNet classification, PASCAL classification, PASCAL segmentation, and CIFAR-10 classification. The code and models of our paper will be published on: https://github.com/gidariss/FeatureLearningRotNet
accepted-poster-papers
The paper proposes a new way of learning image representations from unlabeled data by predicting the image rotations. The problem formulation implicitly encourages the learned representation to be informative about the (foreground) object and its rotation. The idea is simple, but it turns out to be very effective. The authors demonstrate strong performance in multiple transfer learning scenarios, such as ImageNet classification, PASCAL classification, PASCAL segmentation, and CIFAR-10 classification.
train
[ "HJ91THweG", "HyCI-CKeG", "BywXN7WMG", "BJDsmT7NM", "S1XsOz6fM", "SyOCqz6fz", "H1TdLfaff", "SJhoHzTff", "SJsHNf6ff", "r1Tf0Z6fM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author" ]
[ "The paper proposes a simple classification task for learning feature extractors without requiring manual annotations: predicting one of four rotations that the image has been subjected to: by 0, 90, 180 or 270º. Then the paper shows that pre-training on this task leads to state-of-the-art results on a number of popular benchmarks for object recognition, when training classifiers on top of the resulting representation.\n\nThis is a useful discovery, because generating the rotated images is trivial to implement by anyone. It is a special case of the approach by Agrawal et al 2015, with more efficiency.\n\nOn the negative side, this line of work would benefit from demonstrating concrete benefits. The performance obtained by pre-training with rotations is still inferior to performance obtained by pre-training with ImageNet, and we do have ImageNet so there is no reason not to use it. It would be important to come up with tasks for which there is not one ImageNet, then techniques such as that proposed in the paper would be necessary. However rotations are somewhat specific to images. There may be opportunities with some type of medical data.\n\nAdditionally, the scope of the paper is a little bit restricted, there is not that much to take home besides the the following information: \"predicting rotations seems to require a lot of object category recognition\".\n\n\n\n", "Strengths:\n* Very simple strategy for unsupervised learning of deep image features. Simplicity of approach is a good quality in my view.\n* The rationale for the effectiveness of the approach is explained well.\n* The representation learned from unlabeled data is shown to yield strong results on image categorization (albeit mostly in scenarios where the unsupervised features have been learned from the *same* dataset where classification is performed -- more on this below).\n* The image rotations are implemented in terms of flipping and transposition, which do not create visual artifacts easily recognizable by deep models.\n\nWeaknesses:\n* There are several obvious additional experiments that, in my view, would greatly strengthen this work:\n1. Nearly all of the image categorization results (with the exception of those in Table 4) are presented for the contrived scenario where the unsupervised representation is learned from the same training set as the one used for the final supervised training of the categorization model. This is a useless application scenario. If labels for the training examples are available, why not using them for feature learning given that this leads to improved performance (see results in Tables)? More importantly, this setup does not allow us to understand how general the unsupervised features are. Maybe they are effective precisely because they have been learned from images of the 10 classes that the final classifier needs to distinguish... I would have liked to see some results involving unsupervised learning from a dataset that may contain classes different from those of the final test classification or, even better, from a dataset of randomly selected images that lack categorical coherence (e.g., photos randomly picked from the Web, such as Flickr pics).\n2. In nearly all the experiments, the classifier is built on top of the frozen unsupervised features. This is in contrast with the common practice of finetuning the entire pretrained unsupervised net on the supervised task. 
It'd be good to know why the authors opted for the different setup and to see in any case some supervised finetuning results.\n3. It would be useful to see the accuracy per class both when using unsupervised features as well as fully-supervised features. There are many objects that have a canonical pose/rotation in the world. Forcing the unsupervised features to distinguish rotations of such objects may affect the recognition accuracy for these classes. Thus, my request for seeing how the unsupervised learning affects class-specific accuracy.\n4. While the results in Table 2 are impressive, it appears that the different unsupervised learning methods reported in this table are based on different architectures. This raises the question of whether performance gains are due to the better mechanism for unsupervised learning or rather the better network architecture.\n5. I do understand that using only 0, 90, 180 and 270 degree rotations eliminates the issue of potentially recognizable artifacts. Nevertheless, it'd be interesting to see what happens empirically when the number of discrete rotations is increased, e.g., by including 45, 135, 225 and 315 degree rotations. And what happens if you use only 0 and 180? Or only 90 and 270?\n* While the paper is easy to understand, at times the writing is poor and awkward (e.g., opening sentence of intro, first sentence in section 2.2).", "**Paper Summary**\n This paper proposes a self-supervised method, RotNet, to learn effective image feature from images by predicting the rotation, discretized into 4 rotations of 0, 90, 180, and 270 degrees. The authors claim that this task is intuitive because a model must learn to recognize and detect relevant parts of an image (object orientation, object class) in order to determine how much an image has been rotated. \nThey visualize attention maps from the first few conv layers and claim that the attend to parts of the image like faces or eyes or mouths. They also visualize filters from the first convolutional layer and show that these learned filters are more diverse than those from training the same model in a supervised manner. \n\tThey train RotNet to learn features of CIFAR-10 and then train, in a supervised manner, additional layers that use RotNet feature maps to perform object classification. They achieve 91.16% accuracy, outperforming other unsupervised feature learning methods. They also show that in a semi-supervised setting where only a small number of images of each category is available at training time, their method outperforms a supervised method.\n\tThey next train RotNet on ImageNet and use the learned features for image classification on ImageNet and PASCAL VOC 2007 as well as object detection on PASCAL VOC 2007. They achieve an ImageNet and PASCAL classification score as well as an object detection score higher than other baseline methods.\n This task requires the ability to understand the types, the locations, and the poses of the objects presented in images and therefore provides a powerful surrogate supervision signal for representation learning. To demonstrate the effectiveness of the proposed method, the authors evaluate it under a variety of tasks with different settings. \n \n \n\n**Paper Strengths**\n- The motivation of this work is well-written.\n- The proposed self-supervised task is simple and intuitive. 
This simple idea of using image rotation to learn features, easy to implement image rotations without any artifacts\n- Requiring no scale and aspect ratio image transformations, the proposed self-supervised task does not introduce any low-level visual artifacts that will lead the CNN to learn trivial features with no practical value for the visual perception tasks.\n- Training the proposed model requires the same computational cost as supervised learning which is much faster than training image reconstruction based representation learning frameworks.\n- The experiments show that this representation learning task can improve the performance when only a small amount of annotated examples is available (the semi-supervised settings).\n- The implementation details are included, including the way of implementing image rotations, different network architectures evaluated on different datasets, optimizers, learning rates with weight decayed, batch sizes, numbers of training epochs, etc. \n- Outperforms all baselines and achieves performance close to, but still below, fully supervised methods\n- Plots rotation prediction accuracy and object recognition accuracy over time and shows that they are correlated\n\n\n\n**Paper Weaknesses**\n- The proposed method considers a set of different geometric transformations as discrete and independent classes and formulates the task as a classification task. However, the inherent relationships among geometric transformations are ignored. For example, rotating an image 90 degrees and rotating an image 180 degrees should be closer compared to rotating an image 90 degrees and rotating an image 270 degrees.\n- The evaluation of low-level perception vision task is missing. In particular, evaluating the learned representations on the task of image semantic segmentation is essential in my opinion. Since we are interested in assigning the label of an object class to each pixel in the image for the task, the ability to encode semantic image feature by learning from performing the self-supervised task can be demonstrated.\n- The figure presenting the visualization of the first layer filters is not clear to understand nor representative of any finding.\n- ImageNet Top-1 classification results produced by Split-Brain (Zhang et al., 2016b) and Counting (Noroozi et al., 2017) are missing which are shown to be effective in the paper [Representation Learning by Learning to Count](https://arxiv.org/abs/1708.06734).\n- An in-depth analysis of the correlation between the rotation prediction accuracy and the object recognition accuracy is missing. Showing both the accuracies are improved over time is not informative.\n- Not fully convinced on the intuition, some objects may not have a clear direction of what should be “up” or “down” (symmetric objects like balls), in Figure 2, rotated image X^3 could plausibly be believed as 0 rotation as well, do the failure cases of rotation relate to misclassified images?\n- “remarkably good performance”, “extremely good performance” – vague language choices (abstract, conclusion)\n- Per class breakdown on CIFAR 10 and/or PASCAL would help understand what exactly is being learned\n- In Figure 3, it would be better to show attention maps on rotated images as well as activations from other unsupervised learning methods. With this figure, it is hard to tell whether the proposed model effectively focuses on high level objects.\n- In Figure 4, patterns of the convolutional filters are not clear. 
It would be better to make the figures clear by using grayscale images and adjusting contrast.\n- In Equation 2, the objective should be maximizing the sum of losses or minimizing the negative. Also, in Equation 3, the summation should be computed over y = 1 ~ K, not i = 1 ~ N.\n\n\n\n**Preliminary Evaluation**\nThis paper proposes a self-supervised task which allows a CNN to learn meaningful visual representations without requiring supervision signal. In particular, it proposes to train a CNN to recognize the rotation applied to an image, which requires the understanding the types, the locations, and the poses of the objects presented in images. The experiments demonstrate that the learned representations are meaningful and transferable to other vision tasks including object recognition and object detection. Strong quantitative results outperforming unsupervised representation learning methods, but lacking qualitative results to confirm/interpret the effectiveness of the proposed method.", "I thank the authors for their response. The revised version of the paper includes several new experiments that address most of my questions. Specifically, I appreciate the following new analyses:\n- performance vs number of rotations (Table 2);\n- accuracy per class (Table 9);\n- effect of fine-tuning (Table 3);\n- generalization obtained by unsupervised learning on a dataset different from that used for subsequent supervised training (Table 6).\n\nOne of my questions was about how much of the good performance of the method was due to learning the unsupervised features on the same training set used by the supervised learning. The newly-added Table 6 partly addresses this question. However, I would suggest to add to this table a baseline corresponding to training the RotNet unsupervised features on Places in order to have a direct comparison with the same features trained on ImageNet (last row).\n\nI am somewhat disappointed in the responses provided by the authors about two of the criticisms that I had raised: 1) doing unsupervised training on a training set involving only classes in the test set is a contrived setup, 2) lack of finetuning results. The authors respond to the former point by saying that my statement is inaccurate and to the latter by stating that they disagree with my point. I find both of these answers harsh and unnecessary. In my review I preambled both of my points by saying \"... *nearly* all of the results ....\". In fact, I did recognize in my review that Table 4 provides an exception to both of these points. My criticism was aimed at convincing the authors to run additional experiments around these two aspects. The revision contains new experiments that in my view significantly strengthen this work. Pointing to these new results would have sufficed....", "We thank the reviewer for the valuable feedback. This is the 1st part of the answer to his/her comments.\n\nComment:\n\"Nearly all of the image categorization results (with the exception of those in Table 4) are presented for the contrived scenario where the unsupervised representation is learned from the same training set as the one used for the final supervised training of the categorization model. This is a useless application scenario. 
If labels for the training examples are available, why not using them for feature learning given that this leads to improved performance (see results in Tables)?\"\n\nAnswer:\nWe disagree with the above statement that we do not present enough evaluation results for cases where a different training set is used between the unsupervised and supervised tasks.\n\nIn Table 4 (which is Table 7 in the revised version of the paper) we evaluate the unsupervised learned features on THREE DIFFERENT PASCAL tasks: image classification, object detection, and object segmentation (the segmentation results were added in the revised version of the paper). These correspond to core problems in computer vision and evaluating the transferability of learned ConvNet features on those tasks is one of the most widely used and well-established benchmark for unsupervised representation learning methods [1,2,3,4].\n\nMoreover, in Figure 5.b we evaluate our unsupervised representation learning method on a semi-supervised setting when only a small part of the available training data are labelled and we demonstrate that our method can leverage the unlabelled training data to improve its accuracy on the test set.\n\nFurthermore, regarding the evaluation experiments that utilize the same training set for both the unsupervised and supervised learning (e.g. CIFAR-10 and ImageNet classification tasks), we note that this type of experiments have been proposed and are extensively used in all prior feature learning methods [1,2,3,4]. Therefore, they provide a well-established benchmark based on which we can compare to prior approaches. The reason why this is considered to be a useful benchmark is because it allows one to evaluate the quality of the unsupervised learned features by directly comparing them with the features learned in supervised way on the same training set (which provides an upper bound on the performance of the unsupervided features).\n\n------\n\nComment:\n\"More importantly, this setup does not allow us to understand how general the unsupervised features are. Maybe they are effective precisely because they have been learned from images of the 10 classes that the final classifier needs to distinguish. I would have liked to see some results involving unsupervised learning from a dataset that may contain classes different from those of the final test classification or, even better, from a dataset of randomly selected images that lack categorical coherence (e.g., photos randomly picked from the Web, such as Flickr pics).\"\n\nAnswer:\nIn general we believe that the primary goal of unsupervised representation learning is to learn image representations appropriate for understanding (e.g., recognizing or detecting) the visual concepts that were \"seen\" during training. Learning features that generalize on \"unseen\" visual concepts is indeed a desirable property but it is something that even supervised representation learning methods might struggle with it and is not the (main) scope of our paper. Nevertheless, as requested by the reviewer, we added in the revised version of our paper an evaluation of our unsupervised learned features on the scene classification task of Places205 benchmark (see Table 6). Note that for the scene classification results, the unsupervised features were learned on ImageNet that contains classes different from those of the scene classification task of Places205. \n\n------\n\n[1] Richard Zhang et al, Colorful Image Colorization. 
\n[2] Jeff Donahue et al, Adversarial Feature Learning.\n[3] Noroozi and Favaro, Unsupervised learning of visual representations by solving jigsaw puzzles.\n[4] Piotr Bojanowski and Armand Joulin, Unsupervised Learning by Predicting Noise.", "This is the 2nd part of the answer to the reviewer's comments.\n\nComment:\n\"In nearly all the experiments, the classifier is built on top of the frozen unsupervised features. This is in contrast with the common practice of finetuning the entire pretrained unsupervised net on the supervised task. It'd be good to know why the authors opted for the different setup and to see in any case some supervised finetuning results.\"\n\nAnswer:\nWe believe that this comment is inaccurate. First of all, for the PASCAL results in Table 4 (i.e., PASCAL classification, PASCAL detection, and the newly added PASCAL segmentation results), we FINETUNE THE ENTIRE NETWORK (see the last 3 columns of this table). Furthermore, in general for the experimental evaluation of our method on natural images (section 3.2), we want to emphasize that we follow THE SAME EVALUATION SETUP that prior unsupervised feature learning methods have used [1,2,3,4] and we did not propose a new one (this also allows us to compare with these approaches).\n\nRegarding the experiments presented in section 3.1 (that includes results in CIFAR-10), those were meant as a proof of concept of our work and to help better analyze various aspects of our approach before we move onto the more challenging, but also much more time consuming, experiments on ImageNet (see section 3.2). Therefore, for the experimental evaluation in CIFAR-10 we mimicked the evaluation setup that is employed by prior approaches on ImageNet (i.e., unsupervised feature learning on ImageNet and then training non-linear classifiers on top of them for the ImageNet classification task). Other than that we did not have any particular reason for not fine-tuning the learned features. However, as requested by the reviewer, we added experiments with fine-tuning in the revised version of the paper (see Table 3). We observe that by fine-tuning the unsupervised learned features this further improves the classification performance, thus reducing even more the gap with the supervised case.\n\n------\n\nComment:\n\"It would be useful to see the accuracy per class both when using unsupervised features as well as fully-supervised features. There are many objects that have a canonical pose/rotation in the world. Forcing the unsupervised features to distinguish rotations of such objects may affect the recognition accuracy for these classes. Thus, my request for seeing how the unsupervised learning affects class-specific accuracy.\"\n\nAnswer:\nWe added such results in Tables 8 and 9 (in appendix B) of the revised version of the paper.\n\n------\n\nComment:\n\"While the results in Table 2 are impressive, it appears that the different unsupervised learning methods reported in this table are based on different architectures. This raises the question of whether performance gains are due to the better mechanism for unsupervised learning or rather the better network architecture.\"\n\nAnswer:\nIndeed, each entry in Table 2 (Table 3 in the revised manuscript) has a different network architecture. It was not really possible for us to implement our method with each of those architectures and so those results are just indicative and not meant for direct comparison (we added this clarification in the revised manuscript - see caption of Table 3). 
The main bulk of experiments that directly compares our approach against other (more recent and more relevant) approaches is presented in section 3.2 of our paper. Regarding Table 2, we believe the most interesting and remarkable finding is the very small performance gap between our unsupervised feature learning method and the supervised case (that both use exactly the same network architecture).\n\n------\n\nComment:\n\"I do understand that using only 0, 90, 180 and 270 degree rotations eliminates the issue of potentially recognizable artifacts. Nevertheless, it'd be interesting to see what happens empirically when the number of discrete rotations is increased, e.g., by including 45, 135, 225 and 315 degree rotations. And what happens if you use only 0 and 180? Or only 90 and 270?\"\n\nAnswer:\nWe thank the reviewer for this suggestion. Please see Table 2 and relevant discussion in paragraph ``\"Exploring the quality of the learned features w.r.t. the number of recognized rotations\" (section 3.1) in the revised version of the paper.\n\n------\n\n[1] Richard Zhang et al, Colorful Image Colorization. \n[2] Jeff Donahue et al, Adversarial Feature Learning.\n[3] Noroozi and Favaro, Unsupervised learning of visual representations by solving jigsaw puzzles.\n[4] Piotr Bojanowski and Armand Joulin, Unsupervised Learning by Predicting Noise.", "This is the 3rd part of the answer to the reviewer's comments.\n\nComment:\n\"Not fully convinced on the intuition, some objects may not have a clear direction of what should be “up” or “down” (symmetric objects like balls), in Figure 2, rotated image X^3 could plausibly be believed as 0 rotation as well, do the failure cases of rotation relate to misclassified images?\"\n\nAnswer:\nRegarding the fact that some images might have ambiguous orientation, we note that this type of training examples comprise only a small part of the dataset and can essentially be seen as a small amount of label noise, which thus poses no problem for learning. On the contrary, the great majority of the used images have an unambiguous orientation. Therefore, the ConvNet, by trying to solve the rotation prediction task, will eventually be forced to learn object-specific features. This is also evidenced by the very strong experimental performance of these features when applied on a variety of different tasks including those of object recognition, object detection, object segmentation, and scene classification tasks (section 3 of the paper).\n\nConcerning the question posed by the reviewer if there is any connection between failure cases for rotation prediction and misclassifications w.r.t. object recognition, we did the following test in order to explore if there is any such correlation: first, we define as y0 a binary variable that indicates if an image is misclassified in the object recognition task by a fully supervised model, as y1 a binary variable that indicates if an image is misclassified in the object recognition task by our unsupervised learned RotNet model (by training a non-linear classifier on top of the RotNet features), and as x a continuous variable that indicates the fraction of rotations (out of the 4 possible ones per image) that are misclassified by RotNet. The point biserial correlation coefficient (https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.stats.pointbiserialr.html) between the y1 and x variables on CIFAR-10 is 0.1473 with p-value 1.286e-49 while between the y0 and x variables is 0.1799 with p-value=1.5404e-73. 
Therefore, it seems that there is little correlation between failing to classify the rotations of an image and failing to classify the object that it depicts. Moreover, this holds regardless if we use a fully supervised object classifier (0.1799 correlation) or if we use an object classifier based on features learnt on the rotation prediction task.\n\n------\n\nComment:\n\" “remarkably good performance”, “extremely good performance” – vague language choices (abstract, conclusion) \"\n\nAnswer:\nWe rephrased the corresponding text to make it even more clear that the above statements relate to the state-of-the-art experimental results achieved by our method, which surpass prior approaches by a significant margin. \n\n------\n\nComment:\n\"Per class breakdown on CIFAR 10 and/or PASCAL would help understand what exactly is being learned\"\n\nAnswer:\nWe added such results in Tables 8 and 9 (in appendix B) of the revised version of the paper.\n\n------\n\nComment:\n\"In Figure 3, it would be better to show attention maps on rotated images as well as activations from other unsupervised learning methods. With this figure, it is hard to tell whether the proposed model effectively focuses on high level objects.\"\n\nAnswer:\nIn the revised version of the paper in Figure 3 we added attention maps generated by a supervised model. By comparing them with those of our unsupervised model we observe that both of them focus on similar areas of the image in order to accomplish their task. Also, in Figure 6 (in appendix A), we added the attention maps of the rotated versions of the images. We observe that the attention maps of all the rotated images is roughly the same which means the attention maps are equivariant w.r.t. image rotations. This practically means that in order to accomplish the rotation prediction task the network focuses on the same object parts regardless of the image rotation.\n\n------\n\nComment:\n\"In Equation 2, the objective should be maximizing the sum of losses or minimizing the negative. Also, in Equation 3, the summation should be computed over y = 1 ~ K, not i = 1 ~ N.\"\n\nAnswer:\nWe thank the reviewer for identifying the above typos. We fixed them in the revised version of the paper (see equation 3). \n\n------", "This is the 2nd part of the answer to the reviewer's comments.\n\nComment:\n\"ImageNet Top-1 classification results produced by Split-Brain (Zhang et al., 2016b) and Counting (Noroozi et al., 2017) are missing which are shown to be effective in the paper.\"\n\nAnswer:\nThe ImageNet Top-1 classification results for the NON-LINEAR classifiers of the Split-Brain and the Counting methods are missing because those methods do not report those results in their papers. However, in Table 4 (it is Table 7 in the revised version of the paper) we compare against the Split-Brain and Counting methods on the PASCAL tasks (i.e., classification, detection, and segmentation) and our method demonstrates state-of-the-art results. Furthermore, in the revised version of our paper we added the ImageNet Top-1 classification results for LINEAR classifiers of our method as well as prior methods that have reported such results (including Split-Brain and Counting methods) and again our approach achieves state-of-the-art results (see Table 5). \n\n-------\n\nComment:\n\"An in-depth analysis of the correlation between the rotation prediction accuracy and the object recognition accuracy is missing. 
Showing both the accuracies are improved over time is not informative.\"\n\nAnswer:\nFirst we would like to clarify how we created the object recognition accuracy curve in Figure 5a and in general what Figure 5b demonstrates. In order to create the object recognition accuracy curve, in each training snapshot of RotNet (i.e., every 20 epochs), we pause its training procedure and we train from scratch (until convergence) a non-linear object classifier on top of the so far learned RotNet features (specifically the 2nd conv. block features). The object recognition accuracy curve depicts the accuracy of those non-linear object classifiers after the end of their training while the rotation prediction accuracy curve depicts the accuracy of the RotNet at those snapshots. Therefore, Figure 5a demonstrates the following fact: as the ability of the RotNet features for solving the rotation prediction task improves (i.e., as the rotation prediction accuracy increases), their ability to solve the object recognition task improves as well (i.e., the object recognition accuracy also increases).\n\nWe also did another experiment towards clarifying the possible existence of a relation between the two tasks but this time we explored their relation in the opposite direction, i.e., we used input features learnt on the object recognition task in order to see their effectiveness on training a small network for the rotation prediction task. Specifically, the rotation prediction network that we train on the CIFAR10 dataset has the same architecture as the 3rd (and last) conv. block of a NIN based ConvNet and this network is applied on top of the feature maps generated by the 2nd conv. block of a NIN based ConvNet trained on the object prediction task of CIFAR10. The rotation classification accuracy that this hybrid model achieves is 88.05, which is relatively close to the 93.0 classification accuracy achieved by a NIN based ConvNet trained solely on the rotation prediction task (and despite the fact that the first 2 conv. blocks of the hybrid model have been trained only with images of 0 degrees orientation). \n\nIf the reviewer would like to specify any additional concrete experiment (reasonably easy to implement) that could be used to further clarify the existence of such a relation between the two tasks, we would be happy to implement and test it.\n\n------\n\n\n", "We thank the reviewer for the valuable feedback. This is the 1st part of the answer to his/her comments.\n\nComment:\n\"The proposed method considers a set of different geometric transformations as discrete and independent classes and formulates the task as a classification task. However, the inherent relationships among geometric transformations are ignored. For example, rotating an image 90 degrees and rotating an image 180 degrees should be closer compared to rotating an image 90 degrees and rotating an image 270 degrees.\"\n\nAnswer:\nWe thank the reviewer for this suggestion on how to further improve the performance of our method (we will certainly try to explore if a modification of this type can be of any help). However, based on our intuition, we feel that the above modification to the rotation prediction task will most probably not have any positive effect with respect to the paper's goal of unsupervised representation learning simply because the rotation prediction task that we propose is used here just as a proxy for learning semantic representations. 
Moreover, it is debatable if the representations of two images that differ, e.g., by 90 degrees should always be closer than the representations of two images that differ by 180 degrees (if this is what the reviewer means). In any case, we would be glad to explore the above enhancement to our method proposed by the reviewer and report any positive findings in the final version of the paper.\n\nA first experiment that we did towards that direction is to modify the target distributions used in the cross entropy loss during the training of the rotation prediction task: more specifically, instead of using target distributions that place the entire probability mass on the ground truth rotation (as before), we used distributions that also allow some small probability mass to be placed on the rotations that differ only by 90 degrees from the ground truth rotation. \nHowever, this modification did not offer any performance improvement when the learned features were tested on the CIFAR-10 classification task. On the contrary, the classification accuracy was slightly reduced from 89.06 to 88.91.\n\n------ \n\nComment:\n\"The evaluation of low-level perception vision task is missing. In particular, evaluating the learned representations on the task of image semantic segmentation is essential in my opinion.\"\n\nAnswer:\nWe agree with the reviewer. Unfortunately we did not have this experiment ready before the submission deadline. However, we added now the segmentation results in the revised version of the paper (see Table 7); we observe that again our method demonstrates state-of-the-art performance among the unsupervised approaches.\n\n------\n\nComment:\n\"In Figure 4, patterns of the convolutional filters are not clear. It would be better to make the figures clear by using grayscale images and adjusting contrast.\"\n\nAnswer:\nIn the revised version of the paper, we tried to improve the clarity of Figure 4 by further increasing the contrast. \n\n------\n\nComment:\n\"The figure (Figure 4) presenting the visualization of the first layer filters is not clear to understand nor representative of any finding.\"\n\nAnswer:\nHowever, we disagree with the statement that the visualizations of the 1st layer filters is not representative of any finding. It is true that the visualization of the 1st layer filters does not (directly) reveal the nature of the higher level features that a network learns, which is also what we are interested to understand. However, it very clearly demonstrates the nature of the low-level features that a network learns, which is also of interest, and, in our case, it shows that these features are very similar to those that a supervised object classification network learns. Due to the above reason, this type of visualization has been extensively used both for supervised methods [1] and for unsupervised methods [2,3,4]. \n\nFurthermore, concerning the interpretation of the higher level features learned by our method, since it is difficult to provide a similar visualization as the one for the 1st layer filters, we choose to visualize instead the attention maps that those layers generate. \n\n------\n\n[1] Alex Krizhevsky et al, ImageNet Classification with Deep Convolutional Neural Networks. \n[2] Jeff Donahue et al, Adversarial Feature Learning.\n[3] Noroozi and Favaro, Unsupervised learning of visual representations by solving jigsaw puzzles.\n[4] Piotr Bojanowski and Armand Joulin, Unsupervised Learning by Predicting Noise.", "We thank the reviewer for the valuable feedback. 
Here we will answer his/her comments.\n\nComment: \n\"This is a useful discovery, because generating the rotated images is trivial to implement by anyone. It is a special case of the approach by Agrawal et al 2015, with more efficiency.\"\n\n-----\n\nAnswer: \nWe would like to mention that our method is NOT a special case of the Egomotion method of Agrawal et al 2015. More specifically, the Egomotion method employs a ConvNet model with a siamese-like architecture that takes as input TWO CONSECUTIVE VIDEO FRAMES and is trained to predict (through regression) their camera transformation. Instead, in our approach, the ConvNet takes as input A SINGLE IMAGE to which we have applied a random geometric transformation (rotation) and is trained to recognize (through classification) this geometric transformation WITHOUT HAVING ACCESS TO THE INITIAL IMAGE. These are two fundamentally different approaches.\n\nTo make this difference clearer, consider, for instance, the case where we feed the siamese ConvNet model of the Egomotion method with a pair of images that consists of an image and a rotated copy of it. In this case, training the ConvNet to predict the difference in rotation would lead to learning some very trivial features (e.g., it would only need to compare the four corners of the image in order to solve the task). \n\nOn the contrary, since the ConvNet used in our approach takes as input a single image, it cannot recognize the type of geometric transformation applied between consecutive frames that is used in the Egomotion method. Instead, our approach requires utilizing geometric transformations (e.g., the image rotations of 0, 90, 180, and 270 degrees) that transform the image in such a manner that it is unambiguous to infer the applied transformation without having access to the initial image.\n\nWe believe that the above differences force our ConvNet model to learn different and, in our opinion, more high-level features than the Egomotion method.\n\nComments:\n\"On the negative side, this line of work would benefit from demonstrating concrete benefits. The performance obtained by pre-training with rotations is still inferior to the performance obtained by pre-training with ImageNet, and we do have ImageNet so there is no reason not to use it. It would be important to come up with tasks for which there is not one ImageNet, then techniques such as that proposed in the paper would be necessary. However rotations are somewhat specific to images. There may be opportunities with some type of medical data.\"\n\n\"Additionally, the scope of the paper is a little bit restricted, there is not that much to take home besides the following information: predicting rotations seems to require a lot of object category recognition.\"\n\n-----\n\nAnswer:\nTo be honest, we found the above criticism of our method unfair. It is true that one would ultimately like the performance of an unsupervised learning approach to surpass the performance of a fully supervised one, but, to the best of our knowledge, no published unsupervised learning method has managed to achieve this goal so far.\n\nFurthermore, the type of data our work focuses on (i.e., visual data) has attracted a tremendous amount of research interest as far as unsupervised learning is concerned. 
In fact, just over the last few years, one can cite numerous other unsupervised learning works focusing on exactly the same type of data (e.g., [1,2,3,4,5]), all of which have been very well received by the community and which have also appeared on top-rank computer vision or machine learning conferences.\n\n[1] Learning to See by Moving\n[2] Unsupervised Visual Representation Learning by Context Prediction\n[3] Colorful Image Colorization. \n[4] Unsupervised learning of visual representations by solving jigsaw puzzles.\n[5] Representation Learning by Learning to Count" ]
[ 6, 6, 6, -1, -1, -1, -1, -1, -1, -1 ]
[ 5, 4, 3, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_S1v4N2l0-", "iclr_2018_S1v4N2l0-", "iclr_2018_S1v4N2l0-", "SyOCqz6fz", "HyCI-CKeG", "HyCI-CKeG", "BywXN7WMG", "BywXN7WMG", "BywXN7WMG", "HJ91THweG" ]
iclr_2018_rJGZq6g0-
Emergent Communication in a Multi-Modal, Multi-Step Referential Game
Inspired by previous work on emergent communication in referential games, we propose a novel multi-modal, multi-step referential game, where the sender and receiver have access to distinct modalities of an object, and their information exchange is bidirectional and of arbitrary duration. The multi-modal multi-step setting allows agents to develop an internal communication significantly closer to natural language, in that they share a single set of messages, and that the length of the conversation may vary according to the difficulty of the task. We examine these properties empirically using a dataset consisting of images and textual descriptions of mammals, where the agents are tasked with identifying the correct object. Our experiments indicate that a robust and efficient communication protocol emerges, where gradual information exchange informs better predictions and higher communication bandwidth improves generalization.
accepted-poster-papers
An interesting paper, generally well-written. Though it would be nice to see that the methods and observations generalize to other datasets, it is probably too much to ask, as datasets with the required properties do not seem to exist. There is a clear consensus to accept the paper. + an interesting extension of previous work on emergent communication (e.g., referential games) + well-written paper
test
[ "rJhUvu5gf", "BJ8ZFxKgM", "SkN953tgG", "S1RjXkq7z", "S1oFQJqQG", "rJuI7k9XG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The setup in the paper for learning representations is different from many other approaches in the area, using two agents that communicate over descriptions of objects using different modalities. The experimental setup is interesting in that it allows comparing approaches in learning an effective representation. The paper does mention the agents will be available, but leaves open whether the dataset will also be available. For reproducibility and comparisons, this availability would be essential. \n\nI like that the paper gives a bit of context, but the presentation of results could be clearer, and I am missing some more explicit information on training and results (e.g. how long / how many training examples, how many testing, classification rates, etc.).\nThe paper says the training procedure is described in Appendix A, but as far as I can see, that contains only the table of notations. \n\n\n", "--------------\nSummary and Evaluation:\n--------------\n\nThe paper presents a nice set of experiments on language emergence in a multi-modal, multi-step setting. The multi-modal reference game provides an interesting setting for communication, with agents learning to map descriptions to images. The receiving agent's direct control over dialog length is also novel and allows for the interesting analysis presented in later sections. \n\nOverall I think this is an interesting and well-designed work; however, some details are missing that I think would make for a stronger submission (see weaknesses).\n\n\n--------------\nStrengths:\n--------------\n- Generally well-written with the Results and Analysis section appearing especially thought-out and nicely presented.\n\n- The proposed reference game provides a number of novel contributions -- giving the agents control over dialog length, providing both agents with the same vocabulary without constraints on how each uses it (implicit through pretraining or explicit in the structure/loss), and introducing an asymmetric multi-modal context for the dialog.\n\n- The analysis is extensive and well-grounded in the three key hypotheses presented at the beginning of Section 6.\n\n--------------\nWeaknesses:\n--------------\n\n- There is room to improve the clarity of Sections 3 and 4 and I encourage the authors to revisit these sections. Some specific suggestions that might help:\n\t\t- numbering all display style equations\n\t\t- when describing the recurrent receiver, explain the case where it terminates (s^t=1) first such that P(o_r=1) is defined prior to being used in the message generation equation. \n\n- I did not see an argument in support of the accuracy@K metric. Why is putting the ground truth in the top 10% the appropriate metric in this setting? Is it to enable comparison between the in-domain, out-domain, and transfer settings?\n\n- Unless I missed something, the transfer test set results only come up once in the context of attention methods and are not mentioned elsewhere. Why is this? 
It seems appropriate to include in Figure 5 if no where else in the analysis.\n\n- Do the authors have a sense for how sensitive these results are to different runs of the training process?\n\n- I did not understand this line from Section 5.1: \"and discarding any image with a category beyond the 398-th most frequent one, as classified by a pretrained ImageNet classifier'\"\n\n- It is not specified (or I missed it) whether the F1 scores from the separate classifier are from training or test set evaluations.\n\n- I would have liked to see analysis on the training process such as a plot of reward (or baseline adjusted reward) over training iterations. \n\n- I encourage authors to see the EMNLP 2017 paper \"Natural Language Does Not Emerge ‘Naturally’ in Multi-Agent Dialog\" which also perform multi-round dialogs between two agents. Like this work, the authors also proposed removing memory from one of the agents as a means to avoid learning degenerate 'non-dialog' protocols.\n\n- Very minor point: the use of fixed-length, non-sequence style utterances is somewhat disappointing given the other steps made in the paper to make the reference game more 'human like' such as early termination, shared vocabularies, and unconstrained utterance types. I understand however that this is left as future work.\n\n\n--------------\nCuriosities:\n--------------\n- I think the analysis is Figure 3 b,c is interesting and wonder if something similar can be computed over all examples. One option would be to plot accuracy@k for different utterance indexes -- essentially forcing the model to make a prediction after each round of dialog (or simply repeating its prediction if the model has chosen to stop). \n\n", "The paper proposes a new multi-modal, multi-step reference game, where the sender has access to visual data and the receiver has access to textual messages, and also the conversation can be terminated by the receiver when proper. \n\nLater, the paper describes their idea and extension in details and reports comprehensive experiment results of a number of hypotheses. The research questions seems straightforward, but it is good to see those experiments review some interesting points. One thing I am bit concerned is that the results are based on a single dataset. Do we have other datasets that can be used?\n\nThe authors also lay out further several research directions. Overall, I think this paper is easy to read and good. \n\n", "Thank you for your thoughtful comments.\n\n> The paper does mention the agents will be available, but leaves open whether the dataset will be also available.\n\nYou bring up a great point that in order to reproduce our results it would be necessary to have access to a similar dataset. In addition, even with written details of the implementation, it can be difficult to reproduce experiments. For these reasons, we’ve prepared to release the code and instructions on how to build the dataset, and will include a link in the de-anonymized version of the paper. We allude to this in section 5.2 under Code.\n\n> missing some more explicit information on training and results\n\nAnd \n\n> The paper says the training procedure is described in Appendix A\n\nWe also thank the reviewer for pointing out the typo in relation to Appendix A. In terms of training and results, a plot of the classification accuracy by epoch is shown in the updated Figure 6. 
We added the following details in sections 5.1 and 5.2 that should clear up confusion about the training procedure:\n\nThe number of images per class in the out-of-domain test set is 100 images per class (for 10 classes in total).\nWe use early stopping with a maximum 500 training epochs.\nWe train on a single GPU (Nvidia Titan X Pascal), and a single experiment takes roughly 8 hours for 500 epochs.\n\nIs the addition of these details sufficient? \n", "We’d like to thank the reviewer for their thoughtful feedback. In response to the following comment:\n\n> One thing I am bit concerned is that the results are based on a single dataset.\n\nA distinguishing property of our dataset is that, in addition to images, each class has an informative textual description, and there is a natural hierarchy of properties shared between classes. As there wasn’t a similar dataset already available, we had to collect the data ourselves. In section 5.2 of the de-anonymized version of the paper, we’ll include a link to our codebase which contains instructions to build such a dataset.\n", "We would like to thank the reviewer for their thoughtful compliments and criticism. In particular, the detailed list of areas for improvement have lead us to run additional experiments and make edits in the text that we believe have strengthened our work.\n\nLet us address your concerns and questions below.\n\n> analysis on the training process\n\nWe’ve updated Figure 6 in the paper to display Accuracy@1 in addition to Accuracy@6. We hope this metric plotted over each epoch gives a useful overview of the training process and some insight in how the model’s performance changes over time. \n\n> Do the authors have a sense for how sensitive these results are to different runs of the training process?\n\nWe ran six experiments with different random seeds and reported the mean and variance on their loss and accuracy in Appendix B, but would be open to include these values in the main text if this seems useful.\n\n> the transfer test set\n\nThere was not much to be gleaned from the transfer set besides the effect of the attention mechanism. We’re more explicit about saying so in Section 6.\n\n> the accuracy@K metric\n\nWe use this metric since many mammal classes are quite similar to each other, and we don't want to overpenalize predicting similar classes such as kangaroo and wallaby. As suggested by the reviewer, this metric also enables comparison between the in-domain, out-domain, and transfer test sets.\n\n> the F1 scores from the separate classifier are from training or test set evaluations\n\nThe plot in Figure 2a and its associated F1 scores are derived from the in-domain test set. \n\n> discarding any image with a category beyond the 398-th most frequent one\n\nWhen we build our dataset, we discard images that are not likely to be an animal, as determined by a pre-trained classifier.\n\n> numbering all display style equations\n\nWe appreciate the reviewer’s suggestion to add equation numbers, but believe that since we have so many equations, it is alright to only number the equations that we reference explicitly in the text. 
\n\n> when describing the recurrent receiver, explain the case where it terminates (s^t=1) first such that P(o_r=1) is defined prior to being used in the message generation equation\n\nThe first message of the receiver is learned as a separate parameter in all cases and we’ve mentioned this in the “Recurrent Receiver” portion of Section 3.\n\n> the analysis is Figure 3 b,c\n\nFor Figure 3b and 3c, we show only the top-4 predicted classes because the probabilities given to the other classes are negligible in comparison. The observation that we made regarding this figure (that as the conversation progresses, similar but incorrect categories receive smaller probabilities than the correct one) held for all other categories, but we limited to these two classes as we felt this sufficiently conveyed the idea.\n\n> I encourage authors to see the EMNLP 2017 paper \"Natural Language Does Not Emerge ‘Naturally’ in Multi-Agent Dialog\" which also perform multi-round dialogs between two agents. Like this work, the authors also proposed removing memory from one of the agents as a means to avoid learning degenerate 'non-dialog' protocols.\n\nAnd\n\n> Very minor point: the use of fixed-length, non-sequence style utterances is somewhat disappointing given the other steps made in the paper to make the reference game more 'human like' such as early termination, shared vocabularies, and unconstrained utterance types. I understand however that this is left as future work.\n\nThere are some matters that we will leave for future work. Kottur et al. explain how limiting memory can force consistency over different steps in a dialog. This can be a useful property, but our work was primarily concerned with the distribution over messages and the model’s prediction confidence. It’s a natural progression to investigate the meaning of these messages as a follow-up work, and to attempt models that encode meaning not only in individual words, but also the latent structure in sequences of words.\n" ]
[ 7, 7, 7, -1, -1, -1 ]
[ 3, 4, 4, -1, -1, -1 ]
[ "iclr_2018_rJGZq6g0-", "iclr_2018_rJGZq6g0-", "iclr_2018_rJGZq6g0-", "rJhUvu5gf", "SkN953tgG", "BJ8ZFxKgM" ]
iclr_2018_rytstxWAW
FastGCN: Fast Learning with Graph Convolutional Networks via Importance Sampling
The graph convolutional networks (GCN) recently proposed by Kipf and Welling are an effective graph model for semi-supervised learning. Such a model, however, is transductive in nature because parameters are learned through convolutions with both training and test data. Moreover, the recursive neighborhood expansion across layers poses time and memory challenges for training with large, dense graphs. To relax the requirement of simultaneous availability of test data, we interpret graph convolutions as integral transforms of embedding functions under probability measures. Such an interpretation allows for the use of Monte Carlo approaches to consistently estimate the integrals, which in turn leads to a batched training scheme as we propose in this work---FastGCN. Enhanced with importance sampling, FastGCN not only is efficient for training but also generalizes well for inference. We show a comprehensive set of experiments to demonstrate its effectiveness compared with GCN and related models. In particular, training is orders of magnitude more efficient while predictions remain comparably accurate.
accepted-poster-papers
Graph neural networks (incl. GCNs) have been shown effective on a large range of tasks. However, it has been so far hard (i.e. computationally expensive or requiring the use of heuristics) to apply them to large graphs. This paper aims to address this problem and the solution is clean and elegant. The reviewers generally find it well written and interesting. There were some concerns about the comparison to GraphSAGE (an alternative approach), but these have been addressed in a subsequent revision. + an important problem + a simple approach + convincing results + clear and well written
train
[ "SJce_4YlM", "r1n9o5jEM", "BkgVo9s4z", "SkaxvK_VM", "B1ymVPEgM", "HJDVPNYgf", "H1IdT6AlG", "B1QpuuTmz", "ryk5t8Tmf", "Sy312STmM", "Hyq35OnXG", "SJL8DwOff", "SyMp44dMM", "Byp_N4_ff", "HyiEV4dGM", "rkOyV7wGf", "SyhcQQvfM", "r1oFX1wGf", "rkv4i8Szf", "HJo69LBfz", "B1bf9LBMG", "ry3mYUSMM", "SJ6IdUrMG", "rk0cyiubM", "By1zUb00W", "r1grwep0Z" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "public", "author", "author", "author", "public", "public", "public", "author", "author", "author", "author", "author", "public", "author", "public" ]
[ "Update:\n\nI have read the rebuttal and the revised manuscript. Additionally I had a brief discussion with the authors regarding some aspects of their probabilistic framework. I think that batch training of GCN is an important problem and authors have proposed an interesting solution to this problem. I appreciated all the work authors put into the revision. In this regard, I have updated my rating. However, I am not satisfied with how the probabilistic problem formulation was presented in the paper. I would appreciate if authors were more upfront about the challenges of the problem they formulated and limitations of their results. I briefly summarize the key missing points below, although I acknowledge that solution to such questions is out of scope of this work.\n\n1. Sampling of graph nodes from P is not iid. Every subsequent node can not be equal to any of the previous nodes. Hence, the distribution changes and subsequent nodes are dependent on previous ones. However, exchangeability could be a reasonable assumption to make as order (in the joint distribution) does not matter for simple choices of P. Example: let V be {1,2,3} and P a uniform distribution. First node can be any of the {1,2,3}, second node given first (suppose first node is '2') is restricted to {1,3}. There is clearly a dependency and change of distribution.\n\n2. Theorem 1 is proven under the assumption that it is possible to sample from P and utilize Monte Carlo type argument. However, in practice, sampling is done from a uniform distribution over observed samples. Also, authors suggest that V may be infinite. Recall that for Monte Carlo type approaches to work, sampling distribution is ought to contain support of the true distribution. Observed samples (even as sample size goes to infinity) will never be able to cover an infinite V. Hence, Theorem 1 will never be applicable (for the purposes of evaluating population loss). Also note that this is different from a more classical case of continuous distributions, where sampling from a Gaussian, for instance, will cover any domain of true distribution. In the probabilistic framework defined by the authors it is impossible to cover domain of P, unless whole V is observed.\n\n----------------------------------------------------------------------\nThis work addresses a major shortcoming of recently popularized GCN. That is, when the data is equipped with the graph structure, classic SGD based methods are not straightforward to apply. Hence it is not clear how to deal with large datasets (e.g., Reddit). Proposed approach uses an adjacency based importance sampling distribution to select only a subset of nodes on each GCN layer. Resulting loss estimate is shown to be consistent and its gradient is used to perform the weight updates.\n\nProposed approach is interesting and the direction of the work is important given recent popularity of the GCN. Nonetheless I have two major question and would be happy to revisit my score if at least one is addressed.\n\nTheory:\nSGD requires an unbiased estimate of the gradient to converge to the global optima in the convex loss case. Here, the loss estimate is shown to be consistent, but not guaranteed to be unbiased and nothing is said about the gradient in Algorithm 1. Could you please provide some intuition about the gradient estimate? I might not be familiar with some relevant results, but it appears to me that Algorithm 1 will not converge to the same solution as full data GD would.\n\nPractice:\nPer batch timings in Fig. 
3 are not enough to argue that the method is faster as it might have poor convergence properties overall. Could you please show the train/test accuracies against training time for all compared methods?\n\nSome other concerns and questions:\n- It is not quite clear what P is. You defined it as distribution over vertices of some (potentially infinite) population graph. Later on, sampling from P becomes equivalent to uniform sampling over the observed nodes. I don't see how you can define P over anything outside of the training nodes (without defining loss on the unobserved data), as then you would be sampling from a distribution with 0 mass on the parts of the support of P, and this would break the Monte Carlo assumptions.\n- Weights disappeared in the majority of the analysis. Could you please make the representation more consistent.\n- a(v,u) in Eq. 2 and A(v,u) in Eq. 5 are not defined. Do they both correspond to entries of the (normalized) adjacency?", "- Changed the integral kernel notation \\hat{a}(u,v) to \\hat{A}(u,v)\n- Added training accuracy plots in Fig 4", "We appreciate your patience. For the minor comments, we have included the training accuracy plots in Fig 4 (appendix) and changed the lower case a to upper case.\n\nRegarding Eq 4, it is true that in practice the algorithm needs to take care of only the sample loss. We also agree that if a sample is considered fixed, then bootstrapping cannot go beyond the precision of whatever quantity the sample estimates. We do want to point out, on the other hand, that if the sample size is considered varying to infinity and likewise for the resample size, then the consistency of the estimator Eq 4 can be established. An intuitive explanation is something like-- if Eq 4 converges to the sample loss with probability one and the sample loss converges to the population loss with probability one, then Eq 4 converges to the population loss with probability one. Of course for the rigorous argument we need to take two dimensional limits.\n", "I have read the rebuttal and the updated manuscript. I still have several questions left before I revise my review and rating.\n\n1. I agree with the statement in Theorem 1, however I am under the impression that you imply that Eq. 4 is a consistent estimator of Eq. 3. This is not true and this is why I was asking about P. Moreover, this does not need to be true for Algorithm 1 to make sense. You want to obtain a consistent estimate of the gradient, which is based on the sample loss, not a population loss. And I agree that Eq. 4 is a consistent estimate of the sample loss based on your theory, however all of the discussion preceding Eq. 4 (Eq. 3 and Theorem 1 in particular) is for the population case and this makes the interpretation of Eq. 4 confusing. Could you please elaborate on what Eq. 4 is estimating?\n\nMinor suggestion and comment:\n1. You added the plot for test accuracy convergence in the Appendix, which is helpful. Although convergence of an optimization algorithm is better evaluated on the loss function it is optimizing - could you please add train accuracy convergence as well.\n2. Please use either a(u,v) or A(u,v) for entries of the adjacency (Eq. 2 and Eq. 5)\n", "The paper presents a novel view of GCN that interprets graph convolutions as integral transforms of embedding functions. This addresses the issue of lack of sample independence in training and allows for the use of Monte Carlo methods. It further explores variance reduction to speed up training via importance sampling. 
The idea comes with theoretical support and experimental studies.\n\nSome questions are as follows:\n\n1) could you elaborate on n/t_l in (5) that accounts for the normalization difference between matrix form (1) and the integral form (2) ?\n\n2) In Prop.2., there seems no essential difference between the two parts, as e(v) also depends on how the u_j's are sampled.\n\n3) what loss g is used in experiments?", "The paper focuses on the recently proposed graph convolutional network (GCN) framework.\nThe authors identify a couple of issues with GCN: the fact that both training and test data need to be present at training time, making it transductive in nature, and the fact that the notion of ‘neighborhood’ grows as the signal propagates through the network. The latter implies that GCNs can have a large memory footprint, making them impractical in certain cases. \nThe authors propose an alternative formulation that interprets the signals as vertex embedding functions; it also interprets graph convolutions as integral transforms of said functions.\nStarting from mini-batches consisting purely of training data (during training) each layer performs Monte Carlo sampling on the vertices to approximate the embedding functions.\nThey show that this estimator is consistent and can be used for training the proposed architecture, FastGCN, via standard SGD. \nFinally, they analyze the estimator’s variance and propose an importance-sampling based estimator that has minimal layer-to-layer variance.\nThe experiments demonstrate that FastGCN is much faster than the alternatives, while suffering a small accuracy penalty.\n\nThis is a very good paper. The ideas are solid, the writing is excellent and the results convincing. I have a few comments and concerns listed below.\n\nComments:\n1. I agree with the anonymous commenter that the authors should provide detailed description of their experimental setup.\n2. The timing of GraphSAGE on Cora is bizarre. I’m even slightly suspicious that something might have been amiss in your setup. It is by far the smallest dataset. How do you explain GraphSAGE performing so much worse on Cora than on the bigger Pubmed and Reddit datasets? It is also on Cora that GraphSAGE seems to yield subpar accuracy, while it wins the other two datasets.\n3. As a concrete step towards grounding the proposed method on state of the art results, I would love to see at least one experiment with the same (original) data splits used in previous papers. I understand that semi-supervised learning is not the purpose of this paper, however matching previous results would dispel any concerns about setup/hyperparameter mismatch. \n4. Another thing missing is an exploration (or at least careful discussion) as to why FastGCN performs worse than the other methods in terms of accuracy and how much that relative penalty can be.\n\nMinor comments:\n5. Please add label axes to Figure 2; currently it is very hard to read. Also please label the y axis in Figure 3.\n6. The notation change in Section 3.1 was well intended, however I feel like it slowed me down significantly while reading the paper. I had already absorbed the original notation and had to go back and forth to translate to the new one. \n", "This paper addresses the memory bottleneck problem in graph neural networks and proposes a novel importance sampling scheme that is based on sampling vertices (instead of sampling local neighbors as in [1]). 
Experimental results demonstrate a significant speedup in per-batch training time compared to previous works while retaining similar classification accuracy on standard benchmark datasets.\n\nThe paper is well-written and proposes a simple, elegant, and well-motivated solution for the memory bottleneck issue in graph neural networks.\n\nI think that this paper mostly looks solid, but I am a bit worried about the following assumption: “Specifically, we interpret that graph vertices are iid samples of some probability distribution”. As graph vertices are inter-connected and inter-dependent across edges of the graph, this iid assumption might be too strong. A short comment on why the authors take this particular interpretation would be helpful.\n\nIn the abstract the authors write: “Such a model [GCN], however, is transductive in nature because parameters are learned through convolutions with both training and test data.” — as demonstrated in Hamilton et al. (2017) [1], this class of models admits inductive learning as well as transductive learning, so the above statement is not quite accurate.\n\nFurthermore, a comment on whether this scheme would be useful for alternative graph neural network architectures, such as the one in MoNet [2] or the generic formulation of the original graph neural net [3] (nicely summarized in Gilmer et al. (2017) [4]) would be insightful (and would make the paper even stronger).\n\nI am very happy to see that the authors provide the code together with the submission (using an anonymous GitHub repository). The authors mention that “The code of GraphSAGE is downloaded from the accompany [sic] website, whereas GCN is self implemented.“ - Looking at the code it looks to me, however, as if it was based on the implementation by the authors of [5]. \n\nThe experimental comparison in terms of per-batch training time looks very impressive, yet it would be good to also include a comparison in terms of total training time per model (e.g. in the appendix). I quickly checked the provided implementation for FastGCN on Pubmed and compared it against the GCN implementation from [5], and it looks like the original GCN model is roughly 30% faster on my laptop (no batched training). This is not very surprising, as a fair comparison should involve batched training for both approaches. Nonetheless it would be good to include these results in the paper to avoid confusion.\n\nMinor issues:\n- The notation of the limit in Theorem 1 is a bit unclear. I assume the limit is taken to infinity with respect to the number of samples.\n- There are a number of typos throughout the paper (like “oppose to” instead of “opposed to”), these should be fixed in the revision.\n- It would be better to summarize Figure 3 (left) in a table, as the smaller values are difficult to read off the chart.\n\nOverall, I think that this paper can be accepted. The proposed scheme is a simple drop-in replacement for the way adjacency matrices are prepared in current implementations of graph neural nets and it promises to solve the memory issue of previous works while being substantially faster than the model in [1]. I expect the proposed approach to be useful for most graph neural network models.\n\nUPDATE: I would like to thank the authors for their detailed response and for adding additional experimental evaluation. My initial concerns have been addressed and I can fully recommend acceptance of this paper.\n\n[1] W.L. Hamilton, R. Ying, J. 
Leskovec, Inductive Representation Learning on Large Graphs, NIPS 2017\n[2] F. Monti, D. Boscaini, J. Masci, E. Rodala, J. Svoboda, M.M. Bronstein, Geometric deep learning on graphs and manifolds using mixture model CNNs, CVPR 2017\n[3] F. Scarselli, M. Gori, A.C. Tsoi, M. Hagenbuchner, G. Monfardini, The Graph Neural Network Model, IEEE Transactions on Neural Networks, 2009\n[4] J. Gilmer, S.S. Schoenholz, P.F. Riley, O. Vinyals, G.E. Dahl, Neural Message Passing for Quantum Chemistry, ICML 2017\n[5] T.N. Kipf, M. Welling, Semi-Supervised Classification with Graph Convolutional Networks, ICLR 2017", "It could be either that we miss your point of “unlabeled data” or you misunderstood our replies. May we solicit a few possible meanings of “unlabeled data” in order that the discussion be more fruitful? Assume that training set and test set are disjoint.\n\n1. “Unlabeled data” means vertices outside the training and test sets. We do need to assume that all vertices in the (possibly infinite) population graph have labels. The distribution P is defined on all vertices.\n\n2. “Unlabeled data” means the test set. Because learning has nothing to do with the test set, the empirical risk should not contain unlabeled data. The training set is obtained from iid sampling and nothing is said about the test set (standard learning theory). Bootstrapping should be done on the training set only. This is the point we made in the past reply.\n\n3. “Unlabeled data” means that some vertices in the training set are unlabeled. Since we are not doing transductive learning, such a situation is precluded.\n", "That is why I initially mentioned that you need to somehow define the loss on the unlabeled examples. There is a single set of nodes that was generated from P (which is training and test data together), however some of the nodes are unlabeled. When you are doing the bootstrapping, the unlabeled nodes should also be considered during the sub-sampling (and have a non-zero probability to be selected). ", "The confusion may be cleared by considering the difference between transductive learning and inductive learning. The former is the setting on which the original GCN is based, whereas our work extends to the latter. In the transductive setting, the training set and the test set, often disjoint, are used for learning. However, in the inductive setting, only the training set is used. Hence, the sampling that yields the training set, as well as the bootstrapping from the training set, has nothing to do with the test set. The iid assumption poses no contradiction, to our view.\n", "Sampling of train and test vertices does not appear iid to me. Collection of graph vertices in the test set is guaranteed to be disjoint from the train set, which introduces the dependency. Bootstrapping, on the other hand, requires independence to produce meaningful estimates and I don not see it being applicable when support of train and test data is predefined to be disjoint. I would appreciate a more rigorous explanation.", "Thank you for the quick updates! I really appreciate your responsiveness. \n\nRegarding large, sparse graphs: Yes, it is tough to find public datasets on the billion-node scale. I hope to release some larger networks (~100 million nodes) later this year, but I don't expect to have the data ready anytime soon. 
It's also a great point that the powerlaw structure of most real-world networks is likely to help a lot, and I look forward to experimenting with FastGCN-style sampling in some big graphs :)\n\nCheers,\nWill ", "We would like to update that the authors of GraphSAGE offered an improved implementation of their codes for small graphs. Now the timing of GraphSAGE on Cora is more favorable. Please see the last paragraph of Section 4 and also Table 3.", "OK we may have misunderstood your focus of the distinction between the two settings. Overall we would prefer to think of a graph with two (or more) parts, although your view of having two separate graphs and view them in a unified angle is also ok. Yes, the two parts (or the two graphs in your terms) may (or may not) be connected. It does not quite matter whether they are or not. One does not need to think along the lines of embedding propagation. In test time, the input features and the learned weights are the most important things for obtaining the final embedding. If the test part is disconnected from the training part, it merely means that the embeddings of the trained nodes do not affect those of the test nodes. The only connection between training and testing is the weights. The \\hat{A} contains both the training part and the test part.\n", "Thanks for clarifying the timing issue of GraphSAGE and providing a new implementation. We appreciate that! We have included the results of the new codes in the paper and made clear that GraphSAGE is not designed for small graphs (see toward the end of Section 4).\n\nWe do not have access to billion-node graphs that are very sparse and we would certainly love to try on them. Even though the graph is sparse, it might have a powerlaw structure that contributes dense connections crucial for downstream applications. It does not seem easy to expect what would happen in that case and we’ll keep an open eye on it.\n", "A couple other minor points:\n\n- On such small graphs, it does not make sense to sample at test/inference time. Again, the old GraphSAGE repo does not easily support this, but the code I linked above does. This is likely the cause of GraphSAGE-GCN doing slightly worse in terms of F1.\n\n- The idea of sampling vertices at each layer makes sense in small graphs or dense graphs. However, I think it might become problematic in massive, sparse graphs. For example, in industry applications with 1+ billion nodes (or sparse graphs with many disconnected components), I would think the odds of a node in a batch having a neighbor in the sampled set will get quite small, unless the sampled set grows proportionally large in size. (Please correct me if I am missing something here). \n\nAgain, I really appreciate the authors' hard work, and the quality of their presentation. \n\nCheers,\nWill ", "Full disclosure: I am a lead author of GraphSAGE. \n\nThis paper provides some interesting contributions, and it is well-written. However, I do want to raise some points regarding the timing comparison with GraphSAGE, which is quite unfair to GraphSAGE (for reasons I’ll elaborate on below). I’ve also provided an alternative implementation of GraphSAGE that behaves more sensibly on the tiny Cora graph (see below). 
The unfairness stems from two sources:\n\n1) GraphSAGE is designed for massive graphs (>100,000 nodes), and the public implementation assumes that node neighborhoods in a particular batch don’t overlap too much.\n\n2) The authors use the default sample-size hyperparameters for GraphSAGE and apply the public implementation on very small graphs that it was not designed for.\n\nI’ll focus on the Cora dataset, as this is where the issue is most extreme. The Cora dataset has 2708 nodes, and the authors use GraphSAGE sample-size hyperparameters of S_1=25 and S_2=10, meaning that 10 neighbors are sampled in “layer-1” and 25 in “layer-2”, resulting in 250 sampled neighbors per node. Combined with a batch size of 256, this means that each batch samples 64,000 nodes, which is 23X larger than the entire Cora graph. Moreover, the public implementation of GraphSAGE assumes that the graph is large—we actually designed GraphSAGE with an industry collaboration in mind, with 1+ billion nodes—and most importantly, GraphSAGE assumes that the sampled node neighborhoods do not overlap. As a consequence, the public GraphSAGE code does not take into account repeated neighbors that are sampled within a particular batch (i.e., it assumes the sampled neighborhoods of all nodes are disjoint). On Cora, this means that we are doing ~23X more computation than necessary in this setup, since we have essentially sampled the entire graph 23 times in each batch. \n\nAs a high-level point, I don’t recommend using GraphSAGE on such small graphs; it was intended for use in large-graph settings where subsampling is actually necessary. (The entire Cora dataset easily fits in main memory on a chromebook…). However, in cases where one would apply GraphSAGE to such small graphs, the right thing to do would be to modify the code so that it does not do repeat calculation when there is significant overlap between node neighborhoods; this is just an implementation detail, and does not fundamentally change the GraphSAGE algorithm. (But I will reiterate this is still just unnecessary overhead—why subsample neighborhoods when each batch is just going to contain the whole graph anyways?) \n\nOf course, I take responsibility for our public implementation not supporting such small graphs, and for not making it clear that GraphSAGE will essentially break when the input graph is as small as Cora. But for reference, I modified some of my private, experimental pytorch code to handle the Cora case: https://github.com/williamleif/graphsage-simple \n\nForgive the messiness of the code—it was extracted from a larger private repo and coded in a rush while I am traveling. Running this code on my Macbook Pro (2.9 GHz, Intel Core i5, 16Gb RAM), I get about ~0.05 seconds per batch and a validation F1 of around ~0.85 (using split sizes that are the same as FastGCN). These results are much more sensible and comparable to (Fast)GCN. It is likely still a bit slower than the batched GCN code due to the unnecessary overhead of sampling. It is also possible that there is an error in my code as well, since I am traveling and coded it in a rush, but the results look pretty sensible to me. (The timing difference on Pubmed is much less drastic and in the ballpark of the numbers in the paper.) \n\nTo summarize, I think this is an interesting paper and commend the authors on their work---I especially liked the detailed discussion of variance reduction—but I would appreciate if the authors would update their timing comparison with GraphSAGE. 
I would also appreciate a note that GraphSAGE is not designed for such small graphs and that neighborhood subsampling leads to unnecessary overhead when graphs are that small. Again, I take responsibility for not making this limitation of the public GraphSAGE implementation more clear. \n\nBest regards,\nWill ", "Thank you for the feedback. Highly appreciated. I have some follow up questions:\n\n1. As you said, the proposed approach fits my first setting, where we have a graph G1 for training and another graph G2 for testing. G1 and G2 are separate graphs -- there won't be any edge between G1 and G2. How comes \"they do return for inference\"?\n\n2. As stated in Sec 3.2, \"for inference, the embedding of a new vertex may be computed by using the full GCN architecture (1)\". It is not clear to me. What is \\hat{A} in Eq (1) for inference purpose? Shall I include both training and test vertices in \\hat{A}, or shall I just use the test vertices to construct \\hat{A}? What if there are edges between training graph G1 and test graph G2?\n\nThank you!\n", "Thank you very much for the questions. Please find our responses in the following. We hope that your confusions are now cleared.\n\n>>> could you elaborate on n/t_l in (5) that accounts for the normalization difference between matrix form (1) and the integral form (2) ?\n\nFor (2), a probability measure must integrate to unity. On other hand, for the matrix form (1), the matrix products will explode when the matrix size becomes larger and larger. What is lacking is a factor of n that normalizes (1).\n\nIn fact, such an issue could be more principledly explained in the context of importance sampling in the subsection that follows. Note the displayed formula in Algorithm 2. Without using importance sampling, the denominator q(u_j^{(l)}) is simply 1/n, hence simplified to Algorithm 1.\n\n>>> In Prop.2., there seems no essential difference between the two parts, as e(v) also depends on how the u_j's are sampled.\n\nIt is true that e(v) is an integral in the u space. What we meant on the other hand is that if we change the way the u_j’s are sampled, the variance of G will respectively change. The specific amount of change (compare Proposition 2, Theorem 3, and Proposition 4) happens to the second term, leaving the first term R untouched. Please see the derivation (proof) in the appendix.\n\n>>> what loss g is used in experiments?\n\nFollowing GCN and GraphSAGE, the loss is the cross entropy.\n", "Thank you very much for your positive comments. Please find our responses and summary of revisions in the following. Your reviews are cited with >>>.\n\n>>> I agree with the anonymous commenter that the authors should provide detailed description of their experimental setup.\n\nWe have inserted details regarding the train/val/test split concerned by the anonymous commenter, in the main text. Additional experiments were included in the appendix.\n\n>>> The timing of GraphSAGE on Cora is bizarre. I’m even slightly suspicious that something might have been amiss in your setup. It is by far the smallest dataset. How do you explain GraphSAGE performing so much worse on Cora than on the bigger Pubmed and Reddit datasets? It is also on Cora that GraphSAGE seems to yield subpar accuracy, while it wins the other two datasets.\n\nWe double checked the code and reran the experiments but did not spot abnormality. We encourage the reviewer to checkout our code from the anonymous github and verify. 
Here are our thoughts: For training time, GraphSAGE uses sampling so the time is independent of the graph size. The times across data sets should be comparable since sample sizes are comparable. Fluctuations are normal. For accuracy, we did another round of hyperparameter tuning and found that the F1 score on Cora can be improved. The newer results were updated to the table in Figure 3. However, these better results are still subpar compared with those of GCN and FastGCN.\n\n>>> As a concrete step towards grounding the proposed method on state of the art results, I would love to see at least one experiment with the same (original) data splits used in previous papers. I understand that semi-supervised learning is not the purpose of this paper, however matching previous results would dispel any concerns about setup/hyperparameter mismatch. \n\nWe have included an additional experiment in the appendix; see Section C.2. The results for GCN are consistent with those reported by Kipf and Welling. We have not seen reported results for GraphSAGE on these data sets; our results suggest way inferior performance. It is suspected that the model significantly overfits the data, because training accuracy is 1. For the proposed FastGCN, it also performs inferior to GCN, probably because of the very limited number of training labels. We fork a different version, called FastGCN-transductive, which uses both training and test data for learning (hence falling back to the transductive setting of GCN). The results of FastGCN-transductive match those of GCN.\n\n>>> Another thing missing is an exploration (or at least careful discussion) as to why FastGCN performs worse than the other methods in terms of accuracy and how much that relative penalty can be.\n\nWe would argue that the accuracy results of FastGCN are quite comparable with the best of other methods. The loss of accuracy is even smaller than the difference among the several aggregators proposed for GraphSAGE. The improvement in running time outweighs such a minimal loss.\n\n>>> Minor comments:\n>>> Please add label axes to Figure 2; currently it is very hard to read. Also please label the y axis in Figure 3.\n\nDone.\n\n>>> The notation change in Section 3.1 was well intended, however I feel like it slowed me down significantly while reading the paper. I had already absorbed the original notation and had to go back and forth to translate to the new one. \n\nIt is an unfortunate compromise, because the notations developed so far have become too cumbersome. If we carry the subscripts and superscripts to the rest of the paper, the digestion of the math is possibly even harder.\n", "We appreciate very much your critical comments. Please find our responses and summary of revisions in the following. Your reviews are cited with >>>. We hope that the edited version may clear the confusion and you enjoy the paper as other reviewers do :)\n\n>>> Theory:\n>>> SGD requires an unbiased estimate of the gradient to converge to the global optima in the convex loss case. Here, the loss estimate is shown to be consistent, but not guaranteed to be unbiased and nothing is said about the gradient in Algorithm 1. Could you please provide some intuition about the gradient estimate? I might not be familiar with some relevant results, but it appears to me that Algorithm 1 will not converge to the same solution as full data GD would.\n\nThe consistency of the gradient estimator simply follows that of the loss estimator, if the differential operator is continuous. 
Hence, the essential question is whether SGD converges if the gradient estimator is consistent but not unbiased. We have developed a convergence theory in the appendix (see Section D) for our algorithms. Generally speaking, the convergence rate is the same as the case of unbiased gradient estimator.\n\n>>> Practice:\n>>> Per batch timings in Fig. 3 are not enough to argue that the method is faster as it might have poor convergence properties overall. Could you please show the train/test accuracies against training time for all compared methods?\n\nWe found that the convergence speed between GCN and FastGCN was empirically similar, whereas GraphSAGE appears to converge much faster. Coupled with the per-epoch cost, overall FastGCN still wins with a substantial margin. We have inserted a section in the appendix to cover the total training time as well as the accuracy. Please see Section C.1 and particularly Table 3 and Figure 4.\n\n>>> Some other concerns and questions:\n>>> It is not quite clear what P is. You defined it as distribution over vertices of some (potentially infinite) population graph. Later on, sampling from P becomes equivalent to uniform sampling over the observed nodes. I don't see how you can define P over anything outside of the training nodes (without defining loss on the unobserved data), as then you would be sampling from a distribution with 0 mass on the parts of the support of P, and this would break the Monte Carlo assumptions.\n\nThis would be a very interesting excursion. In a sampling framework that we are settling with (all being traced back to what empirical risk minimization means for graphs), P is an abstract probability measure for the graph nodes. For the sake of simplicity imagine an infinite graph (just like the usual vectorial case where the input space is d-dimensional Euclidean). Some graph nodes are sampled for training and some others are used for validation and testing. P is the underlying (unknown) probability distribution that one uses for sampling.\n\nThe uniform sampling mentioned later is a separate story. Suppose that you already have a sample (i.e., the training set). Note that “a sample” here means a collection of data points drawn iid from a population. And you want to estimate some properties of the population (i.e., the expected loss). Bootstrapping is a scheme that subsamples the given sample for performing inference on the unknown population. This corresponds to using a mini-batch of the training set to estimate the expected loss. The most straightforward approach for bootstrapping is a uniform subsampling with or without replacement. Importance (sub)sampling as we use later may yield a better estimate.\n\n>>> Weights disappeared in the majority of the analysis. Could you please make the representation more consistent.\n\nWe reexamined the whole paper and included the weights as appropriate. Since they are linear, the overall theory and conclusions remain valid.\n\n>>> a(v,u) in Eq. 2 and A(v,u) in Eq. 5 are not defined. Do they both correspond to entries of the (normalized) adjacency?\n\nYes they do. Text was edited.\n", "Thank you very much for your encouraging comments. Please find our responses and summary of revisions in the following. Your reviews are cited with >>>.\n\n>>> I think that this paper mostly looks solid, but I am a bit worried about the following assumption: “Specifically, we interpret that graph vertices are iid samples of some probability distribution”. 
As graph vertices are inter-connected and inter-dependent across edges of the graph, this iid assumption might be too strong. A short comment on why the authors take this particular interpretation would be helpful.\n\nThe iid assumption was made to be conformant with the standard learning setting that minimizes the empirical risk of iid samples. The motivation was developed at the beginning of Section 3.\n\n>>> In the abstract the authors write: “Such a model [GCN], however, is transductive in nature because parameters are learned through convolutions with both training and test data.” — as demonstrated in Hamilton et al. (2017) [1], this class of models admits inductive learning as well as transductive learning, so the above statement is not quite accurate.\n\nYes, Hamilton et al. established an extension of GCN to the task of inductive unsupervised learning. For preciseness, we edited our statement. Now it reads: “This model, however, was originally designed to be learned with the presence of both training and test data.”\n\n>>> Furthermore, a comment on whether this scheme would be useful for alternative graph neural network architectures, such as the one in MoNet [2] or the generic formulation of the original graph neural net [3] (nicely summarized in Gilmer et al. (2017) [4]) would be insightful (and would make the paper even stronger).\n\nThank you very much for suggesting generalize our work to other architectures. Indeed, the simple yet powerful idea of sampling is often applicable to models that are based on first-order neighborhoods. We extended a paragraph in the concluding section to stress this point and also suggested an avenue of future work.\n\n>>> I am very happy to see that the authors provide the code together with the submission (using an anonymous GitHub repository). The authors mention that “The code of GraphSAGE is downloaded from the accompany [sic] website, whereas GCN is self implemented.“ - Looking at the code it looks to me, however, as if it was based on the implementation by the authors of [5]. \n\nYes, the codes of FastGCN are based on the implementation of [5]. We meant that we used the codes of GraphSAGE without change, but implemented our own algorithm and changed the GCN codes to adapt to our problem setting. We have modified the text to clarify the confusion.\n\n>>> The experimental comparison in terms of per-batch training time looks very impressive, yet it would be good to also include a comparison in terms of total training time per model (e.g. in the appendix). I quickly checked the provided implementation for FastGCN on Pubmed and compared it against the GCN implementation from [5], and it looks like the original GCN model is roughly 30% faster on my laptop (no batched training). This is not very surprising, as a fair comparison should involve batched training for both approaches. Nonetheless it would be good to include these results in the paper to avoid confusion.\n\nWe have included additional results regarding the total training time in the appendix. Please see Section C.1. Note that for faster convergence, the learning rate of FastGCN has been changed to 0.01 in our codes, so now it is faster than the original GCN model on Pubmed. The accuracy of FastGCN remains the same.\n\n>>> Minor issues:\n>>> The notation of the limit in Theorem 1 is a bit unclear. I assume the limit is taken to infinity with respect to the number of samples.\n\nYes. 
Corrected.\n\n>>> There are a number of typos throughout the paper (like “oppose to” instead of “opposed to”), these should be fixed in the revision.\n\nFixed.\n\n>>> It would be better to summarize Figure 3 (left) in a table, as the smaller values are difficult to read off the chart.\n\nWe have increased the font size to make the numbers legible. Also note that the vertical axis is modified to the log10 scale so that orders-of-magnitude improvement can be easily seen. We feel that a bar chart here may be more informative than a table.", "There are slightly different accounts of the distinction between inductive and transductive learning, but it should be well agreed that inductive learning builds a model from the knowledge of labeled data only, and transductive learning from both labeled and unlabeled data.\n\nThe transductive setting is highly related to the semi-supervised setting, where only a small portion of the data are known with labels, and hence one may as well incorporate the information of unlabeled data to build a more accurate model. For graphs, it is often the case that the unlabeled vertices happen to be the test data whose labels are awaiting for prediction. Of course, such an understanding is based on the assumption that the given graph is fixed and not evolving (at least no new vertices are added in).\n\nIn our work, we find that it would be easier to build a consistent theory and draw connections with risk minimization (which is the standard learning theory), if we think about the given graph as a piece of a larger, possibly infinite, graph. In this vein, observed vertices are given labels and there are unobserved ones whose labels we want to predict later on. In other words, the proposed work generalizes GCN to the supervised and inductive setting, where unlabeled vertices are not used for training.\n\nSo, to answer your question, what we are proposing indeed fits your first setting, because the edges between the labeled vertices and the unlabeled ones never enter training (but they do return for inference).\n", "Thank you for the nice work on graph convolutional networks. I am a bit confused on what exactly is \"inductive learning on graph data\".\n\nTo my limited view, the inductive setting is something like: We have a graph G1 for training and another graph G2 for testing. G1 and G2 are separate graphs -- there is no edge connecting the two graphs. We would train a model on G1, and then apply it to predict on G2.\n\nHowever, by reading the second last paragraph of page 1 and Sec 3.2, it seems like the inductive setting used in this work is different: The training graph G1 might connect to the test graph G2 via some edges. In this case, we would probably propagate the learned embedding on G1 nodes to the nodes in G2 through edges. Isn't it transductive?\n\nNevertheless, can we extend the proposed FastFCN to the first setting? Do we need to have some \"edge sampling\" strategy for mini-batch SGD, apart from the \"node sampling\" strategy proposed in the paper? Thank you!\n\n", "Thank you very much for the query of the details. A small summary of the train/val/test split is in the following:\n\nCora: 2708 nodes in total. Original split 140/500/1000 -> we use 1208/500/1000.\nPubmed: 19717 nodes in total. Original split 60/500/1000 -> we use 18217/500/1000.\n\nThat is, the validation size and test size are unchanged, but we use all the rest data for training, instead of using only a small portion. 
More specifically, we used the same graph structure and the same input features. Then, we kept the test index unchanged, and selected 500 nodes for validation. All the remaining nodes were used for training.\n\nGCN was originally proposed as a semi-supervised (transductive) method. Hence, only a small portion of the nodes have their labels used for training. Our work, on the other hand, leans toward the supervised (inductive) setting. The main purpose is to demonstrate the scalability and speed of our method. If the training set had only a small number of nodes, the original GCN already works very well and it is not necessary to use our method. Hence, we enlarge the training set by using all available nodes (excluding validation and testing). Moreover, such a split is more coherent with that of the other data set, Reddit, used in another compared work, GraphSAGE.\n\nBecause more labels are used for training, it makes sense that the prediction results are better than those in the previous works.\n\nWe will edit the paper when allowed to address this question.\n", "Exactly how did you change the train/val/test split of the data sets? The accuracy values of GCN reported for Cora and Pubmed are much higher than in all previous work. Why did you not use one of the standard evaluation set ups? (Either the Planetoid split or 10/20 randomly sampled splits)" ]
[ 6, -1, -1, -1, 7, 7, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, -1, -1, -1, 2, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_rytstxWAW", "iclr_2018_rytstxWAW", "SkaxvK_VM", "B1bf9LBMG", "iclr_2018_rytstxWAW", "iclr_2018_rytstxWAW", "iclr_2018_rytstxWAW", "ryk5t8Tmf", "Sy312STmM", "Hyq35OnXG", "B1bf9LBMG", "HyiEV4dGM", "HJDVPNYgf", "r1oFX1wGf", "rkOyV7wGf", "SyhcQQvfM", "iclr_2018_rytstxWAW", "SJ6IdUrMG", "B1ymVPEgM", "HJDVPNYgf", "SJce_4YlM", "H1IdT6AlG", "rk0cyiubM", "iclr_2018_rytstxWAW", "r1grwep0Z", "iclr_2018_rytstxWAW" ]
iclr_2018_H1vEXaxA-
Emergent Translation in Multi-Agent Communication
While most machine translation systems to date are trained on large parallel corpora, humans learn language in a different way: by being grounded in an environment and interacting with other humans. In this work, we propose a communication game where two agents, native speakers of their own respective languages, jointly learn to solve a visual referential task. We find that the ability to understand and translate a foreign language emerges as a means to achieve shared goals. The emergent translation is interactive and multimodal, and crucially does not require parallel corpora, but only monolingual, independent text and corresponding images. Our proposed translation model achieves this by grounding the source and target languages into a shared visual modality, and outperforms several baselines on both word-level and sentence-level translation tasks. Furthermore, we show that agents in a multilingual community learn to translate better and faster than in a bilingual communication setting.
accepted-poster-papers
The paper considers learning an NMT system while pivoting through images. The task is formulated as a referential game. From the modeling and set-up perspective it is similar to previous work in the area of emergent communication / referential games, e.g., Lazaridou et al (ICLR 17) and especially to Havrylov & Titov (NIPS 17), as similar techniques are used to handle the variable-length channel (RNN encoders / decoders + the ST Gumbel-Softmax estimator). However, its multilingual version is interesting and the results are sufficiently convincing (e.g., comparison to Nakayama and Nishida, 17). The paper would be more attractive to those interested in emergent communication than to the NMT community, as the set-up (using pivoting through images) may be perceived as somewhat exotic by the NMT community. Also, the model is not attention-based (unlike SoA in seq2seq / NMT), and it is not straightforward to incorporate attention (see R2 and author response). + an interesting framing of the weakly-supervised MT problem + well written + sufficiently convincing results - the set-up and framework (e.g., non-attention based) is questionable from a practical perspective
val
[ "SyDjKqLEM", "HJnX0AFeG", "SyKh0W9eG", "Bk5lbEjxG", "r1iwyrpQf", "r19bjleQz", "B1z9cegXz", "S163sxgQf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "My reply to authors' arguments about emergent translation:\n\nI agree that the agents learn to translate without seeing any parallel data. But you are bridging the languages through image which is the common modality. How is this different than bridge based representation learning or machine translation? The only novelty here is you add communication as an extra supervision to the bridge based MT. I am still against the usage of the word \"emergent\". You can motivate this work as communication for extra supervision in a bridge based MT.\n\nNevertheless, I have given 8/10 for this paper since this deserves to be accepted. I am happy with other responses for my review.", "Summary: \n\nThis paper proposes a multi-agent communication task where the agents learn to translate as a side-product to solving the communication task. Authors use the image modality as a bridge between two different languages and the agents learn to ground different languages to same image based on the similarity. This is achieved by learning to play the game in both directions. Authors show results in a word-level translation task and also a sentence-level translation task. They also show that having more languages help the agent to learn better.\n\nMy comments:\n\nThe paper is well-written and I really enjoyed reading this paper. While the idea of pivot based common representation learning for language pairs with no parallel data is not new, adding the communication aspect as an additional supervision is novel. However I would encourage authors to rephrase their claim of emergent translation (the title is misleading) as the authors pose this as a supervised problem and the setting has enough constraints to learn a common representation for both languages (bridged by the image) and hence there is no autonomous emergence of translation out of need. I see this work as adding communication to improve the translation learning.\n\nIs your equation 1 correct? I understand that your logits are reciprocal of mean squared error. But don’t you need a softmax before applying the NLL loss mentioned in equation 1? In current form of equation 1, I think you are not including the distractor images into account while computing the loss? Please clarify.\n\nWhat is the size of the vocabulary used in all the experiments? Because Gumbel Softmax doesn’t scale well to larger vocabulary sizes and it would be worth mentioning the size of your vocabulary in all the experiments.\n\nAre you willing to release the code for reproducing the results?\n\nMinor comments:\n\nIn appendix C, Table 4 caption: you say target sentence is “Trg” but it is “Ref” in the table. Also is the reference sentence for skateboard example typo-free?\n", "--------------\nSummary and Evaluation:\n--------------\nThis work present a novel multi-agent reference game designed to train monolingual agents to perform translation between their respective languages -- all without parallel corpora. The proposed approach closely mirrors that of Nakayama and Nishida, 2017 in that image-aligned text is encouraged to map to similarly to the grounded image. Unlike in this previous work, the approach proposed here induces this behavior though a multi-agent reference game. The key distinction being that in this gamified setting, the agents sample many more descriptions from their stochastic policies than would otherwise be covered by the human ground truth. The authors demonstrate that this change results in significantly improved BLEU scores across a number of translation tasks. 
Furthermore, increasing the number of agents/languages in this setting seems to \n\nOverall I think this is an interesting paper. The technical novelty is somewhat limited to a minor (but powerful) change in approach from Nakayama and Nishida, 2017; however, the resulting translators outperform this previous method. I have a few things listed in the weaknesses section that I found unclear or think would make for a stronger submission.\n\n\n--------------\nStrengths:\n--------------\n\n- The paper is fairly clearly written and the figures appropriately support the text.\n\n- Learning translation without parallel corpora is a useful task and leveraging a pragmatic reference game to induce additional semantically valid samples of a source language is an interesting approach to do so.\n\n- I'm also excited by the result that multi-agent populations tend to improve the rate of convergence and final translation abilities of these models; though I'm slightly confused about some of the results here (see weaknesses).\n\n--------------\nWeaknesses:\n--------------\n\n- Perhaps I'm missing something, but shouldn't the Single EN-DE/DE-EN results in Table 2 match the not pretrained EN-DE/DE-EN Multi30k Task 1 results? I understand that this is perhaps on a different data split into M1/2 but why is there such a drastic difference?\n\n- I would have liked to see some context as how these results compare to an approach trained with aligned corpora. Perhaps a model trained on the human-translated pairs from Task 1 of Multi30k? Obviously, outperforming such a model is not necessary for this approach to be interesting, but it would provide useful context on how well this is doing.\n\n- A great deal of the analysis and qualitative examples are pushed to the supplement which is a bit of a shame given they are quite interesting.\n\n\n", "Summary: The authors show that using visual modality as a pivot they can train a model to translate from L1 to L2. \n\nPlease find my detailed comments/questions/suggestions below:\n\n1) IMO, the paper could have been written much better. At the core, this is simply a model which uses images as a pivot for learning to translate between L1 and L2 by learning a common representation space for {L1, image} or {L2, image}. There are several works on such multimodal representation learning but the authors present their work in a way which makes it look very different from these works. IMO, this leads to unnecessary confusion and does more harm than good. For example, the abstract gives an impression that the authors have designed a game to collect data (and it took me a while to set this confusion aside).\n\n2) Continuing on the above point, this is essentially about learning a common multimodal representation and then decode from this common representation. However, the authors do not cite enough work on such multimodal representation learning (for example, look at Spandana et. al.: Image Pivoting for Learning Multilingual Multimodal Representations, EMNLP 2017 for a good set of references)\n\n3) This omission of related work also weakens the experimental section. At least for the word translation task many of these common representation learning frameworks could have been easily evaluated. For example, find the nearest german neighbour of the word \"dog\" in the common representation space. The authors instead compare with very simple baselines.\n\n4) Even when comparing with simple baselines, the proposed model does not convincingly outperform them. 
In particular, the P@5 and P@20 numbers are only slightly better. \n\n5) Some of the choices made in the Experimental setup seem questionable to me:\n - Why use a NMT model without attention? That is not standard and does not make sense to use when a better baseline model (with attention) is available ?\n - It is mentioned that \"While their model unit-normalizes the output of every encoder, we found this to consistently hurt performance, so do not use normalization for fair comparison with our models.\" I don't think this is a fair comparison. The authors can mention their results without normalization if that works well for them but it is not fair to drop normalization from the model of N&N if that gives better performance. Please mention the numbers with unit normalization to give a better picture. It does not make sense to weaken an existing baseline and then compare with it.\n\n6) It would be good to mention the results of the NMT model in Table 1 itself instead of mentioning them separately in a paragraph. This again leads to poor readability and it is hard to read and compare the corresponding numbers from Table 1. I am not sure why this cannot be accommodated in the Table itself.\n\n7) In Figure 2, what exactly do you mean by \"Results are averaged over 30 translation scenarios\". Can you please elaborate ?", "We have uploaded a new revision. The revision addresses the reviewer’s comments, with the following changes in particular:\n\n1) Added more references on multimodal / multilingual representation learning in Section 2.\n\n2) We explain (Section 5, “NMT with neighboring pairs”) why we do not compare against an NMT model with attention. The reason is that incorporating attention would mean that agents have access to each other's hidden states, which is no longer a multi-agent setting and outside of the scope of our work.\n\n3) We added an (even) stronger comparison against Nakayama and Nishida, also including their original loss function with normalization in Table 1.\n\n4) We added NMT results into Table 1 instead of having them separately as a paragraph.\n\n5) In addition, we fixed a bug in our beam search code and updated the BLEU scores in Table 1 and 2 accordingly. We achieve higher BLEU scores, and the results stay the same: our models consistently outperform all our baselines.", "- Performance difference in Table 1 and 2\n\nThe size of the model used is different between Table 1 and Table 2. D_hid and D_emb are (1024, 512) for the models in Table 1, and (256, 128) for the models in Table 2. Also, the community models were early stopped based on the overall performance across 6 different language pairs, instead of two (as was the case for single models), which could have also caused the difference in BLEU score.\n\n- Comparison with models trained on fully parallel corpora\n\nWe added the performance obtained by an NMT model trained on aligned corpora in Table 1.\n\n- Appendix\n\nWe agree. 
Due to page constraints we were forced to move many interesting analyses to the appendix.", "1) Our approach differs from previous works on multimodal representation learning and translation in two ways:\n\nIn the existing multimodal NMT setting, we are often given a set of images and their descriptions in both source and target languages, while our setting goes further by giving disjoint sets of image-text pairs to the agents.\n\nA key difference between our work and many previous works in multimodal representation learning and translation (including Nakayama and Nishida, 2017) is that our agents learn to translate from communicating with each other. This allows our agents to learn from a far more diverse set of image descriptions than otherwise available as ground truth captions. We show that adding the communication element leads to significantly improved BLEU scores across several translation tasks.\n\n2) Thanks for pointing out. We cited previous relevant work on multimodal/multilingual representation learning in our revision. \n\n3) Regarding the comment “these common representation learning frameworks could have been easily evaluated”, did you mean something like the Semantic Textual Similarity task? Using your example of translating the word “dog” to German, our model actually finds the nearest German word neighbour in the joint space. Given the particular dataset that we used (Bergsma500), nearest neighbour methods based on similarity in the ConvNet feature space were actually the only reasonable (and fairly strong) baselines we could think of. Do you have any other suggestions?\n\n4) Note that Bergsma500 is a very small dataset (500 categories X 20 images). Considering that we halve our dataset to train each agent, the training data is indeed extremely small, which could have caused limited performance improvement over our baselines. We have tried pre-training our models on ImageNet and fine-tuning it on Bergsma. This performed better than training on Bergsma from scratch, but we did not include this in our paper.\n\n5) -Q : Why was attentional NMT not used?\n\nOur model does not use attention, so we decided not to use attention in our baseline for fairness. Incorporating attention into our model is not trivial, as attention has to be performed over the image vectors from the image encoder. We leave this as future work.\n\n-Q : Why was N&N baseline with normalization not compared with?\n\nWe tested both versions of Nakayama’s model (with and without normalization). Not using normalization consistently outperformed using normalization. So we left out the numbers for the model with normalization to strengthen our baseline (not weaken it). Nevertheless, we will include the results for Nakayama’s with normalization in the revision.\n\n6) We added the NMT results into Table 1.\n\n7) We have 15 language pairs in Bergsma500, and we train our model to communicate and translate in both directions (e.g. EN->DE and DE->EN). We averaged the Precisions @ K (K=1, 5, 20) across all language pairs.", "- Emergent translation?\n\nBy “emergent translation”, we meant that translation emerges as a consequence of having two agents solve a referential game, without parallel corpora. 
The referential game involves images and languages, but the translation emerges between language and language - an emergent property of the combination of the objective function and the model weight tying (specifically that both languages used by the speaker use the same visual system/weights).\n\nIs this explanation satisfactory? Otherwise, do you have any suggestions?\n\n- Equation 1\n\nThanks for spotting this. Yes, softmax was indeed used, so the distractor examples were penalized via the partition function of the softmax. This is fixed in the revision.\n\n- Vocabulary sizes\n\nMulti30K Task 1: (EN: 4035, DE: 5445)\nMulti30K Task 2: (EN: 8618, DE: 13091)\nCOCO: (JP: 13019, EN: 10396)\n\nWe added this in the revision.\n\n- Open-sourcing the code\n\nYes, we plan to open-source the code shortly.\n\n- Typo in Table 4\n\nThanks for spotting the typo; yes, the reference is typo-free, but we accidentally put in a wrong image. This is fixed in our revision." ]
[ -1, 8, 7, 5, -1, -1, -1, -1 ]
[ -1, 5, 3, 5, -1, -1, -1, -1 ]
[ "S163sxgQf", "iclr_2018_H1vEXaxA-", "iclr_2018_H1vEXaxA-", "iclr_2018_H1vEXaxA-", "iclr_2018_H1vEXaxA-", "SyKh0W9eG", "Bk5lbEjxG", "HJnX0AFeG" ]
iclr_2018_rJvJXZb0W
An efficient framework for learning sentence representations
In this work we propose a simple and efficient framework for learning sentence representations from unlabelled data. Drawing inspiration from the distributional hypothesis and recent work on learning sentence representations, we reformulate the problem of predicting the context in which a sentence appears as a classification problem. Given a sentence and the context in which it appears, a classifier distinguishes context sentences from other contrastive sentences based on their vector representations. This allows us to efficiently learn different types of encoding functions, and we show that the model learns high-quality sentence representations. We demonstrate that our sentence representations outperform state-of-the-art unsupervised and supervised representation learning methods on several downstream NLP tasks that involve understanding sentence semantics while achieving an order of magnitude speedup in training time.
accepted-poster-papers
Though the approach is not terribly novel, it is quite effective (as confirmed on a wide range of evaluation tasks). The approach is simple and likely to be useful in applications. The paper is well written. Strengths: simple and efficient; high-quality evaluation; strong results. Weakness: novelty is somewhat limited.
train
[ "rJMoj-jxf", "SJNPXFyeM", "ByVL483xf", "SkWHdKwzz", "BJq_m4imf", "SJPgI5BR-", "SJpU59rCW", "Sk1dtLqAW", "HkdCWV0Ab", "BJidT3DCb", "SyctLIIC-", "BkzJW9QC-", "SJbLL_zCb" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public", "public", "public", "public", "public" ]
[ "[REVISION]\n\nThank you for your clarification. I appreciate the effort and think it has improved the paper. I have updated my score accordingly\n\n====== \n\nThis paper proposes a new objective for learning SkipThought-style sentence representations from corpora of ordered sentences. The algorithm is much faster than SkipThoughts as it swaps the word-level decoder for a contrastive classification loss. \n\nComments:\n\nSince one of the key advantages of this method is the speed, I was surprised there was not a more formal comparison of the speed of training different models. For instance, it would be more convincing if two otherwise identical encoders were trained on the same machine on the books corpus with the proposed objective and the skipthoughts decoding objective, and the representations compared after X hours of training. The reported 2 weeks required to train Skipthoughts comes from the paper, but things might be faster now with more up-to-date deep learning libraries etc. If this was what was in fact done, then it's probably just a case of presenting the comparison in a more formal way. I would also lose the sentence \"we are able to train many models in the time it takes to train most unsupervised\" (see next point for reasons why this is questionable).\n\nIt would have been interesting to apply this method with BOW encoders, which should be even faster than RNN-based encoders reported in this paper. The faster BOW models tend to give better performance on cosine-similarity evaluations ( quantifying the nearest-neighbour analysis that the authors use in this paper). Indeed, it would be interesting (although of course not definitive) to see comparison of the proposed algorithm (with BOW and RNN encoders) on cosine sentence similarity evaluations. \n\nThe proposed novelty is simple and intuitive, which I think is a strength of the method. However, a simple idea makes overlap with other proposed approaches more likely, and I'd like the author to check through the public comments to ensure that all previous related ideas are noted in this paper. \n\nI think the authors could do more to emphasise what the point is of trying to learn sentence embeddings. An idea of the eventual applications of these embeddings would make it easier to determine, for instance, whether the supervised ensembling method applied here would be applicable in practice. Moreover, many papers have emphasised the limitations of the evaluations used in this paper (although they are still commonly used) so it would be good to acknowledge that it's hard to draw too many conclusions from such numbers. That said, the numbers are comparable Skipthoughts, so it's clear that this method learns representations of comparable quality. \n\nThe justification for the proposed algorithm is clear in terms of efficiency, but I don't think it's immediately clear from a semantic / linguistic point of view. The statement \"The meaning of a sentence is the property that creates bonds....\" seems to have been cooked up to justify the algorithm, not vice versa. I would cut all of that speculation out and focus on empirically verifiable advantages. \n\nThe section of image embeddings comes completely out of the blue and is very hard to interpret. I'm still not sure I understand this evaluation (short of looking up the Kiros et al. paper), or how the proposed model is applied to a multi-modal task.\n\nThere is much scope to add more structured analysis of the type hinted by the nearest neighbours section. 
Cherry picked lists don't tell the reader much, but statistics or more general linguistic trends can be found in these neighbours and aggregated, that could be very interesting. \n\n", "==Update==\n\nI appreciate the response, and continue to recommend acceptance. The evaluation metric used in this paper (SentEval) represents an important open problem in NLP—learning reusable sentence representations—and one of the problems in NLP best suited to presentation at IC*LR*. Because of this, I'm willing to excuse the fact that the paper is only moderately novel, in light of the impressive reported results.\n\nWhile I would appreciate a direct (same codebase, same data) comparison with some outside baselines, this paper meets or exceeds the standards for rigor that were established by previous published work in the area, and the existing results are sufficient to support some substantial conclusions.\n\n==========\n\nThis paper proposes an alternative formulation of Kiros's SkipThought objective for training general-purpose sentence encoder RNNs on unlabeled data. This formulation replaces the decoder in that model with a second encoder, and yields substantial improvements to both speed and model performance (as measured on downstream transfer tasks). The resulting model is, for the first time, reasonably competitive even with models that are trained end-to-end on labeled data for the downstream tasks (despite the requirement, imposed by the evaluation procedure, that only the top layer classifier be trained for the downstream tasks here), and is also competitive with models trained on large labeled datasets like SNLI. The idea is reasonable, the topic is important, and the results are quite strong. I recommend acceptance, with some caveats that I hope can be addressed.\n\nConcerns:\n\nA nearly identical idea to the core idea of this paper was proposed in an arXiv paper this spring, as a commenter below pointed out. That work has been out for long enough that I'd urge you to cite it, but it was not published and it reports results that are far less impressive than yours, so that omission isn't a major problem.\n\nI'd like to see more discussion of how you performed your evaluation on the downstream tasks. Did you use the SentEval tool from Conneau et al., as several related recent papers have? If not, does your evaluation procedure differ from theirs or Kiros's in any meaningful way?\n\nI'm also a bit uncomfortable that the paper doesn't directly compare with any baselines that use the exact same codebase, word representations, hyperparameter tuning procedure, etc.. I would be more comfortable with the results if, for example, the authors compared a low-dimensional version of their model with a low-dimensional version of SkipThought, trained in the *exact* same way, or if they implemented the core of their model within the SkipThought codebase and showed strong results there.\n\nMinor points:\n\nThe headers in Table 1 don't make it all that clear which additions (vectors, UMBC) are cumulative with what other additions. This should be an easy fix. \n\nThe use of the check-mark as an output in Figure 1 doesn't make much sense, since the task is not binary classification.\n\n\"Instead of training a model to reconstruct the surface form of the input sentence or its neighbors, our formulation attempts to focus on the semantic aspects of sentences. 
The meaning of a sentence is the property that creates bonds between a sequence of sentences and makes it logically flow.\" – It's hard to pin down exactly what this means, but it sounds like you're making an empirical claim here: semantic information is more important than non-semantic sources of variation (syntactic/lexical/morphological factors) in predicting the flow of a text. Provide some evidence for this, or cut it.\n\nYou make a similar claim later in the same section: \"In figure 1(a) however, the reconstruction loss forces the model to predict local structural information about target sentences that may be irrelevant to its meaning (e.g., is governed by grammar rules).\" This is a testable prediction: Are purely grammatical (non-semantic) variations in sentence form helpful for your task? I'd suspect that they are, at least in some cases, as they might give you clues as to style, dialect, or framing choices that the author made when writing that specific passage.\n\n\"Our best BookCorpus model (MC-QT) trains in just under 11hrs, compared to skip-thought model’s training time of 2 weeks.\" – If you say this, you need to offer evidence that your model is faster. If you don't use the same hardware and low-level software (i.e., CuDNN), this comparison tells us nearly nothing. The small-scale replication of SkipThought described above should address this issue, if performed.\n", "This paper proposes a framework for unsupervised learning of sentence representations by maximizing a model of the probability of true context sentences relative to random candidate sentences. Unique aspects of this skip-gram style model include separate target- and context-sentence encoders, as well as a dot-product similarity measure between representations. A battery of experiments indicate that the learned representations have comparable or better performance compared to other, more computationally-intensive models.\n\nWhile the main constituent ideas of this paper are not entirely novel, I think the specific combination of tools has not been explored previously. As such, the novelty of this paper rests in the specific modeling choices and the significance hinges on the good empirical results. For this reason, I believe it is important that additional details regarding the specific architecture and training details be included in the paper. For example, how many layers is the GRU? What type of parameter initialization is used? Releasing source code would help answer these and other questions, but including more details in the paper itself would also be welcome.\n\nRegarding the empirical results, the method does appear to achieve good performance, especially given the compute time. However, the balance between performance and computational complexity is not investigated, and I think such an analysis would add significant value to the paper. For example, I see at least three ways in which performance could be improved at the expense of additional computation: 1) increasing the candidate pool size 2) increasing the corpus size and 3) increasing the embedding size / increasing the encoder capacity. Does the good performance/efficiency reported in the paper depend on achieving a sweet spot among those three hyperparameters?\n\nOverall, the novelty of this paper is fairly low and there is still substantial room for improvement in some of the analysis. On the other hand, I think this paper proposes an intuitive model and demonstrates good performance. 
I am on the fence, but ultimately I vote to accept this paper for publication.", "We thank the reviewers for the helpful comments.\n\nR1, R3: Skip-thoughts training time\nWe agree that training the model could be faster with current hardware and software libraries. A more recent implementation of the skip-thoughts model was released by Google early this year [1]. This implementation mentions that the model takes 9 days to train on a GTX 1080 GPU. Training our proposed models on a GTX 1080 takes 11 hours. Both implementations are based on Tensorflow. Our experiment used cuda 8.0 and cuDNN 6.0 libraries. This also agrees with the numbers in the paper which were based on experiments using GTX TITAN X.\n\nR1, R3: Training speed comparison\nWe performed a comparison on the training efficiency of lower-dimensional versions of our model and the skip-thoughts model. The same encoder architecture was trained in identical conditions using our objective and the skip-thoughts objectives and models were evaluated on downstream tasks after a given number of hours. Experimental results are reported in section C of the appendix. The training efficiency of our model compared to the skip-thoughts model is clear from these experiments.\n\nR1: BoW encoders, sentence similarity evaluations\nWe train BoW encoders using our training objective and evaluate them on textual similarity tasks. Experiments and results are discussed in section B of the appendix. Our RNN-based encoder performs strongly against prior sequence models. Our BoW encoder performs comparably to (or slightly better than) popular BoW representations as well.\n\nR2: Balance between performance and computational complexity\n1) Increasing the candidate pool size - We found that RNN encoders are less sensitive to increasing the candidate pool size. Sentences appearing in the context of a given query sentence are natural candidates for the contrastive sentences since they are more likely to be related to the query sentence, and hence make the prediction problem challenging. We observed marginal performance improvements as we added more random choices to the candidate pool.\n2) Increasing corpus size - We have experiments in the paper with increased corpus size. We considered the UMBC corpus (which is about 3 times the size of BookCorpus) and show that augmenting the BookCorpus dataset enables us to obtain monotonic improvements on the downstream tasks.\n3) Increasing embedding size - We have included experiments on varying the embedding size in section D of the supplementary material. We are able to train bigger and better models at the expense of more training time. The smaller models can be trained more efficiently while still being competitive or better than state-of-the-art higher-dimensional models. \nWe also plan to release pre-trained models for different representation sizes so that other researchers/practitioners can use the appropriate size depending on the downstream task and the amount of labelled data available.\n\nR1: Point of learning sentence representations\nIn the vision community it has become common practice to use CNN features (e.g., AlexNet, VGGNet, ResNet, etc.) pre-trained from the large-scale imagenet database for a variety of downstream tasks (e.g., the image-caption experiment in our paper uses pre-trained CNN features as the image embedding). Our overarching goal is to learn analogous high-quality sentence representations in the text domain. 
The representations can be used as feature vectors for downstream tasks, as we do in the paper. The encoders can also be used for parameter initialization and fine-tuned on data relevant to a particular application. In this respect, we believe that exploring scalable unsupervised learning algorithms for learning ‘universal’ text representations is an important research problem.\n\nR1: Image-caption retrieval experiments\nWe have updated the description of the image-caption retrieval experiments. We hope the description is more clear now and provides better motivation for the task.\n\nR1: Nearest neighbors\nAs we discuss in the paper, the query sentences used for the nearest neighbor experiment were chosen randomly and not cherry picked. We hope the cosine similarity experiments quantify the nearest neighbor analysis.\n\nR2: We have added more details about the architecture and training to the paper (sec 4.3).\n\nWe will release the source code upon publication.\n\nR3: Evaluation\nThe evaluation on downstream tasks was performed using the evaluation scripts from Kiros et al. since most of the unsupervised methods we compare against were published either before (Kiros et al., Hill et al.) or about the same time (Gan et al.) the SentEval tool was released.\n\nR1, R2, R3:\nWe have updated the paper to reflect your comments and concerns. Modifications are highlighted in blue (Omissions not shown). We have added relevant citations pointed out by reviewers and public comments.\n\nReferences\n[1] https://github.com/tensorflow/models/tree/master/research/skip_thoughts", "Thank you for your comment.\n\nWe have included an evaluation of our models on the STS14 task in Appendix C of the supplementary material. \n\nWe evaluate RNN-based and Bag-of-words encoders trained using our objective on this task. Our RNN-based encoder performs strongly compared to previous sequence encoders. Bag-of-words models are known to perform strongly in this task as they are better able to encode word identity information. Our BoW variation performs comparably (or slightly better) than prior BoW based models such as FastSent and Siamese CBOW.", "Thank you for your comments. We will include these literature in a revised version of the paper. \n\nDespite similarities in the objective functions, we would like to point out the following key distinctions.\n\nJernite et al. propose to use paragraph level coherence as a learning signal. The following related task is considered in their paper. Given the first three sentences of a paragraph, they choose the next sentence from five candidate sentences later in the paragraph (Paragraphs of length at least 8 are considered). \nOur objective differs from theirs in the following aspects.\n* This work exploits paragraph level coherence signals for learning, while our work derives motivation from the distributional hypothesis. We don’t restrict ourselves to paragraphs in the data as is done in this work. \n* We consider a large number of candidate sentence choices when predicting a context sentence. This is a discriminative approximation to the generation objective (viewing generation as choosing a sentence from all possible sentences)\n* We use a single input sentence and predict the context sentences surrounding it. Using larger input contexts did not yield any significant empirical benefits.\nOur objective further learns richer representations compared to this work, as evidenced by empirical results. 
\n\nThe local coherence model of Li & Hovy is a feed-forward network which examines a window of sentence embeddings and classifies them as coherent/incoherent (binary classification). We have some discussion about this objective in the paper (section 3). We point out the following key differences between our objective and theirs. \n* Instead of discriminating context windows as plausible/implausible, we encourage observed contexts (in the data) to be more plausible than contrastive (implausible) ones and formulate it as a multi-class classification problem. We experimentally found that this relaxed constraint helps learn better representations.\n* We use a simple scoring function (inner products) in our objective. When using a parameterized classifier, the model has a tendency to learn poor sentence representations and compensate for it using a strong classifier. This is undesirable since the classifier is discarded and only the sentence encoders are used for feature extraction.\n\nHence, Li & Hovy’s objective is better suited for local coherence modeling than it is for learning sentence representations.\n", "Thank you for your comment. We will include the paper in a revised version. Please see our response to the previous comment regarding the same paper. ", "Sure, Thanks. \n\nIn this paper a conceptually similar task of identifying context sentences from candidate sentences based on their bag-of-words representations is considered. Our approach is more general than this work in the following ways\n* Our formulation considers more general scoring functions/classifiers. We found inner products to work best. Using cosine distance as is done in this work led to inferior representations. Cosine distance implicitly requires sentence representations to both lie on the unit ball and be similar (in terms of inner product) to context sentences, which can be a strong constraint. The inner products scoring function only requires the latter. \n* This work uses the same set of parameters to encode both input and context sentences, while we consider using different sets of parameters. This helped learn better representations. We briefly discuss this choice in section 3.\n* Our formulation also allows the use of more general encoder architectures.\n\nAlso, we discuss more recent bag-of-words methods in the paper. \n", "Thanks for your reply.\n\nThere is an interesting difference between the evaluation tasks and the evaluation task used in Siamese CBOW.\n\nIn Siamese CBOW, they mainly focused on unsupervised evaluation tasks, including STS12, 13, and 14. The similarity of 2 sentences is determined by Cosine-similarity, which matches their training objective. Compared with FastSent which applies the dot-product as training objective in FastSent, Siamese CBOW seems to get better results.\n\nCould you also evaluate your proposed model on unsupervised evaluation tasks, like STS14? It would be good to have a comprehensive evaluation of your model. Thanks!", "Just out of curiosity, do you have any results on how the quantity of unlabeled training data you use impacts model performance?", "Thank you for your reply! Could you also compare your idea with Siamese CBOW? (ACL2016)\n\nhttp://www.aclweb.org/anthology/P16-1089\n\nThanks again!", "https://arxiv.org/pdf/1705.00557.pdf\n\nThe proposed method in the listed paper is quite close to the one proposed in this submission. I think it'll be good to cite this listed paper and discuss it. 
(Although I know it is not required to cite arxiv papers.)\n\n(Also, I am not related to the listed arxiv paper, but I'd love to some comprehensive comparisons among existing methods.)", "This article proposed a framework to learn sentence representation and demonstrated some good results.\n\nIn terms of the main objective of this task - predicting the next sentence out of a group of sampled sentences has already been proposed multiple times in the NLP community. For example, it appeared earlier this year: https://arxiv.org/abs/1705.00557, and a much earlier work (in 2014) has also used sentence ordering to learn sentence representation: http://web.stanford.edu/~jiweil/paper/emnlp_coherence-v2eh.pdf\n\nI am certain this paper brings unique value and insight into this training objective, and is a much-needed addition to the existing pool of literature. I just hope maybe in a revised version of this paper, the author(s) would reference these previous NLP works.\n\n(To clarify on my intent: I am not related to any of these papers, but would love to see NLP researches get recognized.)" ]
[ 6, 8, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 5, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_rJvJXZb0W", "iclr_2018_rJvJXZb0W", "iclr_2018_rJvJXZb0W", "iclr_2018_rJvJXZb0W", "HkdCWV0Ab", "SJbLL_zCb", "BkzJW9QC-", "SyctLIIC-", "Sk1dtLqAW", "iclr_2018_rJvJXZb0W", "SJpU59rCW", "iclr_2018_rJvJXZb0W", "iclr_2018_rJvJXZb0W" ]
iclr_2018_S1sqHMZCb
NerveNet: Learning Structured Policy with Graph Neural Networks
We address the problem of learning structured policies for continuous control. In traditional reinforcement learning, policies of agents are learned by MLPs which take the concatenation of all observations from the environment as input for predicting actions. In this work, we propose NerveNet to explicitly model the structure of an agent, which naturally takes the form of a graph. Specifically, serving as the agent's policy network, NerveNet first propagates information over the structure of the agent and then predicts actions for different parts of the agent. In the experiments, we first show that our NerveNet is comparable to state-of-the-art methods on standard MuJoCo environments. We further propose our customized reinforcement learning environments for benchmarking two types of structure transfer learning tasks, i.e., size and disability transfer. We demonstrate that policies learned by NerveNet are significantly better than policies learned by other models and are able to transfer even in a zero-shot setting.
accepted-poster-papers
An interesting application of graph neural networks to robotics. The body of a robot is represented as a graph, and the agent's policy is defined using a graph neural network (GNN/GCN) over the graph structure. The GNN-based policy network performs on par with the best methods on traditional benchmarks, but is shown to be very effective for transfer scenarios: changing robot size or disabling its components. I believe that the reviewers' concern that the original experiments focused solely on centipedes and snakes was (at least partially) addressed in the author response: they showed that their GNN-based model outperforms MLPs on a dataset of 2D walkers. Overall: an interesting application; modeling robot morphology is an under-explored direction; the paper is well written; experiments are sufficiently convincing (esp. after addressing the concerns re diversity and robustness).
train
[ "BkjfRYLxz", "r1r7Vd9gf", "Hy24AAnlM", "HyaBB7SmM", "SkNdxmSmf", "rkn4g7S7f", "ryPylXB7G", "S1FRPUgWG", "SJ8gfJebG", "rJm-vMaxG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public", "public" ]
[ "This paper proposes NerveNet to represent and learn structured policy for continuous control tasks. Instead of using the widely adopted fully connected MLP, this paper uses Graph Neural Networks to learn a structured controller for various MuJoco environments. It shows that this structured controller can be easily transferred to different tasks or dramatically speed up the fine-tuning of transfer.\n\nThe idea to build structured policy is novel for continuous control tasks. It is an exciting direction since there are inherent structures that should be exploited in many control tasks, especially for locomotion. This paper explores this less-studied area and demonstrates promising results.\n\nThe presentation is mostly clear. Here are some questions and a list of minor suggestions:\n1) In the Output Model section, I am not sure how the controller is shared. It first says that \"Nodes with the same node type should share the instance of MLP\", which means all the \"joint\" nodes should share the same controller. But later it says \"Two LeftHip should have a shared controller.\" What about RightHip? or Ankle? They all belongs to the same node type \"joint\". Am I missing something here? It seems that in this paper, weights sharing is an essential part of the structured policy, it would be great if it can be described in more details.\n\n2) In States Update of Propagation Model Section, it is not clear how the aggregated message is used in eq. (4).\n\n3) Typo in Caption of Table 1: CentipedeFour not CentipedeSix.\n\n4) If we just use MLP but share weights among joints (e.g. the weights from observation to action of all the LeftHips are constrained to be same), how would it compare to the method proposed in this paper?\n\nIn summary, I think that it is worthwhile to develop structured representation of policies for control tasks. It is analogue to use CNN that share weights between kernels for computer vision tasks. I believe that this paper could inspire many follow-up work. For this reason, I would recommend accepting this paper.", "The submission proposes incorporation of additional structure into reinforcement learning problems. In particular, the structure of the agent's morphology. The policy is represented as a graph neural network over the agent's morphology graph and message passing is used to update individual actions per joint.\n\nThe exposition is fairly clear and the method is well-motivated. I see no issues with the mathematical correctness of the claims made in the paper. However, the paper could benefit from being shorter by moving some details to the appendix (such as much of section 2.1 and PPO description).\n\nRelated work section could consider the following papers:\n\n\"Discrete Sequential Prediction of Continuous Actions for Deep RL\"\nAnother approach that outputs actions per joint, although in a general manner that does not require morphology structure\n\n\"Generalized Biped Walking Control\"\nConsiders the task of interactively changing limb lengths (your size transfer task) in a zero-shot manner, albeit with a non-neural network controller\n\nThe experimental results investigate the effects of various algorithm parameters, which is appreciated. However, a wider range of experiments would have been helpful to judge the usefulness of the proposed policy representation. 
In addition to robustness to limb length and disability perturbations, it would have been very nice to see multi-task learning that takes advantage of body structure (such as learning to reach for target with arms while walking with legs and being able to learn those independently, for example).\n\nHowever, I do think using agent morphology is an under-explored idea and one that is general, since we tend to have access to this structure in continuous control tasks for the time being. As a result, I believe this submission would be of interest to ICLR community.", "The authors present an interesting application of Graph Neural Networks to learning policies for controlling \"centipede\" robots of different lengths. They leverage the non-parametric nature of graph neural networks to show that their approach is capable of transferring policies to different robots more quickly than other approaches. The significance of this work is in its application of GNNs to a potentially practical problem in the robotics domain. The paper suffers from some clarity/presentation issues that will need to be improved. Ultimately, the contribution of this paper is rather specific, yet the authors show the clear advantage of their technique for improved performance and transfer learning on some agent types within this domain.\n\nSome comments:\n- Significant: A brief statement of the paper's \"contributions\" is also needed; it is unclear at first glance what portions of the work are the authors' own contributions versus prior work, particularly in the section describing the GNN theory.\n- Abstract: I take issue with the phrase \"are significantly better than policies learned by other models\", since this is not universally true. While there is a clear benefit to their technique for the centipede and snake models, the performance on the other agents is mostly comparable, rather than \"significantly better\"; this should be reflected in the abstract.\n- Figure 1 is instructive, but another figure is needed to better illustrate the algorithm (including how the state of the world is mapped to the graph state h, how these \"message\" are passed between nodes, and how the final graph states are used to develop a policy). This would greatly help clarity, particularly for those who have not seen GNNs before, and would make the paper more self-contained and easier to follow. The figure could also include some annotated examples of the input spaces of the different joints, etc. Relatedly, Sec. 2.2.2 is rather difficult to follow because of the lack of a figure or concrete example (an example might help the reader understand the procedure without having to develop an intuition for GNNs).\n- There is almost certainly a typo in Eq. 
(4), since it does not contain the aggregated message \\bar{m}_u^t.\n\nSmaller issues / typos:\n- Abstract: please spell out spell out multi-layer perceptrons (MLP).\n- Sec 2.2: \"servers\" should be \"serves\"\n- \"performance By\" on page 4 is missing a \".\"\n\nPros:\n- The paper presents an interesting application of GNNs to the space of reinforcement learning and clearly show the benefits of their approach for the specific task of transfer learning.\n- To the best of my knowledge, the paper presents an original result and presents a good-faith effort to compare to existing, alternative systems (showing that they outperform on the tasks of interest).\n\nCons:\n- The contributions of the paper should be more clearly stated (see comment above).\n- The section describing their approach is not \"self contained\" and is difficult for an unlearned reader to follow.\n- The problem the authors have chosen to tackle is perhaps a bit \"specific\", since the performance of their approach is only really shown to exceed the performance on agents, like centipedes or snakes, which have this \"modular\" quality.\n\nI certainly hope the authors improve the quality of the theory section; the poor presentation here brings down the rest of the paper, which is otherwise an easy read.", "We thank all reviewers for their valuable comments and suggestions. We here address the common concerns/suggestions and summarize the modifications in the latest revision.\n\nWe first emphasize our contributions. In our work, we propose a model that exploits structure priors for continuous reinforcement learning. We show competitive performance on standard tasks, and focus on showing the model’s ability to perform transfer learning to different agents. To the best of our knowledge, we are the first to address transfer learning between different agents for continuous control tasks, whose even simplest sub-problems have been beyond the ability of the best models we have right now. \n\n\n1. Multi-task Learning\nOne common concern among the reviewers is the lack of more diverse transfer experiments. We address this concern by performing an extensive set of multi-task experiments in our latest revision. In particular, we trained one single network to control a broad range of diverse agents.\n\nWe create a Walker task-set which contains five 2d walkers. They have very different dynamics, from single legged hopper to two-legged ostrich with a tail and neck. Specifically, Walker-HalfHumanoid and Walker-Hopper are variants of Walker2d and Hopper, respectively, in the original MuJoCo Benchmarks. On the other hand, Walker-Horse (two-legged), Walker-Ostrich (two-legged), and Walker-Wolf (four-legged) are agents mimicking real animals. Just like real-animals, some of the agents have tails and a neck to help them to balance. The detailed schematic figures are in the appendix.\n\nWe refer to training separate models of different weights for each agent as single-task learning, and sharing weights across multiple agents as multi-task learning. 
The results including our method and other competitors, e.g., MLP, are listed below:\n\nTABLE 1\n Model | HalfHum| Hopper | Ostrich | Wolf | Horse | Average\n\n MLP | Reward | 1775.75 | 1369.6 | 1198.9 | 1249.23 | 2084.1 | /\n MLP | Ratio | 57.7% | 62.0% | 48.2% | 54.5% | 69.7% | 58.6%\n\nTreeNet | Reward | 237.81 | 417.27 | 224.1 | 247.03 | 223.34 | /\nTreeNet | Ratio | 79.3% | 98.0% | 57.4% | 141.2% | 99.2% | 94.8%\n\nNerveNet| Reward| 2536.52 | 2113.6 | 1714.6 | 2054.5 | 2343.6 | /\nNerveNet| Ratio | 96.3% | 101.8% | 98.8% | 105.9% | 106.4% | 101.8%\n\n(Ratio indicates “the reward of multi-task” / “the reward of single-task baseline”)\n\nFrom the results, we can see that our method significantly outperforms other models. The MLP models failed to learn a shared representation over the different tasks. Their performance drops significantly when shifting from single-task to multi-task learning, while the performance of the NerveNet remains the same. We also show training curves in the updated version of the paper.\n\n\n2. Robustness \nTo assess the model’s generalization ability, we added an experiment to evaluate how well the control policies can generalize from the training environment to slightly perturbed test environments, e.g. varying the mass or the torque of the walkers’ joints. \n\nAs pointed out by [1], the policy learned by MLP is very unstable and is typically overfit. Different from [1], where the authors improve the robustness via model ensembles, we show that NerveNet is able to improve robustness of the agent from the perspective of model’s structure, which means that NerveNet is able to improve robustness of the agent by exploiting priors and weight sharing in the model’s structure. \nIn this experiment, we perturbed the mass of the geometries (rigid bodies) in MuJoCo as well as the scale of the forces of the joints. We used the pretrained models with similar performance on the original task for both the MLP and NerveNet. We tested the performance in five agents from the “Walker” task set. The average performance is recorded in the figure below, and the specific details are summarized in the appendix of the latest paper revision.\nShown in the below table are the results of \"performance of perturbed agents\" / \"training performance\".\n\nTABLE 2\n Model | HalfHum | Hopper | Wolf | Ostrich | Horse | Average\n\nMass | MLP | 33.28% | 74.04% | 94.68% | 59.23% | 40.61% | 60.37%\nMass | NerveNet | 95.87% | 93.24% | 90.13% | 80.2% | 69.23% | 85.73%\n\nSTR | MLP | 25.96% | 21.77% | 27.32% | 30.08% | 19.80% | 24.99%\nSTR | NerveNet | 31.11% | 42.20% | 42.84% | 31.41% | 36.54% | 36.82%\n\nIn summary, we added (1) experiments on multi-task learning, (2) experiments on testing robustness, (3) included improved visualizations of the zero-shot learning experiments, and added (4) more details in the appendix, e.g., hyper-parameters, the schematic figures of the “Walker” task-set agents.\n\n[1] Towards Generalization and Simplicity in Continuous Control\n", "We thank the reviewer for the great suggestions regarding the quality of the paper, and we would like to bring your attention to the general comment above. We added new experiments in the latest revision.\n\nQ1: How does MLP with share weights among joints perform?\nA1: We name the variant proposed by the reviewer as MLP-Bind. Note that MLP-Bind and TreeNet are equivalent for the Snake agents, since the snakes only have one type of joint. We ran MLP-Bind for the zero-shot and fine-tuning experiments on centipedes. 
We summarize the results here: \n\n1. Zero-shot performances of MLP-Bind and MLPAA are very similar. Both models have limited performance in the zero-shot scenario. Attached below is a sample table for several transfer tasks in centipedes (full results in the appendix of the revised draft)\n\n2. For fine-tuning on ordinary centipedes from pretrained models, the performance is only slightly worse than when using MLP. In our experiment, in the two curves of transferring from CentipedeFour to CentipedeEight as well as CentipedeSix to CentipedeEight, MLP-Bind’s reward is 100-500 worse than MLPAA during fine-tuning.\n\n3. For the Crippled agents, the MLP-Bind agent is stuck at around 800 reward. This might be due to MLP-Bind not being able to efficiently exploit the information of crippled and well-functioning legs.\n\nFor the Average Reward:\n----------------------------------------------------------------------------\n Task | MLPAA | MLP-Bind | NerveNet \n----------------------------------------------------------------------------\n4to6 | 109.4 | 62.13 | 139.6\n4to8 | 18.2 | 24.62 | 44.3\n----------------------------------------------------------------------------\n6to8 | 21.1 | 235.97 | 1674.9\n6to10 | -42.4 | 18.65 | 940.0\n----------------------------------------------------------------------------\n4toCp06 | -5.1 | 11.47 | 47.6\n4toCp08 | 5.1 | 7.34 | 40.0\n----------------------------------------------------------------------------\n6toCp08 | 36.5 | 29.09 | 523.6\n6toCp10 | 12.8 | 8.32 | 504.0\n----------------------------------------------------------------------------\n\nFor the average distance the agents could run in one episode (see updated version for the details of average distance, which is another metrics to evaluate how well the agents perform.)\n----------------------------------------------------------------------------\n Task | MLPAA | MLP-Bind | NerveNet \n----------------------------------------------------------------------------\n4to6 | 545.3 | 62.13 | 577.3\n4to8 | 62.0 | 24.62 | 146.9\n----------------------------------------------------------------------------\n6to8 | 87.8 | 235.97 | 10612.6\n6to10 | -17.0 | 18.65 | 6343.6\n----------------------------------------------------------------------------\n4toCp06 | -22.5 | 11.47 | 91.1\n4toCp08 | -26.9 | 7.34 | 80.1\n----------------------------------------------------------------------------\n6toCp08 | 138.3 | 29.09 | 3117.3\n6toCp10 | 13.6 | 8.32 | 3230.3\n----------------------------------------------------------------------------\n\nThe details of experiments we performed are updated in the appendix of the latest version.\n\n\nQ2: How the controller is shared? (“In the Output Model section, I am not sure how the controller is shared. It first says that \"Nodes with the same node type should share the instance of MLP\", which means all the \"joint\" nodes should share the same controller.”)\nA2: We clarified the Output Model section in the latest version. \nNervnet: nodes of the same type (joint, root, body) share the same state update function, e.g., GRU weights. \nEvery node, regardless of its node type, shares the same output MLP instance.\n\n\nQ3: Typos and presentation regarding to Eq. 
(4).\nA3: We improved the clarity as per suggestions.\n", "We thank the reviewer for the great suggestion regarding multi-task learning.\n\nQ1: “However, a wider range of experiments would have been helpful to judge the usefulness of the proposed policy representation … it would have been very nice to see multi-task learning that takes advantage of body structure”\nA1: We added the multi-task learning experiment. Please see the general comment above with additional experiments.\n\n\nQ2: Moving some details to the appendix.\nA2: We revised the paper and shortened the model’s section.\n\n\nQ3: Adding references.\nA3: Thanks for pointing out these two references. We included them in the latest version.\n\nWe believe that the focus of “Discrete Sequential Prediction of Continuous Actions for Deep RL” paper (action space discretization) and the focus of our paper (using structure information) are different. Combining these ideas might further boost the performance of the agents.\n\nFor the second paper, we agree that model-based control has been well studied, and this paper should be cited.", "We thank the reviewer for the careful reading of our paper and suggestions.\n\nQ1: The problem the authors have chosen to tackle is perhaps a bit specific\nA1: Please see the general comment above with additional experiments.\n\n\nQ2: A brief statement of the paper's \"contributions\" is also needed.\nA2: We made the statement more clear in the latest version. Specifically, our main contribution is in exploring graph neural networks in reinforcement learning and investigating their ability to transfer structure. To the best of our knowledge, we are the first to address transfer learning for continuous control tasks. We also make small contributions on the model side, i. e. GNNs. In particular, we introduce node type and associate an instance of an update function with each type. This fits the RL setting very well and also increases the model’s capacity. \n\n\nQ3: Abstract: I take issue with the phrase \"are significantly better than policies learned by other models.\nA3: We agree and will modify the wording in the abstract. Our main claims were with respect to transferability, since our model has significant improvement in the zero-shot and transfer learning tasks.\n\n\nQ4: Another figure is needed to better illustrate the algorithm. Relatedly, Sec. 2.2.2 is rather difficult to follow because of the lack of a figure or concrete example\nA4: We added a new figure (Fig. 2 in the newest revision) to illustrate how the input state is constructed, how messages are passed between the nodes and how the final policy is being output.\n\n\nQ5: Typo in Eq. (4) and other minor issues.\nA5: Thanks for pointing this out. 
We corrected them in the latest version.\n", "Thanks for pointing out this paper!\nWe do think we can rephrase our model as a message passing neural network (MPNNs) except some subtle differences, like the message function could take representations of both head and tail of an edge as input arguments in MPNNs whereas ours only takes representation of head to compute the message.\nWe will add the discussion of these connections in the final version.", "I think it is fair to frame their model as a graph neural network, as it closely resembles the \"local transition function\" proposed in the original graph neural network paper (Gori et al., 2009): http://ieeexplore.ieee.org/document/4700287/\n\nThe only architectural difference that the authors propose here is to use a gated per-node update function right after the local transition function is evaluated - this mostly resembles the work from Li et al., 2015: https://arxiv.org/abs/1511.05493 (which is cited).", "Is it possible to write your model as a message passing neural network, as in http://proceedings.mlr.press/v70/gilmer17a.html ? It looks closely related and readers might benefit from any explicit connections that can be made." ]
[ 7, 6, 7, -1, -1, -1, -1, -1, -1, -1 ]
[ 3, 3, 3, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_S1sqHMZCb", "iclr_2018_S1sqHMZCb", "iclr_2018_S1sqHMZCb", "iclr_2018_S1sqHMZCb", "BkjfRYLxz", "r1r7Vd9gf", "Hy24AAnlM", "rJm-vMaxG", "rJm-vMaxG", "iclr_2018_S1sqHMZCb" ]
iclr_2018_HkMvEOlAb
Learning Latent Representations in Neural Networks for Clustering through Pseudo Supervision and Graph-based Activity Regularization
In this paper, we propose a novel unsupervised clustering approach exploiting the hidden information that is indirectly introduced through a pseudo classification objective. Specifically, we randomly assign a pseudo parent-class label to each observation which is then modified by applying the domain specific transformation associated with the assigned label. Generated pseudo observation-label pairs are subsequently used to train a neural network with Auto-clustering Output Layer (ACOL) that introduces multiple softmax nodes for each pseudo parent-class. Due to the unsupervised objective based on Graph-based Activity Regularization (GAR) terms, softmax duplicates of each parent-class are specialized as the hidden information captured through the help of domain specific transformations is propagated during training. Ultimately we obtain a k-means friendly latent representation. Furthermore, we demonstrate how the chosen transformation type impacts performance and helps propagate the latent information that is useful in revealing unknown clusters. Our results show state-of-the-art performance for unsupervised clustering tasks on MNIST, SVHN and USPS datasets, with the highest accuracies reported to date in the literature.
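To make the pipeline in this abstract concrete, here is a minimal sketch of the pseudo parent-class construction, assuming image arrays and rotations as the domain-specific transformations (the particular angles, and the choice of rotation at all, are illustrative assumptions — the paper itself studies how the choice of transformation affects clustering):

import numpy as np
from scipy.ndimage import rotate

def make_pseudo_parent_classes(images, angles=(0, 90, 180, 270)):
    # Each transformation index defines one pseudo parent-class: label k is
    # assigned to copies of the data transformed by angles[k].
    pseudo_x, pseudo_y = [], []
    for label, angle in enumerate(angles):
        for img in images:
            pseudo_x.append(rotate(img, angle, reshape=False))
            pseudo_y.append(label)
    return np.stack(pseudo_x), np.array(pseudo_y)

The resulting observation-label pairs train a network whose ACOL output layer holds several softmax nodes per pseudo parent-class, with the GAR terms specializing those duplicates; k-means on the activations of the layer preceding that output layer then yields the final clusters.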
accepted-poster-papers
The reviewers' concerns regarding novelty and the experimental evaluation have been resolved accordingly, and all reviewers recommend acceptance. I would recommend removing the term "unsupervised" in "unsupervised clustering", as it is redundant: clustering is, by default, assumed to be unsupervised. There is some interest in extending this to non-vision domains; however, this is beyond the scope of the current work.
train
[ "H1_YBgxZz", "r1Eovo2gf", "rkO0PUnlG", "r1aVRRsQG", "SymwYAomz", "HkMoBrhMf", "rkjjzrhzM", "ByAHGrhzz", "r1BObB2Mz", "SkSPQrwGf", "Sy0zGI8Mz", "BkV2yJNWG", "ByUAwgHJz", "Sk_7iiVJG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "public", "author", "public", "author", "public" ]
[ "This paper presents a method for clustering based on latent representations learned from the classification of transformed data after pseudo-labellisation corresponding to applied transformation. Pipeline: -Data are augmented with domain-specific transformations. For instance, in the case of MNIST, rotations with different degrees are applied. All data are then labelled as \"original\" or \"transformed by ...(specific transformation)\". -Classification task is performed with a neural network on augmented dataset according to the pseudo-labels. -In parallel of the classification, the neural network also learns the latent representation in an unsupervised fashion. -k-means clustering is performed on the representation space observed in the hidden layer preceding the augmented softmax layer. \n\nDetailed Comments:\n(*) Pros\n-The method outperforms the state-of-art regarding unsupervised methods for handwritten digits clustering on MNIST.\n-Use of ACOL and GAR is interesting, also the idea to make \"labeled\" data from unlabelled ones by using data augmentation.\n\n(*) Cons\n-minor: in the title, I find the expression \"unsupervised clustering\" uselessly redundant since clustering is by definition unsupervised.\n-Choice of datasets: we already obtained very good accuracy for the classification or clustering of handwritten digits. This is not a very challenging task.\nAnd just because something works on MNIST, does not mean it works in general. \nWhat are the performances on more challenging datasets like colored images (CIFAR-10, labelMe, ImageNet, etc.)?\n-This is not clear what is novel here since ACOL and GAR already exist. The novelty seems to be in the adaptation to GAR from the semi-supervised to the unsupervised setting with labels indicating if data have been transformed or not.\n\n\nMy main problem was about the lack of novelty. The authors clarified this point, and it turned out that ACOL and GAR have never published elsewhere except in ArXiv. The other issue concerned the validation of the approach on databases other than MNIST. The author also addressed this point, and I changed my scores accordingly. ", "This paper utilizes ACOL algorithm for unsupervised learning. ACOL can be considered a type of semi-supervised learning where the learner has access only to parent-class information (for example in digit recognition whether a digit is bigger than 5 or not) and not the sub-class information (number between 0-9). Given that in many applications such parent-class supervised information is not available, the authors of this paper propose domain specific pseudo parent-class labels (for example transformed images of digits) to adapt ACOL for unsupervised learning. The authors also modified affinity and balance term utilized in GAR (as part of ACOL algorithm) to improve it. The authors use multiple data sets to study different aspects of the proposed approach.\n\nI updated my scores based on the reviewers responses. It turned out that ACOL and GAR are also originally proposed by the same authors and was only published in arxiv! Because of the double-blind review nature of ICLR, I didn't know these ideas came from the same authors and is being published for the first time in a peer-reviewed venue (ICLR). So my main problem with this paper, lack of novelty, is addressed and my score has changed. Thanks to the reviewer for clarifying this.\n", "The paper is well written and clear. 
The main idea is to exploit a scheme of semi-supervised learning based on ACOL and GAR for an unsupervised learning task. The idea is to introduce the notion of pseudo-labelling. \nPseudo-labels can be obtained by transformations of the original input data.\nThe key point is the definition of the transformations. \nOnly if the design of the transformations captures the latent representation of the input data might the pseudo-labelling improve the performance of the unsupervised learning task.\nSince it is not known in advance what might be a good set of transformations, it is not clear how the model behaves when a large portion of the transformations do not encode the latent representation of the clusters.", "Even if we use the Bag of Words to represent the documents, the proposed approach still needs one or more useful transformations (other than the non-transformation T1 in equation 8) to create other pseudo-classes as expressed in equation 8 in the manuscript. If you have any suggestions about transformations that can be applied to the Bag of Words representation, we would like to apply them in our future work on expanding this approach to the sequential domain.", "The following minor changes have been made in Revision 2 of this article. Please also note that the same manuscript was mistakenly submitted three times while uploading Revision 2, so you can consider the latest submission in the Revision History as Revision 2 during the pdf diff. \n\n1. We added the results of IMSAT (Hu et al., 2017) in Table 3 (Quantitative unsupervised clustering performance in terms of ACC score) for comparison and revised the corresponding comments about this table. That is,\n\n\"Our approach statistically significantly outperforms all the contemporary methods that reported unsupervised clustering performance on MNIST except IMSAT (Hu et al., 2017), which displays very competitive performance with our approach, i.e. 98.32% (±0.08) vs. 98.40% (±0.40). However, results obtained on the SVHN dataset, i.e. 76.80% (±1.30) vs. 57.30% (±3.90), show that our approach statistically significantly outperforms IMSAT on this realistic dataset and defines the current state-of-the-art for unsupervised clustering on SVHN.\"\n\n2. We removed Table 4 (Quantitative unsupervised clustering performance in terms of NMI score) because, out of the 9 approaches used for the performance comparison, only one reported an NMI score. This table has therefore been removed for the sake of simplicity, as it introduces no information beyond Table 3. \n\n------------------------------------------------------------------------------------------------------------\n\n[Hu et al., 2017]: Learning Discrete Representations via Information Maximizing Self-Augmented Training, \nauthor = {Weihua Hu, Takeru Miyato, Seiya Tokui, Eiichi Matsumoto, Masashi Sugiyama},\nbooktitle = {Proceedings of the 34th International Conference on Machine Learning,\n {ICML} 2017, Sydney, NSW, Australia, 6-11 August 2017}", "We wanted to thank the reviewers for their time spent on our article and for both their encouraging remarks and valuable feedback. Two of the most critical comments, regarding novelty and the choice of datasets, were addressed separately in response to individual reviewers. In summary, the reported state-of-the-art results and the datasets chosen for this study (three, not one as one reviewer suggested) mirror those published in reputable venues such as ICLR, ICML, NIPS, etc. 
for direct comparison. In addition, both the framework (GAR + ACOL) and its extension to unsupervised learning are completely novel and have not appeared in any peer-reviewed publication until now. More detailed explanations can be found under responses to each reviewer.", "We’d like to thank the reviewer for their encouraging remarks and feedback. The reviewer makes an excellent point on the impact of transformations in clustering accuracy – specifically for datasets where domain expertise is not readily available. We believe that for image clustering problems, the focus of this article, the proposed domain transformations work sufficiently well (state-of-the-art) based on comparative results with the recent literature on these datasets as laid out in the article. For other domains, the exploration of the effects and optimality of transformations represents the most immediate and honestly, exciting future work which will be addressed in subsequent articles as discussed in the final section.", "We’d like to thank the reviewer for their time spent reviewing the paper and their valuable feedback and comments. We wanted to address the two specific comments on the approach being incremental and the number of datasets.\n\nBoth methods described in the article, ACOL and GAR are completely novel and this paper, if chosen for publication, will be the very first time they appear in peer-reviewed literature (a major reason why we chose ICLR). Their adaptation to unsupervised settings – is also completely novel by extension with domain specific transformation a key factor in clustering performance. \n\nIn this paper, we have actually used three (not one as suggested by the reviewer) image datasets MNIST, USPS and SVHN for comparison chosen primarily for uniformity and clarity between this article and many other recent ones published in conferences just like this one, ICLR, ICML, NIPS etc. MNIST and USPS datasets (hand-written digits) might be seen as simple datasets since the existing methods in the literature of clustering have already achieved very good performances on these two datasets. However, they are still used very commonly for benchmarks, especially on significantly different approaches such as the one proposed here. More importantly, unlike semi-supervised and supervised settings, SVHN (a more realistic dataset with colored street view house numbers) still constitutes a very challenging task for unsupervised settings. This difficulty might be hard to observe in the first version of our paper as at the time of submission for ICLR 2018 we weren’t able to find any other work studying this dataset.\n\nThanks to one of the commenters on the paper, we looked at IMSAT[1] as the previous state-of-art approach for clustering on SVHN, which was very recently published in ICML 2017 (a month before the submission deadline for ICLR – a reason why it wasn’t included in the first version). They have also presented the performances of other approaches, such as DEC[2], on the challenging SVHN dataset. Please see the clustering performances of two approaches reported by IMSAT[1] compared to the proposed approach below:\n\nDEC: 11.9%(±0.40) \nIMSAT: 57.3%(±3.90) \nOur Approach: 76.8%(±1.30) \n\nWe will include this new article and believe this comparison would further reinforce the state-of-the-art capability and accurateness of our approach on a multitude of datasets. 
\n\n----------------------------------------------------------------------------------------------------------------------------------------------------\n\n[1]: Learning Discrete Representations via Information Maximizing Self-Augmented Training, \nauthor = {Weihua Hu, Takeru Miyato, Seiya Tokui, Eiichi Matsumoto, Masashi Sugiyama},\nbooktitle = {Proceedings of the 34th International Conference on Machine Learning,\n {ICML} 2017, Sydney, NSW, Australia, 6-11 August 2017}\n\n[2]: Unsupervised Deep Embedding for Clustering Analysis},\n author = {Junyuan Xie, Ross B. Girshick, Ali Farhadi},\n booktitle = {Proceedings of the 33nd International Conference on Machine Learning,\n {ICML} 2016, New York City, NY, USA, June 19-24, 2016} \n", "We’d like to thank the reviewer for their time spending reviewing the paper and their valuable feedback and comments. We wanted to address the comments where we thought the reviewer could use some clarification especially in regards to the novelty of the approach.\n\nIn response to choice of wording in title:\n\nWe agree with the reviewer that the word unsupervised can be removed from the title while preserving the same meaning.\n\nIn response to the choice of datasets: \n\nWe believe the state-of-the-art performance of the proposed approach on the SVHN dataset may not have been properly emphasized in the article.\n\nIn this paper, we used three image datasets (not just MNIST as suggested by the reviewer) MNIST, USPS and SVHN for comparison chosen primarily for uniformity and clarity between this article and many other recent ones published in conferences just like this one, ICLR, ICML, NIPS etc. MNIST and USPS datasets (hand-written digits) might be seen as simple datasets since the existing methods in the literature of clustering have already achieved very good performances on these two datasets. However, they are still used very commonly for benchmarks, especially on significantly different approaches such as the one proposed here. More importantly, unlike semi-supervised and supervised settings, SVHN (a more realistic dataset with colored street view house numbers) still constitutes a very challenging task for unsupervised settings. This difficulty might be hard to observe in the first version of our paper as at the time of submission for ICLR 2018 we weren’t able to find any other work studying this dataset.\n\nThanks to one of the commenters on the paper, we looked at IMSAT[1] as the previous state-of-art approach for clustering on SVHN, which was very recently published in ICML 2017 (a month before the submission deadline for ICLR – a reason why it wasn’t included in the first version). They have also presented the performances of other approaches, such as DEC[2], on the challenging SVHN dataset. Please see the clustering performances of two approaches reported by IMSAT[1] compared to the proposed approach below:\n\nDEC: 11.9%(±0.40) \nIMSAT: 57.3%(±3.90) \nOur Approach: 76.8%(±1.30) \n\nWe will include this new article and believe this comparison would further reinforce the state-of-the-art capability and accurateness of our approach. \n\nFinally, to the best of our knowledge, clustering on the datasets such as CIFAR-10, labelMe, ImageNet based on their raw pixel values is not a very common practice in the literature of clustering as raw pixels are not suited for this goal with color information being dominant [1]. Existing approaches perform the clustering on the extracted features from these datasets. 
However, this approach doesn’t fit the proposed clustering technique in this paper, because transformations generating the pseudo classes are domain-specific and so they are directly applied on the input space. Therefore, generalizing the proposed clustering technique to these datasets requires an orthogonal challenge which we already identified as future work to study how to apply these domain-specific transformations that will present rich information for the clustering task at hand. \n\nIn response to the comments about novelty: \n\nWe don’t think this comment – quote “This is not clear what is novel here since ACOL and GAR already exist. The novelty seems to be in the adaptation to GAR from the semi-supervised to the unsupervised setting with labels indicating if data have been transformed or not” reflects the reality as both methods ACOL and GAR are completely novel and this paper, if chosen for publication, will be the very first time they appear in peer-reviewed literature (a major reason why we chose ICLR). Their adaptation to unsupervised settings – is also completely novel by extension with domain specific transformation a key factor in clustering performance. \n\n----------------------------------------------------------------------------------------------------------------------------------------------------\n\n[1]: Learning Discrete Representations via Information Maximizing Self-Augmented Training, \nauthor = {Weihua Hu, Takeru Miyato, Seiya Tokui, Eiichi Matsumoto, Masashi Sugiyama},\nbooktitle = {Proceedings of the 34th International Conference on Machine Learning,\n {ICML} 2017, Sydney, NSW, Australia, 6-11 August 2017}\n\n[2]: Unsupervised Deep Embedding for Clustering Analysis},\n author = {Junyuan Xie, Ross B. Girshick, Ali Farhadi},\n booktitle = {Proceedings of the 33nd International Conference on Machine Learning,\n {ICML} 2016, New York City, NY, USA, June 19-24, 2016} \n", "Thanks for the reply. You mentioned that \"we haven't used the Reuters dataset (or any other text categorization dataset) because generalizing our approach to sequential domain requires a future work (and possibly a new article) \". However, as far as I know, DEC also report the results on Reuters datasets and you can use the Bag of Words to represent the documents. Hence, you do not need to use sequential methods and you can compare with the baselines directly.", "First of all, we would like to thank you for your comment.\n\nVery briefly, we haven't used the Reuters dataset (or any other text categorization dataset) because generalizing our approach to sequential domain requires a future work (and possibly a new article) to study the domain-specific transformations that will present a useful knowledge about the clustering task at hand on this domain. \n\nIn this paper, we used three image datasets MNIST, USPS and SVHN for the comparison. MNIST and USPS datasets (hand-written digits) might be seen as simple datasets since the existing methods in the literature of clustering have already achieved very good performances on these two datasets. However, unlike for semi-supervised and supervised settings, SVHN (street view house numbers - more realistic) is still a difficult and unsolved problem for the clustering task. 
This difficulty might be hard to observe in the first version of our paper as we couldn't find a method reporting clustering performance on SVHN to compare our approach at the time of submission for ICLR 2018.\n\nIMSAT[1] is the previous state-of-art approach for clustering on SVHN, which was very recently published on ICML 2017 (the reason we missed this method in the first version). They also presented the performances of other approaches, such as DEC[2], on the SVHN dataset. Please see the clustering performances of two approaches reported by IMSAT[1] and also our approach below.\n\nDEC: 11.9%(±0.40)\nIMSAT: 57.3%(±3.90)\nOur Approach: 76.8%(±1.30)\n\nWe think this comparison would help the reader better observe the capability and accurateness of our approach. \n\n\nBesides, please note the performances of DEC[2] and IMSAT[1] on Reuters. \nDEC: 67.3%(±0.20)\nIMSAT: 71.0%(±4.90)\n\nBy comparing the performances of these two models on the SVHN and on the Reuters datasets, we believe it is not totally wrong to say that SVHN presents a harder problem for clustering. We just want to note that not using the Reuters dataset for the comparison is not due to its complexity or largeness but due to the requirement for a future work for generalizing our approach to other domains such as text categorization. However, there is, of course, no guarantee that our approach will perform well on other domains. \n\nThank you very much for your feedback.\n\n[1]: Learning Discrete Representations via Information Maximizing Self-Augmented Training, \nauthor = {Weihua Hu, Takeru Miyato, Seiya Tokui, Eiichi Matsumoto, Masashi Sugiyama},\nbooktitle = {Proceedings of the 34th International Conference on Machine Learning,\n {ICML} 2017, Sydney, NSW, Australia, 6-11 August 2017}\n\n[2]: Unsupervised Deep Embedding for Clustering Analysis},\n author = {Junyuan Xie, Ross B. Girshick, Ali Farhadi},\n booktitle = {Proceedings of the 33nd International Conference on Machine Learning,\n {ICML} 2016, New York City, NY, USA, June 19-24, 2016} ", "All the datasets in the paper are quite simple. The paper should compare with other baselines on Reuters, since it is much bigger.", "First of all, we would like to thank you for your comment.\n\nWe'll cite this work in the upcoming revision. But I think revisions are currently not allowed as the review process is ongoing. \n\nSo as a quick answer to your comment, I can say that it's hard to observe any statistically significant differences between the performances of these two models on the MNIST dataset. That is, 98.4 (0.4) for IMSAT vs. 98.32%(±0.08) for our approach. But the SVHN dataset provides a more solid basis of comparison. While IMSAT achieves 57.3 (3.9) clustering accuracy on SVHN, our approach outperforms IMSAT by achieving 76.80%(±1.30). I think it's also worth noting the mechanical differences between these two approaches. For example, while IMSAT uses 960-dimensional GIST features for SVHN, our approach employs raw pixels (32x32x3). 
Besides, IMSAT uses VAT based (or RPT based) regularization while we adopt graph-based regularization.\n\nI think citing this work will further be helpful for the evaluation of our approach, as it also provides the performances of other approaches (like DEC) on the SVHN dataset.\n\nThank you very much for your feedback.\n", "I believe that you should also cite “Learning Discrete Representations via Information Maximizing Self-Augmented Training” (ICML 2017) http://proceedings.mlr.press/v70/hu17b.html.\nThis paper is closely related to your work and is also about unsupervised clustering using deep neural networks.\nAs far as I know, the proposed method, IMSAT, is the current state-of-the-art method in deep clustering (November 2017). Could you compare your results against their result?" ]
[ 6, 7, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 5, 4, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_HkMvEOlAb", "iclr_2018_HkMvEOlAb", "iclr_2018_HkMvEOlAb", "SkSPQrwGf", "iclr_2018_HkMvEOlAb", "iclr_2018_HkMvEOlAb", "rkO0PUnlG", "r1Eovo2gf", "H1_YBgxZz", "Sy0zGI8Mz", "BkV2yJNWG", "iclr_2018_HkMvEOlAb", "Sk_7iiVJG", "iclr_2018_HkMvEOlAb" ]
iclr_2018_HJIoJWZCZ
Adversarial Dropout Regularization
We present a domain adaptation method for transferring neural representations from label-rich source domains to unlabeled target domains. Recent adversarial methods proposed for this task learn to align features across domains by ``fooling'' a special domain classifier network. However, a drawback of this approach is that the domain classifier simply labels the generated features as in-domain or not, without considering the boundaries between classes. This means that ambiguous target features can be generated near class boundaries, reducing target classification accuracy. We propose a novel approach, Adversarial Dropout Regularization (ADR), which encourages the generator to output more discriminative features for the target domain. Our key idea is to replace the traditional domain critic with a critic that detects non-discriminative features by using dropout on the classifier network. The generator then learns to avoid these areas of the feature space and thus creates better features. We apply our ADR approach to the problem of unsupervised domain adaptation for image classification and semantic segmentation tasks, and demonstrate significant improvements over the state of the art.
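As a minimal sketch of the critic described in this abstract — assuming a PyTorch classifier whose forward pass keeps dropout active, so that two calls yield two stochastically perturbed predictions; the symmetric-KL form of the sensitivity term and the small logarithm constant are assumptions drawn from the discussion in the reviews below rather than a verbatim reproduction of the paper's equations:

import torch
import torch.nn.functional as F

def dropout_sensitivity(classifier, features):
    # Two forward passes with different dropout masks give two "views" of the
    # classifier; their symmetric KL divergence measures how sensitive the
    # prediction on these features is to dropout noise.
    p1 = F.softmax(classifier(features), dim=1)
    p2 = F.softmax(classifier(features), dim=1)
    kl_12 = (p1 * (torch.log(p1 + 1e-6) - torch.log(p2 + 1e-6))).sum(dim=1)
    kl_21 = (p2 * (torch.log(p2 + 1e-6) - torch.log(p1 + 1e-6))).sum(dim=1)
    return (kl_12 + kl_21).mean()

In the adversarial game, the critic is trained to maximize this sensitivity on target features while the generator is trained to minimize it, which pushes target features away from regions near the source decision boundary.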
accepted-poster-papers
The general consensus is that this method provides a practical and interesting approach to unsupervised domain adaptation. One reviewer had concerns with comparing to state of the art baselines, but those have been addressed in the revision. There were also issues concerning correctness due to a typo. Based on the responses, and on the pseudocode, it seems like there wasn't an issue with the results, just in the way the entropy objective was reported. You may want to consider reporting the example given by reviewer 2 as a negative example where you expect the method to fail. This will be helpful for researchers using and building on your paper.
test
[ "HJvZyW07M", "HJ4p6dFeG", "rJO3y_qgz", "Hy6M2mybG", "Sy8PFkAmM", "rkiADJZmz", "BkUZUXp7f", "HytVlE6mG", "ryU_S767z", "HkW5HWbQz", "rJSsGZbmG", "HyVjZdBGz", "H1TwZ_HMG", "SJB7-_BGM" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "public", "public", "author", "author", "author", "public", "public", "author", "author", "author" ]
[ "We have double checked our implementation and it was icorrect, so the error was only in the equation written in the original paper draft. Thus the notation error did not affect our experiments.\n\nThe codes are here. We are using Pytorch.\nThis is the minimized objective.\ndef entropy(self,output):\n prob = F.softmax(output)\n prob_m = torch.mean(prob,0)\n return torch.sum(prob_m*torch.log(prob_m+1e-6))\nprob is the output of the classifier, which is a matrix of MxC dimension. \nM indicates the number of samples in mini-batch and C is the number of classes.\nThus, we first calculate the marginal class probability. Then, we calculate the entropy \n", "(Summary)\nThis paper is about learning discriminative features for the target domain in unsupervised DA problem. The key idea is to use a critic which randomly drops the activations in the logit and maximizes the sensitivity between two versions of discriminators.\n\n(Pros)\nThe approach proposed in section 3.2 uses dropout logits and the sensitivity criterion between two softmax probability distributions which seems novel.\n\n(Cons)\n1. By biggest concern is that the authors avoid comparing the method to the most recent state of the art approaches in unsupervised domain adaptation and yet claims \"achieved state of the art results on three datasets.\" in sec5. 1) Unsupervised Pixel-Level Domain Adaptation with Generative Adversarial Networks, Bousmalis et al. CVPR17, and 2) Learning Transferrable Representations for Unsupervised Domain Adaptation, Sener et al. NIPS16. Does the proposed method outperform these state of the art methods using the same network architectures?\n2. I suggest the authors to rewrite the method section 3.2 so that the loss function depends on the optimization variables G,C. In the current draft, it's not immediately clear how the loss functions depend on the optimization variables. For example, in eqns 2,3,5, the minimization is over G,C but G,C do not appear anywhere in the equation. \n3. For the digits experiments, appendix B states \"we used exactly the same network architecture\". Well, which architecture was it?\n4. It's not clear what exactly the \"ENT\" baseline is. The text says \"(ENT) obtained by modifying (Springenberg 2015)\". I'd encourage the authors to make this part more explicit and self-explanatory.\n\n(Assessment)\nBorderline. The method section is not very well written and the authors avoid comparing the method against the state of the art methods in unsupervised DA.", "\nUnsupervised Domain adaptation is the problem of training a classifier without labels in some target domain if we have labeled data from a (hopefully) similar dataset with labels. For example, training a classifier using simulated rendered images with labels, to work on real images. \nLearning discriminative features for the target domain is a fundamental problem for unsupervised domain adaptation. The problem is challenging (and potentially ill-posed) when no labeled examples are given in the target domain. This paper proposes a new training technique called ADR, which tries to learn discriminative features for the target domain. The key idea of this technique is to move the target-domain features away from the source-domain decision boundary. 
ADR achieves this goal by encouraging the learned features to be robust to the dropout noise applied to the classifier.\n\nMy main concern about this paper is that the idea of \"placing the target-domain features far away from the source-domain decision boundary\" does not necessarily lead to *discriminative features* for the target domain. In fact, it is easy to come up with a counter-example: the target-domain features are far from the *source-domain* decision boundary, but they are all (both the positive and negative examples) on the same side of the boundary, which leads to poor target classification accuracy. The loss function (Equations 2-5) proposed in the paper does not prevent the occurrence of this counter-example.\n\nAnother concern comes from using the proposed idea in training a GAN (Section 4.3). Generating fake images that are far away from the boundary (as forced by the first term of Equation 9) is somewhat opposite to the objective of GAN training, which aims at aligning distributions of real and fake images. Although the second term of Equation 9 tries to make the generated and the real images similar, the paper does not explain how to properly balance the two terms of Equation 9. As a result, I am worried that the proposed method may lead to more mode-collapsing for GAN.\n\nThe experimental evaluation seems solid for domain adaptation. The semi-supervised GANs part seemed significantly less developed and might be weakening rather than strengthening the paper. \n\nOverall the performance of the proposed method is quite well done and the results are encouraging, despite the lack of theoretical foundations for this method. \n", "I think the paper was mostly well-written, the idea was simple and great. I'm still wrapping my head around it and it took me a while to feel convinced that this idea helps with domain adaptation. A better explanation of the intuition would help other readers. The experiments were extensive and show that this is a solid new method for trying out for any adaptation problem. This also shows how to better utilize task models associated with GANs and domain adversarial training, as used eg. by Bousmalis et al., CVPR 2017, or Ganin et al, ICML 2015, Ghifary et al, ECCV 2016, etc.\n\nI think important work was missing in related work for domain adaptation. I think it's particularly important to talk about pixel/image-level adaptations eg CycleGAN/DiscoGAN etc and specifically as those were used for domain adaptation such as Domain Transfer Networks, PixelDA, etc. Other works like Ghifary et al, 2016, Bousmalis et al. 2016 could also be cited in the list of matching distributions in hidden layers of a CNN.\n\nSome specific comments: \n\nSect. 3 paragraph 2 should be much clearer, it was hard to understand.\n\nIn Sect. 3.1 you mention that each node of the network is removed with some probability; this is not true. it's each node within a layer associated with dropout (unless you have dropout on every layer in the network). It also wasn't clear to me whether C_1 and C_2 are always different. If so, is the symmetric KL divergence still valid if it's minimizing the divergence of distributions that are different in every iteration? (Nit: capitalize Kullback Leibler)\n\nEq.3 I think the minus should be a plus?\n\nFig.3 should be improved, it wasn't well presented and a few labels as to what everything is could help the reader significantly. It also seems that neuron 3 does all the work here, which was a bit confusing to me. 
Could you explain that?\n\nOn p.6 you discuss that you don't use a target validation set as in Saito et al. Is one really better than the other and why? In other words, how do you obtain these fixed hyperparameters that you use? \n\nOn p. 9 you claim that the unlabeled images should be distributed uniformly among the classes. Why is that? ", "Your response below is ***As comment \"The wrong objective and ....\" tells, the objective term should maximize the entropy, not minimize it. The notation was wrong, but our implementation of the experiment was not incorrect. ***\n\nThe error is NOT \"minimize the entropy\", actually you indeed maximized the entropy of p(y|x_u) in the previous version because you minimize the negative of the entropy (min_C E_{x_u}\\sum_{k=1}^{K}p(y=k|x_u)\\log p(y|x_u) equals to max_C -E_{x_u}\\sum_{k=1}^{K}p(y=k|x_u)\\log p(y|x_u)).\nYour error that I pointed out is you used the wrong objective, which did not correspond to your claim \"distributed uniformly among the classes\". Your objective maximizes E_{x_u} H(p(y|x_u)) but the correct way to achieve \"distributed uniformly among the classes\" is to maximize the MARGINAL ENTROPY H(E_{x_u} p(y|x_u)) (see Eq.(6) in the paper CatGAN).\n\nBy the way, the revised objective in the paper is still not correct now (the sum over k is wrong). I do not believe your response \"our implementation of the experiment was not incorrect\" because your explanation of the error is a misunderstanding.\n", "* On p. 9 you claim that the unlabeled images should be distributed uniformly among the classes.\nFirst, the last term E_{x_u}\\sum_{k=1}^{K}p(y=k|x_u)\\log p(y|x_u) in your objective in Eq.(6) \nmin_C L(X_L,Y_L) + L_{adv}(X_u)- L_{adv}(X_g) + E_{x_u}\\sum_{k=1}^{K}p(y=k|x_u)\\log p(y|x_u)\nis the negative entropy term of each unlabeled data. Minimizing it (equivalently, maximize the entropy) can NOT achieve \"distributed uniformly among the classes\" as you claimed. The correct way is to maximize the entropy of the marginal class distribution H(E_{x_u}p(y|x_u)) = H(\\frac{1}{N}\\sum_{i=1}^{N}p(y|x^{i}_u)) as shown in Eq.(6) in the paper CatGAN( \"Unsupervised and semi-supervised learning with categorical generative adversarial networks\").\nIn fact, this term enforces the unlabeled data to locate near the decision boundary, which contradicts with the second term L_{adv}(X_u). Then the objective is self-contradictory, i.e., the second term is to make unlabeled data far away from the boundary and the last term is to make them near the boundary. Thus the performance is worse than your baseline (ImprovedGAN) on CIFAR-10, and far from the state-of-the-art results, e.g. NIPS 2017 (Good semi-supervised learning that requires a bad gan).\n", "As comment \"The wrong objective and ....\" tells, the objective term should maximize the entropy, not minimize it. The notation was wrong, but our implementation of the experiment was not incorrect. We added more discussion on whether our method works well for SSL tasks. As comment 2 tells, the objective we used in SSL can contradict with the other objective, which may degrade the performance of our method. We considered this point and changed some parts of our paper in SSL. \n\nWith regard to comment \"Forcing the GAN to not ...\", we do not have a theoretical analysis of why ADR may help SSL-GAN. From the results of the SSL experiments, we cannot conclude that our method is better than other state-of-the-art methods for SSL. 
We also need further theoretical analysis and improvement to construct a method that works well on SSL, but we have not yet. We changed our paper to emphasize this point.\n\nRevised parts are indicated by red characters.\n", "Thank you for finding and pointing out this error in our notation! Please see our detailed response below in the comment titled: Response to \"The wrong objective and ....\" and \"Forcing the GAN to not ...\"", "The paper you are referring to has not been accepted by any peer-reviewed conference or journal, and has only been posted very recently on arxiv. Therefore, we should not be obligated to compare to their reported results. We do include thorough comparisons with many recent methods in our paper. Also, their method utilizes various data augmentation, which we did not do in most settings (in adaptation for VisDA, we conducted random crops and flipping). ", "There are a lot of works on domain adaptation this year. For example, self-ensembling for domain adaptation (https://arxiv.org/pdf/1706.05208.pdf). And their results seem much better. It would be better to include these methods for comparison.", "\"If the goal is to train a GAN to mimic a distribution only, then our additional objective may not help, but if the goal is to learn features for semi-supervised learning, then our objective helps by forcing the GAN to not generate fake images near the boundary (ambiguous features).\"\n\nThe authors proposed to add their regularization term to (K+1)-class discriminator formulation of GAN-based SSL such as CatGAN (Springenberg 2015) and Improved GAN (Salimans et al. 2016). They argued that generated fake images away from the boundary can help SSL. However, recently there is some theoretical analysis in BadGAN(Dai et al. 2017) proving that improving the generalization over SSL need a \"bad\" generator to generate fake images near the boundary. It seems the conclusions of BadGAN and this paper are totally contradictory. Since BadGAN has some theoretical proofs to support their claim, could the authors give some proofs or analysis of why ADR may help SSL-GAN using fake images not near the boundary? Thanks.\n\nRef\nGood Semi-supervised Learning That Requires a Bad GAN (NIPS 2017)\n", "We uploaded an updated version of the paper with changes highlighted in blue.\n\nTo Reviewer 2\n1., My main concern about this paper is that the idea of \"placing the target-domain features far away from the source-domain decision boundary\" does not necessarily lead to *discriminative features* for the target domain. In fact, it is easy to come up with a counter-example: the target-domain features are far from the *source-domain* decision boundary, but they are all (both the positive and negative examples) on the same side of the boundary, which leads to poor target classification accuracy. The loss function (Equations 2-5) proposed in the paper does not prevent the occurrence of this counter-example.\n\nYes, we understand that there can be such a counter-example with our method. Note that we add a term that discourages target examples from being placed on one side of the boundary. However it is possible in theory that positive and negative examples switch labels, but we find that this does not occur in practice, and our method works well based on our experimental results.\n\n2., Another concern comes from using the proposed idea in training a GAN (Section 4.3). 
Generating fake images that are far away from the boundary (as forced by the first term of Equation 9) is somewhat opposite to the objective of GAN training, which aims at aligning distributions of real and fake images. Although the second term of Equation 9 tries to make the generated and the real images similar, the paper does not explain how to properly balance the two terms of Equation 9. As a result, I am worried that the proposed method may lead to more mode-collapsing for GAN.\nThe experimental evaluation seems solid for domain adaptation. The semi-supervised GANs part seemed significantly less developed and might be weakening rather than strengthening the paper. \n\nIf the goal is to train a GAN to mimic a distribution only, then our additional objective may not help, but if the goal is to learn features for semi-supervised learning, then our objective helps by forcing the GAN to not generate fake images near the boundary (ambiguous features).\n", "We uploaded an updated version of the paper with changes highlighted in blue.\n\nTo Reviewer 3 \n1, I think important work was missing in related work for domain adaptation. I think it's particularly important to talk about pixel/image-level adaptations eg CycleGAN/DiscoGAN etc and specifically as those were used for domain adaptation such as Domain Transfer Networks, PixelDA, etc. Other works like Ghifary et al, 2016, Bousmalis et al. 2016 could also be cited in the list of matching distributions in hidden layers of a CNN.\n\nWe will refer to such methods and compare with PixelDA as possible as we can. (Same question as Reviewer1, 1)\n\n2. Sect. 3 paragraph 2 should be much clearer, it was hard to understand.\n\nWe changed paragraph 2 of section 3.\n\n3. In Sect. 3.1 you mention that each node of the network is removed with some probability; this is not true. it's each node within a layer associated with dropout (unless you have dropout on every layer in the network). It also wasn't clear to me whether C_1 and C_2 are always different. If so, is the symmetric KL divergence still valid if it's minimizing the divergence of distributions that are different in every iteration? (Nit: capitalize Kullback Leibler)\n”It also wasn't clear to me whether C_1 and C_2 are always different”\n\n→C_1 and C_2 are not necessarily always different. C_1 and C_2 can be the same classifier. However, it rarely happens. \n “If so, is the symmetric KL divergence still valid if it's minimizing the divergence of distributions that are different in every iteration?”\n→Yes, we think it is valid. The generator tries to minimize the divergence. The divergence means the sensitivity to noise caused by dropout. The goal of minimizing it is to generate features that are insensitive to the dropout noise. We minimize the divergence of distributions that are different in almost every iteration. \n\n4. Eq.3 I think the minus should be a plus?\n\nNo. In Eq.3, we aim to maximize the sensitivity for classifiers. In this phase, the classifiers should be trained to be sensitive to the noise caused by dropout. Thus, the minus should be a minus. \n\n5. Fig.3 should be improved, it wasn't well presented and a few labels as to what everything is could help the reader significantly. It also seems that neuron 3 does all the work here, which was a bit confusing to me. Could you explain that?\n\nWe improved the presentation. Neuron 3 seems to be dominant in bottom row (our method. 
However, when comparing Neuron 3 and Column 6, the shape of boundary looks a little different because of the effect of other neurons. What we wanted to show here is that each neurons will learn different features by our method. We will improve our presentation.\n\nChange of paper\nAdd notation in Figure 3, add caption. \n\n6., On p.6 you discuss that you don't use a target validation set as in Saito et al. Is one really better than the other and why? In other words, how do you obtain these fixed hyperparameters that you use? \n\n\nThe main hyperparameter in our method is n, which indicates how many times to repeat Step 3 in our method. We set 4 in our experiments. Although we did not show in our experimental results, we tried other number such as 1,2,3. Through the experiment, we found that 4 works well in most settings. With regard to other hyperparameters, such as batch-size, learning rate, we used the ones that are common in other papers on domain adaptation. \nIf one uses a target val set (as in Saito et al.), then one assumes access to training labels on target, which we don’t want to assume in our setting.\n\n7. On p. 9 you claim that the unlabeled images should be distributed uniformly among the classes. Why is that?\n\nWe assumed that it is not desirable if unlabeled images are aligned with one class. We add this term following “Unsupervised and semi-supervised learning with categorical generative adversarial networks”. \n", "We uploaded an updated version of the paper with changes highlighted in blue.\n\nTo Reviewer 1\n1. By biggest concern is that the authors avoid comparing the method to the most recent state of the art approaches in unsupervised domain adaptation and yet claims \"achieved state of the art results on three datasets.\" in sec5. 1) Unsupervised Pixel-Level Domain Adaptation with Generative Adversarial Networks, Bousmalis et al. CVPR17, and 2) Learning Transferrable Representations for Unsupervised Domain Adaptation, Sener et al. NIPS16. Does the proposed method outperform these state of the art methods using the same network architectures?\n\nIn the updated version of our paper, we added new experimental results following the same setting as Bousmalis did (Table 1). Ours is slightly better on MNIST->USPS, but Bousmalis et al. don’t report on more difficult shifts where we achieve state of the art, as such SVHN->MNIST. In addition, we compared our method with Sener et al. NIPS16 in Table 1.\n\nChanges in the Paper\nIn Table 1, We added Sener NIPS16, for SVHN to MNIST. We also added results on MNIST to USPS to compare with Bousmalis CVPR 2016. Results of our method changed in the adaptation using USPS because we found a bug in preprocessing of USPS. According to the change, we replaced the graph of Fig4 (a)(b) and we changed the relevant sentences.\n\n2. I suggest the authors to rewrite the method section 3.2 so that the loss function depends on the optimization variables G,C. In the current draft, it's not immediately clear how the loss functions depend on the optimization variables. For example, in eqns 2,3,5, the minimization is over G,C but G,C do not appear anywhere in the equation. \n\nWe clarified notation of Eqns 2,3,5. \n\nChange of paper\nChange notation of Eqns 2,3,5.\n\n3. For the digits experiments, appendix B states \"we used exactly the same network architecture\". Well, which architecture was it?\n\nWe wanted to say that, for our baseline method, we used the same network architecture as our proposed method. 
We added this explanation.\n\nChange of paper.\nAdd sentence in the last of our appendix section (Digits Classification Training Detail).\n\n4. It's not clear what exactly the \"ENT\" baseline is. The text says \"(ENT) obtained by modifying (Springenberg 2015)\". I'd encourage the authors to make this part more explicit and self-explanatory.\n\nWe did explain it in the appendix, but we added sentences to make the method clearer.\n\nChange of paper\nAdd sentence in Section 2, Section 4.2. \n\n" ]
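The disagreement in the thread above hinges on the difference between the mean per-example entropy E_x[H(p(y|x))] and the entropy of the marginal class distribution H(E_x[p(y|x)]): only the latter measures whether predictions are distributed uniformly among the classes. A small numerical check of that distinction (illustrative only, tied to neither side's implementation):

import numpy as np

def entropy(p):
    return -np.sum(p * np.log(p + 1e-12))

# Two confident but class-balanced predictions: each per-example entropy is
# close to 0, yet the marginal class distribution is exactly uniform.
probs = np.array([[0.99, 0.01],
                  [0.01, 0.99]])
mean_per_example = np.mean([entropy(p) for p in probs])  # about 0.056 nats
marginal = entropy(probs.mean(axis=0))                    # about 0.693 nats (= log 2)
print(mean_per_example, marginal)

Maximizing the marginal entropy spreads predictions across classes, whereas maximizing the mean per-example entropy pushes each individual prediction toward the uniform distribution, i.e. toward the decision boundary — which is the contradiction the public commenter points out.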
[ -1, 5, 7, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 4, 3, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "Sy8PFkAmM", "iclr_2018_HJIoJWZCZ", "iclr_2018_HJIoJWZCZ", "iclr_2018_HJIoJWZCZ", "HytVlE6mG", "H1TwZ_HMG", "rJSsGZbmG", "rkiADJZmz", "HkW5HWbQz", "iclr_2018_HJIoJWZCZ", "HyVjZdBGz", "rJO3y_qgz", "Hy6M2mybG", "HJ4p6dFeG" ]
iclr_2018_r1lUOzWCW
Demystifying MMD GANs
We investigate the training and performance of generative adversarial networks using the Maximum Mean Discrepancy (MMD) as critic, termed MMD GANs. As our main theoretical contribution, we clarify the situation with bias in GAN loss functions raised by recent work: we show that gradient estimators used in the optimization process for both MMD GANs and Wasserstein GANs are unbiased, but learning a discriminator based on samples leads to biased gradients for the generator parameters. We also discuss the issue of kernel choice for the MMD critic, and characterize the kernel corresponding to the energy distance used for the Cramér GAN critic. Being an integral probability metric, the MMD benefits from training strategies recently developed for Wasserstein GANs. In experiments, the MMD GAN is able to employ a smaller critic network than the Wasserstein GAN, resulting in a simpler and faster-training algorithm with matching performance. We also propose an improved measure of GAN convergence, the Kernel Inception Distance, and show how to use it to dynamically adapt learning rates during GAN training.
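A minimal sketch of the Kernel Inception Distance mentioned in this abstract: the unbiased estimator of the squared MMD between two sets of Inception feature vectors. A cubic polynomial kernel is used below because that is the kernel commonly associated with KID; treat its exact constants here as assumptions rather than a guaranteed match to the paper.

import numpy as np

def polynomial_kernel(x, y):
    # Cubic polynomial kernel k(a, b) = (a . b / d + 1)^3 on d-dimensional features.
    d = x.shape[1]
    return (x @ y.T / d + 1.0) ** 3

def kid(real_feats, fake_feats):
    # Unbiased estimator of MMD^2: diagonal (self-similarity) terms are excluded
    # from the within-set averages, so the expectation is zero when both sets
    # are drawn from the same distribution.
    m, n = len(real_feats), len(fake_feats)
    k_rr = polynomial_kernel(real_feats, real_feats)
    k_ff = polynomial_kernel(fake_feats, fake_feats)
    k_rf = polynomial_kernel(real_feats, fake_feats)
    term_rr = (k_rr.sum() - np.trace(k_rr)) / (m * (m - 1))
    term_ff = (k_ff.sum() - np.trace(k_ff)) / (n * (n - 1))
    return term_rr + term_ff - 2.0 * k_rf.mean()

Unlike the FID, this estimate requires neither fitting Gaussians nor a matrix square root, which is part of why the authors argue in the discussion below that KID is easier and more intuitive to estimate.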
accepted-poster-papers
This paper does an excellent job at helping to clarify the relationship between various, recently proposed GAN models. The empirical contribution is small, but the KID metric will hopefully be a useful one for researchers. It would be really useful to show that it maintains its advantage when the dimensionality of the images increases (e.g., on Imagenet 128x128).
train
[ "SJsGyNugf", "rkkFfN5gz", "rJOKM41-M", "H1kV32sXz", "BkP7k4QQG", "HJ291EQ7z", "B1WFJNQ7f", "SkdLkVmXz", "S1b4HDuez", "S1YDTsmgG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "public" ]
[ "This paper claims to demystify MMD-GAN, a generative adversarial network with the maximum mean discrepancy (MMD) as a critic, by showing that the usual estimator for MMD yields unbiased gradient estimates (Theorem 1). It was noted by the authors that biased gradient estimate can cause problem when performing stochastic gradient descent, as also noted previously by Bellemare et al. The authors also proposed a kernel inception distance (KID) as a quantitative evaluation metric for GAN. The KID is defined to be the squared MMD between inception representation of the distributions. In experiments, the authors compared the quality of samples generated by MMD-GAN with various kernels with the ones generated from WGAN-GP (Gulrajani et al., 2017) and Cramer GAN (Bellemare et al., 2017). The empirical results show the benefits of using the MMD on top of deep convolutional features. \n\nThe major flaw of this paper is that its contribution is not really clear. Showing that the expectation and gradient can be interchanged (Theorem 1) does not seem to provide sufficient significance. Unbiasedness of the gradient alone does not guarantee that training will be successful and that the resulting models will better reflect the underlying data distribution, as evident by other successful variants of GANs, e.g., WGAN, which employ biased estimate. Indeed, since the training process relies on a small mini-batch, a small bias could help counteract the potentially high variance of the gradient estimate. The key is rather a good balance of both bias and variance during the training process and a guarantee that the estimate is asymptotically unbiased wrt the training iterations. Lastly, I do not see how the empirical results would demystify MMD-GANs, as claimed by the paper.\n\nThe paper is clearly written. \n\nSome minor comments:\n\n- The proof of the main result, Theorem 1, should be placed in the main paper.\n- Page 7, 2nd paragraph: later --> layer", "The quality and clarity of this work are very good. The introduction of the kernel inception metric is well-motivated and novel, to my knowledge. With the mention of a bit more related work (although this is already quite good), I believe that this could be a significant resource for understanding MMD GANs and how they fit into the larger model zoo.\n\nPros\n - best description of MMD GANs that I have encountered\n - good contextualization of related work and descriptions of relationships, at least among the works surveyed\n - reasonable proposed metric (KID) and comparison with other scores\n - proof of unbiased gradient estimates is a solid contribution\n\nCons\n - although the review of related work is very good, it does focus on ~3 recent papers. As a review, it would be nice to see mention (even just in a list with citations) of how other models in the zoo fit in\n - connection between IPMs and MMD gets a bit lost; a figure (e.g. flow chart) would help\n - wavers a bit between proposing/proving novel things vs. reviewing and lacks some overall structure/storyline\n - Figure 1 is a bit confusing; why is KID tested without replacement, and FID with? Why 100 vs 10 samples? The comparison is good to have, but it's hard to draw any insight with these differences in the subfigures. The figure caption should also explain what we are supposed to get out of looking at this figure.\n\nSpecific comments:\n - I suggest bolding terms where they are defined; this makes it easy for people to scan/find (e.g. 
Jensen-Shannon divergence, Integral Probability Metrics, witness functions, Wasserstein distance, etc.) \n - Although they are common knowledge in the field, because this is a review it could be helpful to provide references or brief explanations of e.g. JSD, KL, Wasserstein distance, RKHS, etc.\n - a flow chart (of GANs, IPMs, MMD, etc., mentioning a few more models than are discussed in depth here, would be *very* helpful.\n - page 2, middle paragraph, you mention \"...constraints to ensure the kernel distribution embeddings remained injective\"; it would be helpful to add a sentence here to explain why that's a good thing.\n", "The main contribution of the paper is that authors extend some work of Bellemare: they show that MMD GANs [which includes the Cramer GAN as a subset] do possess unbiased gradients. They provide a lot of context for the utility of this claim, and in the experiments section they provide a few different metrics for comparing GANs [as this is a known tricky problem]. The authors finally show that an MMD GAN can achieve comparable performance with a much smaller network used in the discriminator.\n\nAs previously mentioned, the big contribution of the paper is the proof that MMD GANs permit unbiased gradients. This is a useful result; however, given the lack of other outstanding theoretical or empirical results, it almost seems like this paper would be better shaped as a theory paper for a journal. I could be swayed to accept this paper however if others feel positive about it.\n\n", "We just uploaded a new revision with a minor improvement: Table 3 now contains test set metrics for the LSUN bedrooms dataset, for context, as for the other tables. All GAN models we considered obtain higher Inception scores than the test set, highlighting the inappropriateness of the Inception score for this dataset.", "Thanks to all for their comments. We just posted a new revision addressing many of the comments, as well as the following general improvements:\n\nFirst, a note on the bias situation. After submission, we cleaned up the proof of unbiasedness and, in doing so, noticed that we were able to generalize it significantly. Our unbiasedness result now covers nearly all feedforward neural network structures used in practice, rather than just ReLU networks as before. We also realized in this process that, with very little extra work, we could cover not just MMD GANs but also WGANs and even original GANs (with bounded discriminator outputs to avoid the logs blowing up). This at first seems counterintuitive, since of course the Cramér GAN paper showed that Wasserstein has biased sample gradients. We have thus added a detailed description of the relationship to the theory section and to the new Appendix B. In short: with a fixed kernel, the MMD estimator is unbiased, but the estimator of the supremum over kernels of the MMD is biased. Likewise, with a fixed critic function, the Wasserstein estimator (such as it is) is unbiased, but the estimator of the supremum over critic functions (the actual Wasserstein) is biased. Thus the bias situation is more analogous between the two models than had been previously thought, which our paper now helps to substantially clarify.\n\nWe also cleaned up the experimental results somewhat, including new results on LSUN that we didn't have time to finish for the initial submission. 
While doing that, we also used the KID in a new way: to dynamically adapt the learning rate based on the similarity of the generator's model to the training set, using the relative similarity test of https://arxiv.org/abs/1511.04581. This is similar to popular schemes used in supervised learning based on validation set accuracy, and allows for less manual tuning of the learning rate decay (which can be very important, and differ between models).", "Thanks for your comments. We've posted a new revision addressing most of them; see also our separate comment describing other significant improvements.\n\n- Review of related work: thanks for the suggestion. We have added a brief section 2.4 with some more related work; we would be happy to add more if you have some other suggestions.\n\n- We have attempted to slightly clarify the description of IPMs in this revision, and will further consider better ways to do this.\n\n- KID/FID comparison figure: We agree that this difference is confusing. It was done because the standard KID estimator becomes biased when there are repeated points due to sampling with replacement, but of course when sampling 10,000 / 10,000 points without replacement, it is unsurprising that there is no variance in the estimate, so it made more sense for the point we were trying to make to evaluate FID with replacement. The difference in number of samples was due to the relatively higher computational expense of the FID (which requires the SVD of a several thousand dimensional-matrix), but we have increased that to the same number of samples as well. The figures look essentially identical changing either of these issues; we have changed to using a variant of the KID estimator which is still unbiased for samples with replacement and clarified the caption.\n\n- We have added a footnote on why injectivity of the distribution embeddings is desirable.", "Thanks for your comments, and please also see our comments about improvements in the new revision above.\n\nYou are certainly correct that an unbiased gradient does not guarantee that training will succeed; our recent revision also substantially clarifies the bias situation. However, in SGD the bias-variance tradeoff is somewhat different than the situation in e.g. ridge regression, where the regularization procedure adds some bias but also reduces variance enough that it is worthwhile. There doesn't seem to be any reason to think that the gradient variance is any higher for MMD GANs than for WGANs, and so a direct analogy doesn't quite apply. Also, when performing SGD, the biases of each step might add up over time, and so – as in Bellemare et al.'s example – following biased gradients is worth at least some level of concern.\n\nWith regards to the rest of the contribution: the title \"demystifying\" was intended more for the earlier parts of the paper, which elucidate the relationship of MMD GANs to other models and (especially in the revision) clarify the nature of the bias argument of Bellemare et al. The empirical results perhaps do not directly \"demystify,\" but rather bring another level of understanding of these models.", "Thanks for your comments. 
We do feel that this paper has contributions outside just the proof of unbiased gradients, in particular clarifying the relationship among various slightly-different GAN models, the KID score, and the new experimental results about success with smaller critic networks, which are of interest to the ICLR community.\n\nPlease also see our general comments about the new revision above, which includes substantial improvements.\n", "It's true that the model we consider here is also described in Section 5.5 of the latest version of Li et al. This result was not in the original version of the paper, however, and only appeared in the revised arXiv submission of November 6, a week and a half after the ICLR deadline; we were not aware of the new version until you pointed it out (thanks for doing so).\n\nThat said, the main point of our paper is not to propose yet another GAN variation (YAGAN?). The idea of using an MMD as critic with a kernel defined on deep convolutional features is not new, nor is the idea of regularizing the gradient of the critic witness function: we cite the papers where (to our knowledge) these ideas were first proposed. The point of our paper is to understand (and \"demystify\") the MMD GAN, and its relation with other integral probability metric-based GANs. In this direction, our new results are in three areas:\n\n* We clarify the relationship between MMD GANs and Cramér GANs, and the relationship of the MMD GAN critic and witness function to those of WGAN-GPs (thus explaining why the gradient penalty makes sense for the MMD GAN).\n\n* We formally show the unbiasedness of the gradient of the MMD estimator wrt the network parameters, in Theorem 1. This is our main theoretical result, and an important property to establish when the MMD is used as a GAN critic.\n\n* Our main new experimental finding is that MMD GANs seem to work about as well as WGAN-GPs that use much larger critic networks. Thus, for a given generator, MMD GANs will be simpler and faster to train than WGAN-GPs. Our understanding of why this happens is described in detail in the paper.\n\nAlong the way, we also proposed the KID score, which is a more natural metric of generative model convergence than the Inception score, and inherits many of the nice properties of the previously-proposed FID, but is much easier and more intuitive to estimate.", "What is the main difference between this paper and the MMD GAN paper [Li et al. 2017]. For my understanding, it is just the MMD GAN work which replaced clipping with gradient penalty. GP was also mentioned and tried in [Li et al. 2017] (last version on arXiv) https://arxiv.org/pdf/1705.08584.pdf " ]
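For readers following the KID discussion in this thread, the sketch below shows the standard unbiased estimator of MMD^2 that the responses refer to, together with a simple polynomial kernel. This is our own illustration, not the authors' code: the kernel defaults (degree 3, gamma = 1/d, offset 1) are an assumed KID-style configuration rather than something stated in the exchange above.

import numpy as np

def poly_kernel(X, Y, degree=3, gamma=None, coef0=1.0):
    # (gamma * <x, y> + coef0) ** degree, with gamma defaulting to 1/d.
    d = X.shape[1]
    g = (1.0 / d) if gamma is None else gamma
    return (g * X @ Y.T + coef0) ** degree

def mmd2_unbiased(X, Y, kernel=poly_kernel):
    # Unbiased estimator of MMD^2 between samples X (m x d) and Y (n x d):
    # the diagonal (self-similarity) terms are excluded from the
    # within-sample averages, which is what makes the estimate unbiased
    # for a fixed kernel.
    Kxx, Kyy, Kxy = kernel(X, X), kernel(Y, Y), kernel(X, Y)
    m, n = len(X), len(Y)
    term_x = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
    term_y = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    return term_x + term_y - 2.0 * Kxy.mean()

Applied to feature representations of real and generated images (and averaged over subsamples), this quantity plays the role of the KID-style score discussed above.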
[ 4, 7, 6, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 2, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_r1lUOzWCW", "iclr_2018_r1lUOzWCW", "iclr_2018_r1lUOzWCW", "BkP7k4QQG", "iclr_2018_r1lUOzWCW", "rkkFfN5gz", "SJsGyNugf", "rJOKM41-M", "S1YDTsmgG", "iclr_2018_r1lUOzWCW" ]
iclr_2018_Hk5elxbRW
Smooth Loss Functions for Deep Top-k Classification
The top-k error is a common measure of performance in machine learning and computer vision. In practice, top-k classification is typically performed with deep neural networks trained with the cross-entropy loss. Theoretical results indeed suggest that cross-entropy is an optimal learning objective for such a task in the limit of infinite data. In the context of limited and noisy data however, the use of a loss function that is specifically designed for top-k classification can bring significant improvements. Our empirical evidence suggests that the loss function must be smooth and have non-sparse gradients in order to work well with deep neural networks. Consequently, we introduce a family of smoothed loss functions that are suited to top-k optimization via deep learning. The widely used cross-entropy is a special case of our family. Evaluating our smooth loss functions is computationally challenging: a naïve algorithm would require O(\binom{n}{k}) operations, where n is the number of classes. Thanks to a connection to polynomial algebra and a divide-and-conquer approach, we provide an algorithm with a time complexity of O(kn). Furthermore, we present a novel approximation to obtain fast and stable algorithms on GPUs with single floating point precision. We compare the performance of the cross-entropy loss and our margin-based losses in various regimes of noise and data size, for the predominant use case of k=5. Our investigation reveals that our loss is more robust to noise and overfitting than cross-entropy.
accepted-poster-papers
The submission proposes a loss surrogate for top-k classification, as in the official imagenet evaluation. The approach is well motivated, and the paper is very well organized with thorough technical proofs in the appendix, and a well presented main text. The main results are: 1) a theoretically motivated surrogate, 2) that gives up to a couple percent improvement over cross-entropy loss in the presence of label noise or smaller datasets. It is a bit disappointing that performance is limited in the ideal case and that it does not more gracefully degrade to epsilon better than cross entropy loss. Rather, it seems to give performance epsilon worse than cross-entropy loss in an ideal case with clean labels and lots of data. Nevertheless, it is a step in the right direction for optimizing the error measure to be used during evaluation. The reviewers uniformly recommended acceptance.
train
[ "HykoG7Oef", "BJbqjU0eM", "ryOmoYZZM", "HJfBTmUQz", "S1Ori7U7z", "SyFTqXImz", "H15E5Q8mz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "The paper is clear and well written. The proposed approach seems to be of interest and to produce interesting results. As datasets in various domain get more and more precise, the problem of class confusing with very similar classes both present or absent of the training dataset is an important problem, and this paper is a promising contribution to handle those issues better.\n\nThe paper proposes to use a top-k loss such as what has been explored with SVMs in the past, but with deep models. As the loss is not smooth and has sparse gradients, the paper suggests to use a smoothed version where maximums are replaced by log-sum-exps.\n\nI have two main concerns with the presentation.\n\nA/ In addition to the main contribution, the paper devotes a significant amount of space to explaining how to compute the smoothed loss. This can be done by evaluating elementary symmetric polynomials at well-chosen values.\n\nThe paper argues that classical methods for such evaluations (e.g., using the usual recurrence relation or more advanced methods that compensate for numerical errors) are not enough when using single precision floating point arithmetic. The paper also advances that GPU parallelization must be used to be able to efficiently train the network.\n\nThose claims are not substantiated, however, and the method proposed by the paper seems to add substantial complexity without really proving that it is useful.\n\nThe paper proposes a divide-and-conquer approach, where a small amount of parallelization can be achieved within the computation of a single elementary symmetric polynomial value. I am not sure why this is of interest - can't the loss evaluation already be parallelized trivially over examples in a training/testing minibatch? I believe the paper could justify this approach better by providing a bit more insights as to why it is required. For instance:\n\n- What accuracies and train/test times do you get using standard methods for the evaluation of elementary symmetric polynomials?\n- How do those compare with CE and L_{5, 1} with the proposed method?\n- Are numerical instabilities making this completely unfeasible? This would be especially interesting to understand if this explodes in practice, or if evaluations are just a slightly inaccurate without much accuracy loss.\n\n\nB/ No mention is made of the object detection problem, although multiple of the motivating examples in Figure 1 consider cases that would fall naturally into the object detection framework. Although top-k classification considers in principle an easier problem (no localization), a discussion, as well as a comparison of top-k classification vs., e.g., discarding localization information out of object detection methods, could be interesting.\n\nAdditional comments:\n\n- Figure 2b: this visualization is confusing. This is presented in the same figure and paragraph as the CIFAR results, but instead uses a single synthetic data point in dimension 5, and k=1. This is not convincing. An actual experiment using full dataset or minibatch gradients on CIFAR and the same k value would be more interesting.\n\n", "This paper made some efforts in smoothing the top-k losses proposed in Lapin et al. (2015). A family of smooth surrogate loss es was proposed, with the help of which the top-k error may be minimized directly. The properties of the smooth surrogate losses were studied and the computational algorithms for SVM with these losses function were also proposed. 
\n\nPros:\n1, The paper is well presented and is easy to follow.\n2, The contribution made in this paper is sound, and the mathematical analysis seems to be correct. \n3, The experimental results look convincing. \n\nCons:\nSome statements in this paper are not clear to me. For example, the authors mentioned sparse or non-sparse loss functions. This statement, in my view, could be misleading without further explanation (the non-sparse loss was mentioned in the abstract).\n", "This paper introduces a smooth surrogate loss function for the top-k SVM, for the purpose of plugging the SVM into deep neural networks. The idea is to replace the order statistic, which is not smooth and has a lot of zero partial derivatives, with the exponential of averages, which is smooth and is a good approximation of the order statistic by a good selection of the \"temperature parameter\". The paper is well organized and clearly written. The idea deserves publication.\n\nOn the other hand, there might be better and more direct solutions to reduce the combinatorial complexity. When the temperature parameter is small enough, both the original top-k SVM surrogate loss (6) and the smooth loss (9) can be computed precisely by sorting the vector s first and taking care of the boundary around s_{[k]}.", "We thank the reviewer for the detailed comments. We answer each of the reviewer’s concerns:\n\n\nA/ \nThe reviewer rightly points out the two key aspects in the design of an efficient algorithm in our case: (i) numerical stability and (ii) speed. We have implemented the alternative Summation Algorithm (SA), and we have added a new section in the appendix to compare it to our method, on numerical stability and speed. On both aspects, experimental results demonstrate the advantages of the Divide and Conquer (DC) algorithm over SA in our use case.\n\nHere are some highlights of the discussion:\n(i) We emphasize the distinction between numerical accuracy and stability. To a large extent, high levels of accuracy are not needed for the training of neural networks, as long as the directions of gradients are unaffected by the errors. Stability is crucial, however, especially in our case where the evaluation of the elementary symmetric polynomials is prone to overflow. When the loss function overflows during training, the weights of the neural network diverge and any learning becomes impossible. \nWe discuss the stability of our method in Appendix D.2. In summary, the summation algorithm starts to overflow for tau <= 0.1 in single precision and 0.01 in double precision. It is worth noting that compensation algorithms are unlikely to help avoid such overflows (they would only improve accuracy in the absence of overflow). Our algorithm, which operates in log-space, is stable for any reasonable value of tau (it starts to overflow in single-float precision for tau lower than 1e-36).\n\n(ii) The reviewer is correct that the computation of the loss can be trivially parallelized over the samples of a minibatch, and this is exploited in our implementation. However, we can push the parallelization further within the DC algorithm for each sample of a minibatch. Indeed, inside each recursion of the Divide-and-Conquer (DC) algorithm, all polynomial multiplications are performed in parallel, and there are only O(log(C)) levels of recursion. On the other hand, most of the operations of the summation algorithm are essentially sequential (see Appendix D.1) and do not benefit from the available parallelization capabilities of GPUs. 
We illustrate this with numerical timing of the loss evaluation on GPU, with a batch size of 256, k=5 and a varying number of classes C:\n\n \t C=100\tC=1,000 C=10,000 C=100,000\nSummation\t0.006\t0.062\t 0.627\t 6.258\nDC\t 0.011 \t0.018\t 0.024\t 0.146\n\nThis shows that in practice, parallelization of DC offers near logarithmic rather than linear scaling of C, as long as the computations are not saturating the device capabilities. \n\nB/ We believe that the differences between top-k classification and detection make it difficult to perform a fair comparison between the two methods. In particular, detection methods require significantly more annotation (label and set of bounding boxes per instance to detect) than top-k classification (single image-level label). Furthermore, detection models are most often pre-trained on classification and then fine-tuned on detection, which entangles the influence of both learning tasks on the resulting model.\n\nAdditional comments: We thank the reviewer for this useful suggestion. We have changed Figure 2.b) to visualize the sparsity of the derivatives on real data.\n", "We thank the reviewer for the feedback. In the abstract we mean the sparsity of the derivatives. We have changed statements accordingly in the paper. We would be grateful if the reviewers could indicate further sources of confusion in the paper, which we will correct in subsequent versions.\n", "We thank the reviewer for the feedback. Is the reviewer suggesting to select scores that are large enough to have a non-negligible impact on the value of the loss? If that is the case, this is indeed an interesting approach for an approximate algorithm if the exact computation happens to be too expensive in practice. In our case, we are able to perform exact evaluations of the elementary symmetric polynomials. We further point out that for such an approach, it may be more efficient to compute a chosen number of the largest scores rather than to perform a full sorting (time complexity in O(C) instead of O(C log C)).", "We thank all the reviewers for their helpful comments. We have revised the paper, with the following main changes:\n- Improved visualization in Figure 2, as suggested by Reviewer 1.\n- Comparison with the Summation Algorithm in a new Appendix D, as suggested by Reviewer 1. We demonstrate the practical advantages of the divide-and-conquer algorithm for our use cases on GPU.\n- Formal proof of Lemma 3 instead of a sketch of proof.\n- Improved results on top-5 error on ImageNet: with a better choice of the temperature parameter, we have improved the results of our method. Our method now obtains on-par performance with CE when all the data is available, and still outperforms it on subsets of the dataset.\n\n" ]
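To make the algorithmic point in the exchange above concrete, here is a minimal NumPy sketch (ours, not the authors' implementation) of the two ways of evaluating the elementary symmetric polynomials of a score vector: the sequential summation recurrence and the divide-and-conquer scheme that multiplies the linear polynomials (1 + s_i x) pairwise and truncates to degree k. The log-space stabilization the authors rely on for numerical stability is deliberately omitted; only the O(kn) arithmetic structure and the O(log n) parallel depth are shown.

import numpy as np

def esp_recurrence(s, k):
    # "Summation" recurrence: after seeing score s_i, update
    # e[j] <- e[j] + s_i * e[j-1], from j = k down to 1.
    # O(k*n) work, but inherently sequential over the n scores.
    e = np.zeros(k + 1)
    e[0] = 1.0
    for si in s:
        for j in range(k, 0, -1):
            e[j] += si * e[j - 1]
    return e  # e[j] = elementary symmetric polynomial of degree j

def esp_divide_and_conquer(s, k):
    # The product of the linear polynomials (1 + s_i * x) has the elementary
    # symmetric polynomials as its coefficients. Multiply them pairwise,
    # level by level (O(log n) levels), truncating every product to degree k.
    polys = [np.array([1.0, si]) for si in s]
    while len(polys) > 1:
        nxt = [np.convolve(a, b)[: k + 1] for a, b in zip(polys[0::2], polys[1::2])]
        if len(polys) % 2 == 1:
            nxt.append(polys[-1])
        polys = nxt
    e = np.zeros(k + 1)
    e[: len(polys[0])] = polys[0][: k + 1]
    return e

scores = np.random.rand(10)
assert np.allclose(esp_recurrence(scores, 5), esp_divide_and_conquer(scores, 5))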
[ 6, 7, 8, -1, -1, -1, -1 ]
[ 5, 4, 4, -1, -1, -1, -1 ]
[ "iclr_2018_Hk5elxbRW", "iclr_2018_Hk5elxbRW", "iclr_2018_Hk5elxbRW", "HykoG7Oef", "BJbqjU0eM", "ryOmoYZZM", "iclr_2018_Hk5elxbRW" ]
iclr_2018_B1Lc-Gb0Z
Deep Learning as a Mixed Convex-Combinatorial Optimization Problem
As neural networks grow deeper and wider, learning networks with hard-threshold activations is becoming increasingly important, both for network quantization, which can drastically reduce time and energy requirements, and for creating large integrated systems of deep networks, which may have non-differentiable components and must avoid vanishing and exploding gradients for effective learning. However, since gradient descent is not applicable to hard-threshold functions, it is not clear how to learn them in a principled way. We address this problem by observing that setting targets for hard-threshold hidden units in order to minimize loss is a discrete optimization problem, and can be solved as such. The discrete optimization goal is to find a set of targets such that each unit, including the output, has a linearly separable problem to solve. Given these targets, the network decomposes into individual perceptrons, which can then be learned with standard convex approaches. Based on this, we develop a recursive mini-batch algorithm for learning deep hard-threshold networks that includes the popular but poorly justified straight-through estimator as a special case. Empirically, we show that our algorithm improves classification accuracy in a number of settings, including for AlexNet and ResNet-18 on ImageNet, when compared to the straight-through estimator.
accepted-poster-papers
The submission proposes optimization with hard-threshold activations. This setting can lead to compressed networks, and is therefore an interesting setting if learning can be achieved feasibly. This leads to a combinatorial optimization problem due to the non-differentiability of the non-linearity. The submission proceeds to analyze the resulting problem and propose an algorithm for its optimization. Results show slight improvement over a recent variant of straight-through estimation (Hinton 2012, Bengio et al. 2013), called saturated straight-through estimation (Hubara et al., 2016). Although the improvements are somewhat modest, the submission is interesting for its framing of an important problem and improvement over a popular setting.
train
[ "SJ7YJpueM", "Byn3CAYlM", "BkOfh_eWM", "B168vlvMz", "HJDD0JDGM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author" ]
[ "The paper studies learning in deep neural networks with hard activation functions, e.g. step functions like sign(x). Of course, backpropagation is difficult to adapt to such networks, so prior work has considered different approaches. Arguably the most popular is straight-through estimation (Hinton 2012, Bengio et al. 2013), in which the activation functions are simply treated as identity functions during backpropagation. More recently, a new type of straight-through estimation, saturated STE (Hubara et al., 2016) uses 1[|z|<1] as the derivative of sign(z).\n\nThe paper generalizes saturated STE by recognizing that other discrete targets of each activation layer can be chosen. Deciding on these targets is formulated as a combinatorial optimization problem. Once the targets are chosen, updating the weights of each layer to minimize the loss on those targets is a convex optimization. The targets are heuristically updated through the layers, starting out the output using the proposed feasibility target propagation. At each layer, the targets can be chosen using a variety of search algorithms such as beam search.\n\nExperiments show that FTP often outperforms saturated STE on CIFAR and ImageNet with sign and quantized activation functions, reaching levels of performance closer to the full-precision activation networks.\n\nThis paper's ideas are very interesting, exploring an alternative training method to backpropagation that supports hard-threshold activation functions. The experimental results are encouraging, though I have a few questions below that prevent me for now from rating the paper higher.\n\nComments and questions:\n\n1) How computationally expensive is FTP? The experiments using ResNet indicate it is not prohibitively expensive, but I am eager for more details.\n\n2) Does (Hubara et al., 2016) actually compare their proposed saturated STE with the orignal STE on any tasks? I do not see a comparison. If that is so, should this paper also compare with STE? How do we know if generalizing saturated STE is more worthwhile than generalizing STE?\n\n3) It took me a while to understand the authors' subtle comparison with target propagation, where they say \"Our framework can be viewed as an instance of target propagation that uses combinatorial optimization to set discrete targets, whereas previous approaches employed continuous optimization.\" It seems that the difference is greater than explicitly stated, that prior target propagation used continuous optimization to set *continuous targets*. (One could imagine using continuous optimization to set discrete targets such as a convex relaxation of a constraint satisfaction problem.) Focusing on discrete targets gains the benefits of quantized networks. If I am understanding the novelty correctly, it would strengthen the paper to make this difference clear.\n\n4) On a related note, if feasible target propagation generalizes saturated straight through estimation, is there a connection between (continuous) target propagation and the original type of straight through estimation?\n\n5) In Table 1, the significance of the last two columns is unclear. It seems that ReLU and Saturated ReLU are included to show the performance of networks with full-precision activation functions (which is good). 
I am unclear though on why they are compared against each other (bolding one or the other) and if there is some correspondence between those two columns and the other pairs, i.e., is ReLU some kind of analog of SSTE and Saturated ReLU corresponds to FTP-SH somehow?", "This paper examines the problem of optimizing deep networks of hard-threshold units. This is a significant topic with implications for quantization for computational efficiency, as well as for exploring the space of learning algorithms for deep networks. While none of the contributions are especially novel, the analysis is clear and well-organized, and the authors do a nice job in connecting their analysis to other work. ", "The paper discusses the problem of optimizing neural networks with hard threshold and proposes a novel solution to it. The problem is of significance because in many applications one requires deep networks which uses reduced computation and limited energy. The authors frame the problem of optimizing such networks to fit the training data as a convex combinatorial problems. However since the complexity of such a problem is exponential, the authors propose a collection of heuristics/approximations to solve the problem. These include, a heuristic for setting the targets at each layer, using a soft hinge loss, mini-batch training and such. Using these modifications the authors propose an algorithm (Algorithm 2 in appendix) to train such models efficiently. They compare the performance of a bunch of models trained by their algorithm against the ones trained using straight-through-estimator (SSTE) on a couple of datasets, namely, CIFAR-10 and ImageNet. They show superiority of their algorithm over SSTE. \n\nI thought the paper is very well written and provides a really nice exposition of the problem of training deep networks with hard thresholds. The authors formulation of the problem as one of combinatorial optimization and proposing Algorithm 1 is also quite interesting. The results are moderately convincing in favor of the proposed approach. Though a disclaimer here is that I'm not 100% sure that SSTE is the state of the art for this problem. Overall i like the originality of the paper and feel that it has a potential of reasonable impact within the research community. \n\nThere are a few flaws/weaknesses in the paper though, making it somewhat lose. \n- The authors start of by posing the problem as a clean combinatorial optimization problem and propose Algorithm 1. Realizing the limitations of the proposed algorithm, given the assumptions under which it was conceived in, the authors relax those assumptions in the couple of paragraphs before section 3.1 and pretty much throw away all the nice guarantees, such as checks for feasibility, discussed earlier. \n- The result of this is another algorithm (I guess the main result of the paper), which is strangely presented in the appendix as opposed to the main text, which has no such guarantees. \n- There is no theoretical proof that the heuristic for setting the target is a good one, other than a rough intuition\n- The authors do not discuss at all the impact on generalization ability of the model trained using the proposed approach. The entire discussion revolves around fitting the training set and somehow magically everything seem to generalize and not overfit. \n", "Thank you for your review. 
We respond to each of your questions below.\n\n1) FTP-SH is no more expensive than backprop (in the same way that SSTE isn’t either, and SSTE is a special case of FTPROP-MB). The only added cost is that the soft hinge loss requires computing an exponential, which is slower than a max (i.e., the cost of computing a sigmoid vs. a ReLU), but this is a minor difference in compute time.\n\n2) In the experiments, Hubara et al. (2016) does not compare SSTE and STE directly, but in the text of the paper they report that “Not [saturating] the gradient when [the input] is too large significantly worsens performance.” This is also what we found in preliminary experiments, where the unsaturated STE is significantly worse than STE. Note, however, that STE is also a special case of our framework where the loss function is just loss(z, t) = -zt, so we generalize that as well (and pretty much any type of STE can be obtained by choosing different losses in our framework).\n\n3) Yes, this is a good point and correct. We will update the paper to make this fact more clear. Thank you.\n\n4) It’s possible, although if so it’s not an obvious connection, and we haven’t studied this issue in detail yet.\n\n5) Yes, good point. This is somewhat confusing, and we will clarify it in the paper and remove the bolding, since the goal isn’t really to compare them against each other (although it is mildly interesting that saturating the ReLU improves performance in some cases). There is no correspondence between those two columns and the other pairs; the formatting of the table is just unclear.", "Thank you for your review. We respond to each of your questions and comments below.\n\nBased on the quantization literature and other hard-threshold papers that we looked at and cited, SSTE is (by far) the most widely used method. It’s true that there are many variations of the straight-through estimator (STE), but we compare to the main one (SSTE), and don’t know of any that outperform SSTE. Note that neither STE nor SSTE has convergence guarantees (they’re biased estimators) but SSTE at least works well in practice.\n\nWhile we agree that it would be nice to have better guarantees for FTPROP-MB, it is typical in AI and combinatorial search (as you likely know) to start from a theoretically-justified approach and then use that to define a more heuristic approach that sacrifices those guarantees in favor of efficiently achieving the desired property (i.e., feasibility), as we do here. Since the problem we are solving is NP-complete and (most likely) hard to approximate, heuristics are unavoidable. By using the (soft) hinge loss at each layer, FTPROP-MB is implicitly trying to maximize “soft feasibility” of the network because of the correspondence between the hinge loss and margin maximization. \n\nFurther, while feasibility is important for understanding the solution we propose, giving it up is necessary to avoid overfitting. This is similar to the linear-separability property of the perceptron where the robust method for learning a perceptron is to use a hinge loss instead of the perceptron criterion. We intend to further study the properties of FTPROP and (soft) feasibility in the future.\n\nWe did not put the FTPROP-MB pseudocode in the main paper because it’s such a simple algorithm and we were running short on space, but we can move it to the main body.\n\nSpace limitations also precluded further discussions of generalization ability. 
We used standard approaches to avoid overfitting (L2 regularization, mini-batching, hinge vs. perceptron criterion, etc.), which we mention in the paper (but can make more clear) and which account for the good generalization performance." ]
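As a concrete reference for the saturated straight-through estimator discussed in the reviews above (sign function in the forward pass, gradient passed through only where the pre-activation is small), here is a minimal PyTorch sketch. It is our own illustration of the Hubara et al. (2016) estimator, not code from the paper; we use the 1[|z| <= 1] variant (the review writes |z| < 1; the boundary makes no practical difference).

import torch

class SaturatedSTESign(torch.autograd.Function):
    # Forward: hard-threshold activation sign(z).
    # Backward: pass the incoming gradient through only where |z| <= 1,
    # i.e. use 1[|z| <= 1] as a surrogate derivative of sign(z).
    @staticmethod
    def forward(ctx, z):
        ctx.save_for_backward(z)
        return torch.sign(z)

    @staticmethod
    def backward(ctx, grad_output):
        (z,) = ctx.saved_tensors
        return grad_output * (z.abs() <= 1.0).to(grad_output.dtype)

z = torch.randn(5, requires_grad=True)
SaturatedSTESign.apply(z).sum().backward()
# z.grad equals the upstream gradient where |z| <= 1 and 0 elsewhere.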
[ 7, 7, 7, -1, -1 ]
[ 4, 4, 3, -1, -1 ]
[ "iclr_2018_B1Lc-Gb0Z", "iclr_2018_B1Lc-Gb0Z", "iclr_2018_B1Lc-Gb0Z", "SJ7YJpueM", "BkOfh_eWM" ]
iclr_2018_H1WgVz-AZ
Learning Approximate Inference Networks for Structured Prediction
Structured prediction energy networks (SPENs; Belanger & McCallum 2016) use neural network architectures to define energy functions that can capture arbitrary dependencies among parts of structured outputs. Prior work used gradient descent for inference, relaxing the structured output to a set of continuous variables and then optimizing the energy with respect to them. We replace this use of gradient descent with a neural network trained to approximate structured argmax inference. This “inference network” outputs continuous values that we treat as the output structure. We develop large-margin training criteria for joint training of the structured energy function and inference network. On multi-label classification we report speed-ups of 10-60x compared to (Belanger et al., 2017) while also improving accuracy. For sequence labeling with simple structured energies, our approach performs comparably to exact inference while being much faster at test time. We then demonstrate improved accuracy by augmenting the energy with a “label language model” that scores entire output label sequences, showing it can improve handling of long-distance dependencies in part-of-speech tagging. Finally, we show how inference networks can replace dynamic programming for test-time inference in conditional random fields, suggestive for their general use for fast inference in structured settings.
accepted-poster-papers
The submission modifies the SPEN framework for structured prediction by adding an inference network in place of the usual combinatorial optimization based inference. The resulting architecture has some similarity to a GAN, and significantly increases the speed of inference. The submission provides links between two seemingly different frameworks: SPENs and GANs. By replacing inference with a network output, the connection is made, but importantly, this massively speeds up inference and may mark an important step forward in structured prediction with deep learning.
train
[ "H12sn0dlf", "Sk0CEftxG", "HytdnPcgG", "rJKCC5GNM", "rJsf2997M", "rJl1nqq7f", "B1ixj9cmM", "H1-Y5qqXM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "= Quality = \nOverall, the authors do a good job of placing their work in the context of related research, and employ a variety of non-trivial technical details to get their methods to work well. \n\n= Clarity = \n\nOverall, the exposition regarding the method is good. I found the setup for the sequence tagging experiments confusing, tough. See more comments below.\n\n= Originality / Significance = \n\nThe paper presents a clever idea that could help make SPENs more practical. The paper's results also suggest that we should be thinking more broadly about how to using complicated structured distributions as teachers for model compression.\n\n= Major Comment =\n\nI'm concerned by the quality of your results and the overall setup of your experiments. In particular, the principal contribution of the sequence tagging experiments seems top be different than what is advertised earlier on in the paper. \n\nMost of your empirical success is obtained by taking a pretrained CRF energy function and using this as a teacher model to train a feed-forward inference network. You have have very few experiments using a SPEN energy function parametrization that doesn't correspond to a CRF, even though you could have used an arbitrary convnet, RNN, etc. The one exception is when you use the tag language model. This is a good idea, but it is pretrained, not trained using the saddle-point objective you introduce. In fact, you don't have any results demonstrating that the saddle-point approach is better than simpler alternatives.\n\nIt seems that you could have written a very different paper about model compression with CRFs that would have been very interesting and you could've have used many of the same experiments. It's unclear why SPENs are so important. The idea of amortizing inference is perhaps more general. My recommendation is that you either rebrand the paper to be more about general methods for amortizing structured prediction inference using model compression or do more fine-grained experiments with SPENs that demonstrate empirical gains that leverage their flexible deep-network-based energy functions.\n\n\n= Minor Comments = \n\n* You should mention 'Energy Based GANs\"\n\n* I don't understand \"This approach performs backpropagation through each step of gradient descent, permitting more stable training but also evidently more overfitting.\" Why would it overfit more? Simply because training was more stable? Couldn't you prevent overfitting by regularizing more?\n\n* You spend too much space talking about specific hyperparameter ranges, etc. This should be moved to the appendix. You should also add a short summary of the TLM architecture to the main paper body.\n\n* Regarding your footnote discussing using a positive vs. negative sign on the entropy regularization term, I recommend checking out \"Regularizing neural networks by penalizing confident output distributions.\"\n\n* You should add citations for the statement \"In these and related settings, gradient descent has started to be replaced by inference networks.\"\n\n* I didn't find Table 1 particularly illuminating. All of the approaches seem to perform about the same. What conclusions should I make from it?\n\n* Why not use KL divergence as your \\Delta function?\n\n* Why are the results in Table 5 on the dev data?\n\n* I was confused by Table 4. First of all, it took me a very long time to figure out that the middle block of results corresponds to taking a pretrained CRF energy and amortizing inference by training an inference network. 
This idea of training with a standard loss (conditional log lik.) and then amortizing inference post-hoc was not explicitly introduced as an alternative to the saddle point objective you put forth earlier in the paper. Second, I was very surprised that the inference network outperformed Viterbi (89.7 vs. 89.1 for the same CRF energy). Why is this?\n\n* I'm confused by the difference between Table 6 and Table 4? Why not just include the TLM results in Table 4?\n\n\n\n\n\n\n", "The paper proposes training ``inference networks,'' which are neural network structured predictors. The setup is analogous to generative adversarial networks, where the role of the discriminator is played by a structured prediction energy network (SPEN) and the generator is played by an inference network.\n\nThe idea is interesting. It could be viewed as a type of adversarial training for large-margin structured predictors, where counterexamples, i.e., structures with high loss and low energy, cannot be found by direct optimization. However, it remains unclear why SPENs are the right choice for an energy function.\n\nExperiments suggest that it can result in better structured predictors than training models directly via backpropagation gradient descent. However, the experimental results are not clearly presented. The clarity is poor enough that the paper might not be ready for publication.\n\nComments and questions:\n\n1) It is unclear whether this paper is motivated by training SPENs or by training structured predictors. The setup focuses on using SPENs as an inference network, but this seems inessential. Experiments with simpler energy functions seem to be absent, though the experiments are unclear (see below).\n\n2) The confusion over the motivation is confounded by the fact that the experiments are very unclear. Sometimes predictions are described as the output of SPENs (Tables 2, 3, 4, and 7), sometimes as inference networks (Table 5), and sometimes as a CRF (Tables 4 and 6). In 7.2.2 it says that a BiLSTM is used for the inference network in Twitter POS tagging, but Tables 4 and 6 indicate both CRFs and BiLSTMS? It is also unclear when a model, e.g., BiLSTM or CRF is the energy function (discriminator) or inference network (generator).\n\n3) The third and fourth columns of Table 5 are identical. The presentation should be made consistent, either with dev/test or -retuning/+retuning as the top level headers.\n\n4) It is also unclear how to compare Tables 4 and 5. The second to bottom row of Table 5 seems to correspond with the first row of Table 5, but other methods like slack rescaling have higher performance. What is the takeaway from these two tables supposed to be?\n\n5) Part of the motivation for the work is said to be the increasing interest in inference networks: \"In these and related settings, gradient descent has started to be replaced by inference networks. Our results below provide more evidence for making this transition.\" However, no other work on inference networks is directly cited.", "This paper proposes an improvement in the speed of training/inference with structured prediction energy networks (SPENs) by replacing the inner optimization loop with a network trained to predict its outputs.\n\nSPENs are an energy-based structured prediction method, where the final prediction is obtained by optimizing min_y E_theta(f_phi(x), y), i.e., finding the label set y with the least energy, as computed by the energy function E(), using a set of computed features f_phi(x) which comes from a neural network. 
The key innovation in SPENs was representing the energy function E() as an arbitrary neural network which takes the features f(x) and candidate labels y and outputs a value for the energy. At inference time y can be optimized by gradient descent steps. SPENs are trained using maximum-margin loss functions, so the final optimization problem is max -loss(y, y') where y' = argmin_y E(f(x), y).\n\nThe key idea of this paper is to replace the minimization of the energy function min_y E(f(x), y) with a neural network which is trained to predict the resulting output of this minimization. The resulting formulation is a min-max problem at training time with a striking similarity to the GAN min-max problem, where the y-predicting network learns to predict labels with low energy (according to the E-computing network) and high loss while the energy network learns to assign a high energy to predicted labels which have a higher loss than true labels (i.e. the y-predicting network acts as a generator and the E-predicting network acts as a discriminator).\n\nThe paper explores multiple loss functions and techniques to train these models. They seem rather finnicky, and the experimental results aren't particularly strong when it comes to improving the quality over SPENs but they have essentially the same test-time complexity as simple feedforward models while having accuracy comparable to full inference-requiring energy-based models. The improved understanding of SPENs and potential for further work justify accepting this paper.", "The new experiments sections is substantially better. It does a good job of providing separate analyses of the various contributions of the paper. Overall, there is definitely a wealth of follow-on work to be done in this area, and the ICLR community will appreciate this paper. ", "Thanks very much for the thoughtful review!\n\nRegarding your major comment, we will first mention that the revised version includes additional experimental results when using our framework to train a SPEN with a global energy that includes the tag language model (TLM) energy. These results are described in Sec. 7.2.4. \n\nWe agree that the original submission suffered from a bit of an identity crisis. As you mentioned, “The idea of amortizing inference is perhaps more general” and we intend to develop this direction in future work. Also, in the revised version, we restructured the sequence labeling section so as to more cleanly separate the discussion of training SPENs (Sec. 7.2.3) and exploring richer energy functions (Sec. 7.2.4) from the discussion of amortizing inference for pretrained structured predictors (Sec. 7.2.5). \n\nReplies to your minor comments are below:\n\n“Energy Based GANs”\nThanks -- we added a mention and citation to the Related Work section.\n\n“Why would it overfit more? Simply because training was more stable? Couldn't you prevent overfitting by regularizing more?”\n\nWe should have provided a citation for this. The github page hosting the SPEN code includes the claim: “the end-to-end approach fitting the training data much better is that it is more prone to overfitting”. As that method is from prior work, we do not know exactly what the cause is of the observed overfitting. It may be that it is caused by the increased capability of calculating precise gradients obtained by unrolling gradient descent into a computation graph, rather than merely performing gradient descent for inference in an offline manner. We clarified the above in Sec. 
7.1.\n\n“Move hyperparameter ranges, etc to the appendix. Add summary of TLM architecture to the main paper”\n\nWe moved several tuning details to the appendix and moved the TLM description to the main body (Sec. 7.2.4).\n\n“footnote on using positive vs. negative sign on entropy regularization term”\n\nThanks for the pointer! We added a citation.\n\n“add citations for ‘gradient descent has started to be replaced by inference networks.’”\n\nGood point. We added relevant citations to that claim. \n\n“Table 1 not particularly illuminating. All of the approaches seem to perform about the same.”\n\nUsually, using cost-augmented inference for testing (with an SVM) gives really bad predictions. We were surprised to see that the final cost-augmented inference network performs well as a test-time inference network. This suggests that by the end of training, the cost-augmented network may be approaching the argmin. Nonetheless, since the differences are small there, we moved this table and discussion of this to the appendix.\n\n“Why are results in Table 5 on dev?”\nWe often reported results only on dev so as to avoid reporting too many configurations on the test set, in order to prevent us (and the community) from learning too much about what works best on the test set. \n\n“Why not use KL divergence as your \\Delta function?”\n\nIn classic max-margin structured prediction, \\Delta is a symmetric function, so we didn’t consider using KL divergence. But we could use JS divergence and we think an exploration of the choices here would be interesting future work. (Also, we could try using asymmetric \\Delta functions as there does not appear to be any strong theoretical motivation to use symmetric \\Delta functions (in our view); it appears to be mostly just a convention.) \n\n“confused by Table 4”\n\nThanks to the comments by you and the other reviewers, we heavily modified Table 4, splitting it into multiple simpler tables (see the new tables 4, 6, and 9).\n\n“very surprised that the inference network outperformed Viterbi (89.7 vs. 89.1 for the same CRF energy). Why is this?”\n\nThis is a good question. We added some speculation to Sec. 7.2.3 that we think is relevant to this question as well. In particular, the stabilization terms used when training the inference network may be providing a regularizing effect for the model. \n\n“confused by difference between Table 6 and Table 4”\n\nYes, we agree that was confusing. We restructured both tables. Please see the new tables 4, 5, and 6.", "Thanks for the questions. We agree with you that the experiment description was unclear in many places and we think the revised version is much improved in this regard. Specific answers to your numbered questions are below:\n\n1) The primary goal of the paper is to propose a framework to do training and inference with SPENs. We have rewritten the experimental results section to focus on evaluating this SPEN training/inference framework. It turns out that the framework can also be applied to simpler families of structured prediction models, so we also include experimental results for applying inference network training to CRFs (see Sec. 7.2.5 in the revised version). In the new version, we have tried to more cleanly separate the contributions for SPENs from the contributions to structured prediction more generally, by relegating the latter results to Sec. 7.2.5 only. \n\n2) All good points. We hope the revised version will help resolve all of these confusions. 
Please let us know if anything is still unclear.\n\n3) We restructured this table to remove redundant columns and make the presentation simpler.\n\n4) Thanks to the comments by you and the other reviewers, we heavily modified Table 4, splitting it into multiple simpler tables (see the new tables 4, 6, and 9). \n\n5) Good point. We added relevant citations to that claim.\n", "Thank you for the comments and the support!", "Thanks to the reviewers for the many comments and questions. We just posted a revised version that we think addresses many of them. In particular, we rewrote the sequence labeling experimental section (Sec. 7.2). We simplified the experimental settings in each table to make the results easier to understand. The results were admittedly very confusing, as we were combining different energy functions, training objectives, and inference network architectures, all in the same table. We hope we have corrected that with the new rewrite. We’ll give a quick summary of our changes below:\n\nTable 4 compares our tuned SPEN configuration (which we are now calling “SPEN (InfNet)” throughout) to off-the-shelf BLSTM and CRF baselines. The SPEN and CRF in that table use the same energy, namely the energy given in Eq. (13). These experiments allow us to show the impact of differences in training and the use of inference networks while keeping the form of the energy function fixed. \n\nBut, as multiple reviewers pointed out, the goal of SPENs is to use energies that go beyond what’s possible with traditional models like chain CRFs. We definitely agree with this. While we intend to pursue this more thoroughly in future work, we do feel that the tag language model (TLM) results are a promising step in this direction. In Sec. 7.2.4, we describe the tag language model energy and present results when adding it to the energy in Eq. (13) and training with our framework. \n\nThen, in Sec. 7.2.5, we describe experiments in training inference networks to do test-time inference for a pretrained, off-the-shelf CRF. These results were admittedly confusing in the original submission, but hopefully by separating them out and moving them to the end of the paper, it is now more clear. We agree with the reviewers that the approach described (of training inference networks to approximate prediction problems) does indeed apply beyond SPENs. While we did not have space to thoroughly explore this application in this submission, we hope that this small section of promising experimental results will help other researchers to see the potential of inference networks for structured prediction more broadly. " ]
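To fix ideas about the saddle-point training discussed in this thread (an inference network playing a generator-like role against a SPEN energy acting as the critic), here is a rough PyTorch sketch of one alternating update on a margin-rescaled hinge. The module and argument names are placeholders, the cost function delta is assumed differentiable, and the stabilization terms the authors mention are omitted; this illustrates the general shape of the objective, not the paper's actual training code.

import torch

def saddle_point_step(energy, infnet, x, y_true, delta, opt_energy, opt_inf):
    # Margin-rescaled hinge: max(0, delta(y_hat, y) - E(x, y_hat) + E(x, y)),
    # with y_hat produced by the inference network (cost-augmented inference).
    def hinge():
        y_hat = infnet(x)
        return torch.relu(
            delta(y_hat, y_true) - energy(x, y_hat) + energy(x, y_true)
        ).mean()

    # Inference-network step: ascend the hinge (find high-loss, low-energy outputs).
    opt_inf.zero_grad()
    (-hinge()).backward()
    opt_inf.step()

    # Energy step: descend the hinge (push such outputs up in energy).
    opt_energy.zero_grad()
    hinge().backward()
    opt_energy.step()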
[ 7, 5, 9, -1, -1, -1, -1, -1 ]
[ 5, 3, 4, -1, -1, -1, -1, -1 ]
[ "iclr_2018_H1WgVz-AZ", "iclr_2018_H1WgVz-AZ", "iclr_2018_H1WgVz-AZ", "H12sn0dlf", "H12sn0dlf", "Sk0CEftxG", "HytdnPcgG", "iclr_2018_H1WgVz-AZ" ]
iclr_2018_rypT3fb0b
LEARNING TO SHARE: SIMULTANEOUS PARAMETER TYING AND SPARSIFICATION IN DEEP LEARNING
Deep neural networks (DNNs) usually contain millions, maybe billions, of parameters/weights, making both storage and computation very expensive. This has motivated a large body of work to reduce the complexity of the neural network by using sparsity-inducing regularizers. Another well-known approach for controlling the complexity of DNNs is parameter sharing/tying, where certain sets of weights are forced to share a common value. Some forms of weight sharing are hard-wired to express certain invariances, with a notable example being the shift-invariance of convolutional layers. However, there may be other groups of weights that may be tied together during the learning process, thus further reducing the complexity of the network. In this paper, we adopt a recently proposed sparsity-inducing regularizer, named GrOWL (group ordered weighted l1), which encourages sparsity and, simultaneously, learns which groups of parameters should share a common value. GrOWL has been proven effective in linear regression, being able to identify and cope with strongly correlated covariates. Unlike standard sparsity-inducing regularizers (e.g., l1 a.k.a. Lasso), GrOWL not only eliminates unimportant neurons by setting all the corresponding weights to zero, but also explicitly identifies strongly correlated neurons by tying the corresponding weights to a common value. This ability of GrOWL motivates the following two-stage procedure: (i) use GrOWL regularization in the training process to simultaneously identify significant neurons and groups of parameters that should be tied together; (ii) retrain the network, enforcing the structure that was unveiled in the previous phase, i.e., keeping only the significant neurons and enforcing the learned tying structure. We evaluate the proposed approach on several benchmark datasets, showing that it can dramatically compress the network with slight or even no loss on generalization performance.
accepted-poster-papers
The paper proposes to regularize via a family of structured sparsity norms on the weights of a deep network. A proximal algorithm is employed for optimization, and results are shown on synthetic data, MNIST, and CIFAR10. Pros: the regularization scheme is reasonably general, the optimization is principled, the presentation is reasonable, and all three reviewers recommend acceptance. Cons: the regularization is conceptually not terribly different from other kinds of regularization proposed in the literature. The experiments are limited to quite simple data sets.
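Since the reviews below turn on the proximal operator of the GrOWL penalty, here is a rough NumPy sketch of one standard way to compute it, following the recipe in the OWL/GrOWL literature (Oswal et al., 2016): apply the OWL prox (sort, subtract the ordered weights, project onto the non-increasing cone with pool-adjacent-violators, clip at zero, unsort) to the row l2 norms and rescale the rows. This is our own illustration, assuming the weight vector w is nonnegative and non-increasing; it is not the authors' implementation.

import numpy as np

def prox_owl(v, w):
    # Prox of the OWL penalty at a nonnegative vector v (here: row norms),
    # with weights w sorted in non-increasing order.
    order = np.argsort(v)[::-1]            # positions of v in descending order
    shifted = v[order] - w                 # subtract the ordered weights
    # Pool-adjacent-violators: project onto the non-increasing cone.
    vals, lens = [], []
    for x in shifted:
        vals.append(x); lens.append(1)
        while len(vals) > 1 and vals[-1] > vals[-2]:
            n2, n1 = lens.pop(), lens.pop()
            v2, v1 = vals.pop(), vals.pop()
            vals.append((n1 * v1 + n2 * v2) / (n1 + n2))
            lens.append(n1 + n2)
    fitted = np.maximum(np.repeat(vals, lens), 0.0)
    out = np.empty_like(v)
    out[order] = fitted                    # undo the sort
    return out

def prox_growl(Theta, w):
    # Group version: shrink the row l2 norms with the OWL prox, keep directions.
    norms = np.linalg.norm(Theta, axis=1)
    new_norms = prox_owl(norms, w)
    scale = np.divide(new_norms, norms, out=np.zeros_like(norms), where=norms > 0)
    return Theta * scale[:, None]

In a proximal gradient loop, prox_growl would be applied to each (reshaped) layer weight matrix after every gradient step, scaled by the step size.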
val
[ "HJms-iOEz", "H1fONf_gG", "Skc-JodVf", "rkPj2vjeM", "BkP3B6U4M", "rkJfM20eG", "SJk7igVEM", "r1wvzD27z", "BkfXrD37G", "SyXDUD2mG", "rkwxPE67M" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "Thank the authors for their detailed response to my questions. The revision and response provided clearer explantation for the motivation of compressing a deep neural network. Additional experimental results were also included for the uncompressed neural net. I would like to change my rating based on these updates.", "This paper proposes to apply a group ordered weighted l1 (GrOWL) regularization term to promote sparsity and parameter sharing in training deep neural networks and hence compress the model to a light version.\n\nThe GrOWL regularizer (Oswal et al., 2016) penalizes the sorted l2 norms of the rows in a parameter matrix with corresponding ordered regularization strength and the effect is similar to the OSCAR (Bondell & Reich, 2008) method that encouraging similar (rows of) features to be grouped together.\n\nA two-step method is used that the regularizer is applied to a deep neural network at the initial training phase, and after obtaining the parameters, a clustering method is then adopted to force similar parameters to share the same values and then the compacted neural network is retrained. The major concern is that a much more complicated neural network (with regularizations) has already been trained and stored to obtain the uncompressed parameters. What’s the benefit of the compression and retraining the trained neural network?\n\nIn the experiments, the performance of the uncompressed neural network should be evaluated to see how much accuracy loss the regularized methods have. Moreover, since the compressed network loses accuracy, will a smaller neural network can actually achieve similar performance compared to the compressed network from a larger network? If so, one can directly train a smaller network (with similar number of parameters as the compressed network) instead of using a complex two-step method, because the two-step method has to train the original larger network at the first step.\n", "Thanks for the comments.", "SUMMARY\nThe paper proposes to apply GrOWL regularization to the tensors of parameters between each pair of layers. The groups are composed of all coefficients associated to inputs coming from the same neuron in the previous layer. The proposed algorithm is a simple proximal gradient algorithm using the proximal operator of the GrOWL norm. Given that the GrOWL norm tend to empirically reinforce a natural clustering of the vectors of coefficients which occurs in some layers, the paper proposes to cluster the corresponding parameter vectors, to replace them with their centroid and to retrain with the constrain that some vectors are now equal. Experiments show that some sparsity is obtained by the model and that together with the clustering and high compression of the model is obtained which maintaining or improving over a good level of generalization accuracy. In comparison, plain group Lasso yields compressed versions that are too sparse, and tend to degrade performance. 
The method is also competitive with weight decay, with much better compression.\n\nREVIEW\nGiven the well known issue that the Lasso tends to select arbitrarily and in an unstable way variables\nthat are correlated, *but* given that the well known elastic-net (which is conceptually simpler than GrOWL) was proposed to address that issue already more than 10 years ago, it would seem relevant to compare the proposed method with the group elastic-net.\n\nThe proposed algorithm is a simple proximal gradient algorithm, but since the objective is non-convex it would be relevant to provide references for convergence guarantees of the algorithm.\n\nHow should the step size eta be chosen? I don't see that this is discussed in the paper.\n\nIn the clustering algorithm how is the threshold value chosen?\n\nIs it chosen by cross validation?\n\nIs the performance better with clustering or without?\n\nIs the same threshold chosen for GrOWL and the Lasso?\n\nIt would be useful to know which values of p, Lambda_1 and Lambda_2 are selected in the experiments.\n\nFor Figures 5,7,8,9 given that the matrices do not have particular structures that need to be visualized but that the important thing to compare is the distribution of correlation between pairs, these figures that are hard to read and compare would be advantageously replaced by histograms of the values of the correlations between pairs (of different variables). Indeed, right now one must rely on comparison of shades of colors in the thin lines that display correlation and it is really difficult to appreciate how much correlation of what level is present in each Figure. Histograms would extract exactly the relevant information...\n\nA brief description of affinity propagation, if only in the appendix, would be relevant.\nWhy this method as opposed to more classical agglomerative clustering?\n\nA brief reminder of what the principle of weight decay is would also be relevant for the paper to be more self-contained.\n\nThe proposed experiments are compelling, except for the fact that it would be nice to have a comparison with the group elastic-net. \n\nI liked figure 6.d and would vote for inclusion in the main paper.\n\n\nTYPOS etc \n\n3rd last line of sec. 3.2 can fail at selecting -> fail to select\n\nIn eq. (5) theta^t should be theta^{(t)}\n\nIn section 4.1 you say that the network has a single fully connected layer of hidden units -> what you mean is that the network has a single hidden layer, which is furthermore fully connected.\n\nYou cite several times Sergey (2015) in section 4.2. It seems you have exchanged first name and last name plus the corresponding reference is quite strange.\n\nAppendix B line 5 \", while.\" -> incomplete sentence.\n\n", "I would like to thank the authors for their detailed responses to my comment. These responses and the updated version of the paper are compelling. I am therefore updating my rating. \n\nI am just confused by point (5): classical agglomerative clustering does not require choosing the number of clusters a priori (unlike k-means for example). The justification for the use of AP is still unclear to me. But this is a minor point.", "The authors propose to use the group ordered weighted l1 regulariser (GrOWL) combined with clustering of correlated features to select and tie parameters, leading to a sparser representation with a reduced parameter space. 
They apply the proposed method to two well-known benchmark datasets under a fully connected and a convolutional neural network, and demonstrate that in the former case a slight improvement in accuracy can be achieved, while in the latter, the method performs similarly to the group-lasso, but at a reduced computational cost for classifying new images due to increased compression of the network.\n\nThe paper is well written and motivated, and the idea seems fairly original, although the regularisation approach itself is not new. Like many new approaches in this field, it is hard to judge from this paper and its two applications alone whether the approach will lead to significant benefits in general, but it certainly seems promising.\n\nPositive points:\n- Demonstrated improved compression with similar performance to the standard weight decay method.\n- Introduced a regularization technique that had not been previously used in this field, and that improves on the group lasso in terms of compression, without apparent loss of accuracy.\n- Applied an efficient proximal gradient algorithm to train the model.\n\nNegative points:\n- The method is sold as inducing a clustering, but actually, the clustering is a separate step, and the choice of clustering algorithm might well have an influence on the results. It would have been good to see more discussion or exploration of this. I would not claim that, for example, the fused lasso is a clustering algorithm for regression coefficients, even though it demonstrably sets some coefficients to the same value, so it seems wrong to imply the same for GrOWL.\n- In the example applications, it is not clear how the accuracy was obtained (held-out test set? cross-validation?), and it would have been good to get an estimate for the variance of this quantity, to see if the differences between methods are actually meaningful (I suspect not). Also, why is the first example reporting accuracy, but the second example reports error?\n- There is a slight contradiction in the abstract, in that the method is introduced as guarding against overfitting, but then the last line states that there is \"slight or even no loss on generalization performance\". Surely, if we reduce overfitting, then by definition there would have to be an improvement in generalization performance, so should we draw the conclusion that the method has not actually been demonstrated to reduce overfitting?\n\nMinor point:\n- p.5, in the definition of prox_Q(epsilon), the subscript for the argmin should be nu, not theta.\n\n", "We sincerely thank the reviewer for the positive feedback as well as the thoughtful and constructive comments. In order to address the main concerns, we updated our paper with the averaged results and the corresponding variances. \n\n(1) The clustering property of GrOWL is claimed as the ability to identify strong correlations among input features, by tying the corresponding weights/parameters to very close values (or to a common value for very strong correlations). In other words, GrOWL identifies clusters associated with correlated features, while a separate clustering algorithm is used to find the clusters formed by GrOWL. We modified our paper to clarify this. 
\n\nIt is true that the clustering algorithm may influence the results. However, in practice, we find the adopted clustering algorithm to be quite stable when used with relatively large threshold values to cluster the rows at the end of the initial training process (since our motivation is to only encourage parameter sharing among very similar rows, which correspond to highly correlated features). We also explore how the threshold values for the clustering algorithm can influence the clustering results in Table 4 (Appendix D). \n\n(2) Thank you for the suggestions; we have modified our paper accordingly.\n\n(i) We have updated all the results with average accuracies and the corresponding variance (Tables 1, 2, and 4). The reviewer is right: the difference in accuracy and memory trade-off does shrink in terms of average results (previously we reported the best result). However, as seen in Table 1, GrOWL still achieves better accuracy vs memory trade-off on MNIST, while for VGG-16 on CIFAR, the improvement is not obvious. We suspect the absence of improvement in this scenario can be due to the following two reasons: \n\n a) the CIFAR-10 dataset is simple, so that good performance can still be obtained after strong compression of large DNNs (VGG-16 in this paper). We believe this gap in the compression vs accuracy trade-off can be further increased in larger networks on more complex datasets. \n\n b) parameter sharing is performed among rows of the (reshaped) weight matrix, which might prevent the network from gaining more diversity. One possibility is to apply GrOWL within each neuron, by predefining each 2D convolution filter as a group (instead of all 2D filters corresponding to the same input features), so that within each neuron, the 2D filters that correspond to strongly correlated features will be tied together. By doing so, GrOWL can still identify the correlated features by tying the corresponding filters together, but more diversity of the neural network can be achieved since, for each neuron, the correlations between the input features can be different. \n\nWe leave the above two directions for future work. \n\n(ii) Also, in order to be consistent with each other, we report the testing accuracy for both MNIST and CIFAR10. The accuracies in this paper are reported with respect to a held-out test set. \n\n(3) Thank you for pointing out the contradiction. In the previous version, the conclusion “slight or even no loss on generalization performance” was drawn by comparing the accuracy obtained by applying GrOWL to train the neural network to the best accuracy on the same network achieved by the regularizers considered in this paper. We apologize for not making it clear; we corrected this in the revised abstract. \n", "We'd like to thank the reviewer for the questions. Below we further clarify our motivation and answer the questions raised. \n\n(1) We start by addressing the reviewer's major concern about the motivation for compressing the DNN and why we try to compress a large network instead of directly training a small one.\n\na) A modern DNN is typically very large, which results in a heavy burden on both computation and memory. Therefore, it’s very hard to deploy such large DNNs in embedded sensors or mobile devices, where computational and memory resources are scarce. This motivates working on DNN compression without significantly hurting the performance. 
Some approaches have been proposed in recent years: for example, Han et al [1] have shown that, after compression, a large DNN, such as AlexNet and VGGNet, can be stored in on-chip SRAM instead of off-chip DRAM. Please refer to the related work section of our paper for details. \n\n(b) Researchers have experienced difficulties in training small networks to achieve good generalization performance [2], this being one of the main reasons why DNNs gained popularity. Starting with a large NN, our compression method can be understood as automatically finding a small network that achieves comparable performance as a large one. Without this compression process, it will be very hard to achieve similar performance as the large network when directly training a small NN from scratch. \n\nTo further demonstrate this claim, we can take the CIFAR-10 as an example. The best performance obtained by running VGG on CIFAR-10 without compression is 93.4% (Table 2 in the paper). By using GrOWL with L2 or group-Lasso plus L2, we can compress it by almost 15X, achieving 92.7% accuracy. The total number of free parameters of the compressed network is 1.03e+06, versus 1.5E+07 of the original uncompressed VGG network. However, the best accuracy achieved by a small network (www.tensorflow.org/tutorials/deep_cnn) with a similar number of parameters (1.06E+06) is 86%. \n\n(2) Concerning the question of why we retrain the trained network, there are two main reasons for the proposed training-retraining pipeline. \n\na) Debiasing: it is well-known that L1-type regularization not only prunes away redundant parameters but also shrinks the surviving ones towards zero, thus yielding biased estimates thereof, which can harm the network’s performance. The retraining process helps to reduce this bias. \n\nb) Enforcing the learned tying structure associated with strongly correlated features: unlike standard sparsity-inducing regularizers (Lasso or group-Lasso), GrOWL not only eliminates unimportant parameters (neurons) by setting all the corresponding weights to zero but it also explicitly identifies strongly correlated neurons by tying the corresponding weights to be very close to each other or even exactly equal. This ability of GrOWL motivates the following two-stage procedure: (i) use GrOWL regularization in the training process to simultaneously identify significant neurons and groups of parameter that should be tied together; (ii) retrain the network, enforcing the structure that was unveiled in the previous phase, i.e., keeping only the significant neurons and enforcing the learned tying structure. \n\nThe strongly correlated features of a DNN can be generated by the noisy inputs or by the co-adaption tendency of DNN training [3]. Therefore, by enforcing parameter sharing among those parameters that correspond to strongly correlated features, we expect to alleviate the negative effects caused by either noisy input or co-adaption. \n\n(3) We have added the performance of uncompressed neural networks for both MNIST and CIFAR-10 experiment.\n\n\n[1] S. Han, H. Mao, and W. Dally. \"Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding\", International Conference on Learning Representations, 2016.\n\n[2] L. J. Ba and R. Caruana. \"Do deep nets really need to be deep?\" NIPS 2014.\n\n[3] N. Srivastava, G. E Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. \"Dropout: a simple way to prevent neural networks from overfitting\", Journal of Machine Learning Research, vol. 
15, pp. 1929−1958, 2014.\n", "We sincerely thank the reviewer for the thoughtful comments and suggestions. In order to address the main concern, we updated our paper by including a comparison with group elastic net (group-EN). Below, we answer the questions one by one.\n\n(1) We agree that a comparison between GrOWL and group-EN (group-Lasso + L2) improves the paper. We updated the results for both MNIST and CIFAR10; we found that by applying either GrOWL or group Lasso with L2 regularization, the accuracy and memory trade-off improve. We didn’t observe any significant difference between the compression vs accuracy trade-off achieved by GrOWL + L2 and group-EN. However, GrOWL alone still yields a more obvious parameter sharing effect (Tables 1, 2, and 4) than group-EN. We suspect this is due to the inability of L2 + L1 regularization to encourage group-level parameter sharing. As seen in Tables 2 and 4, applying group-Lasso, with or without L2 regularization, doesn’t make a significant difference in parameter sharing, which may imply that, at a group level, both group-Lasso and group-EN tend to identify only a subset of the correlated features. \n\nWe should also mention that we adjusted the weight of the L2 norm in group-EN to yield the best compression vs accuracy trade-off. Although the parameter sharing effect for group-EN is improved slightly by using large weight for the L2 regularizer, the performance degrades as a consequence. \n\n(2) In order to identify more correlations, we choose large lambda_2 (ranging from 0.05 to 0.1). Preventing the zeroing out of all the parameters is achieved by using a relatively small step size for all the networks. We use step sizes of 1e-3 and 1e-2 for the fully connected neural network (NN) on MNIST and the VGG on CIFAR-10, respectively. We included the choices of step size in the corresponding section of the paper. \n\n(3) By using a comparatively larger preference value, we only cluster identical or very similar rows together and enforce parameter sharing therein. Consequently, those regularizers that do not identify correlations in the same way, including group-Lasso and group-EN, almost no parameter sharing (see Table 2) is enforced and the retraining process serves essentially as a debiasing phase, which helps to improve the generalization performance. On the other hand, although it has been proved in the linear regression case (Theorem 1 in [1]) that GrOWL only yields strictly equal rows if they correspond to identical features, we didn’t observe any significant difference in accuracy between retraining with and without enforcing parameter sharing. This justifies our motivation to enforce parameter sharing only among the parameter groups that correspond to strongly correlated input features. However, for GrOWL, we did expect an improvement by enforcing parameter sharing among those parameters that correspond to strongly correlated features in the retraining process. Our intuition is that the sharing (averaging) process alleviates the negative effects caused by the noisy inputs or the co-adaptation effect. Further exploring the reasons underlying the absence of improvement when retraining while enforcing parameter sharing in this scenario is an interesting direction for future work. \n\n(4) We run the clustering algorithm over different preference values and choose the one that works well for all of the regularizers. We use preference value 0.6 for MNIST and 0.8 for CIFAR-10. 
We provide many more details about the compression and accuracy trade-off in Table 4 (Appendix D). \n\n(5) In the revised manuscript, we provide a brief description of the affinity propagation (AP) algorithm (Appendix B). As for why we chose AP instead of classical agglomerative clustering, the reason was that it does not require setting the number of clusters a priori. \n\nWe provide the p values in our paper. We used comparatively large lambda_2 (ranging from 0.05 to 0.1) to identify correlations, and lambda_1 is chosen to balance the sparsity and accuracy trade-off. We will provide more detailed information after we clean up the code and release it. \n\n(6) We briefly discuss weight decay at the end of section 4.1.\n\n(7) Thanks for your suggestion; we have already moved Fig. 6.d and the corresponding section to the main body of the paper. \n\n[1] U. Oswal, C. Cox, M. Lambon-Ralph, T. Rogers, and R. Nowak. \"Representational similarity learning with application to brain networks\", Proceedings of The 33rd International Conference on Machine Learning, pp. 1041–1049, 2016. \n\n", "(1) We added a comparison with group Elastic Net. \n\n(2) We updated our results with the averaged ones and the corresponding variances. \n\n" ]
[ -1, 6, -1, 8, -1, 7, -1, -1, -1, -1, -1 ]
[ -1, 3, -1, 5, -1, 4, -1, -1, -1, -1, -1 ]
[ "BkfXrD37G", "iclr_2018_rypT3fb0b", "SJk7igVEM", "iclr_2018_rypT3fb0b", "SyXDUD2mG", "iclr_2018_rypT3fb0b", "H1fONf_gG", "rkJfM20eG", "H1fONf_gG", "rkPj2vjeM", "iclr_2018_rypT3fb0b" ]
iclr_2018_S1XolQbRW
Model compression via distillation and quantization
Deep neural networks (DNNs) continue to make significant advances, solving tasks from image classification to translation or reinforcement learning. One aspect of the field receiving considerable attention is efficiently executing deep models in resource-constrained environments, such as mobile or embedded devices. This paper focuses on this problem, and proposes two new compression methods, which jointly leverage weight quantization and distillation of larger teacher networks into smaller student networks. The first method we propose is called quantized distillation and leverages distillation during the training process, by incorporating distillation loss, expressed with respect to the teacher, into the training of a student network whose weights are quantized to a limited set of levels. The second method, differentiable quantization, optimizes the location of quantization points through stochastic gradient descent, to better fit the behavior of the teacher model. We validate both methods through experiments on convolutional and recurrent architectures. We show that quantized shallow students can reach similar accuracy levels to full-precision teacher models, while providing order of magnitude compression, and inference speedup that is linear in the depth reduction. In sum, our results enable DNNs for resource-constrained environments to leverage architecture and accuracy advances developed on more powerful devices.
accepted-poster-papers
The submission proposes a method for quantization. The approach is reasonably straightforward, and is summarized in Algorithm 1. It is the analysis which is more interesting, showing the relationship between quantization and adding Gaussian noise (Appendix B) - motivating quantization as regularization. The submission has a reasonable mix of empirical and theoretical results, motivating a simple-to-implement algorithm. All three reviewers recommended acceptance.
val
[ "HkCg0RFlz", "SkBJ0mdlG", "SkgcJoogf", "ryjGLfTQz", "HJACQSKmG", "ByEhc3uQG", "r1DTHwBXG", "SJG3HPrmM", "BkV5BwSmG", "B1iCfvBQf", "B1lfRZd-G", "SycJC-ubM", "Bk1jTZ_Wz", "HJnHpZ_Wf", "H1BudrVbM", "rkMb3Yjlz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "public", "author", "author", "author", "author", "author", "author", "author", "author", "author", "public" ]
[ "The paper proposes to combine two approaches to compress deep neural networks - distillation and quantization. The authors proposed two methods, one largely relying on the distillation loss idea then followed by a quantization step, and another one that also learns the location of the quantization points. Somewhat surprisingly, nobody has combined the two approaches before, which makes this paper interesting. Experiments show that both methods work well in compressing large deep neural network models for applications where resources are limited, like on mobile devices. \n\nOverall I am mostly OK with this paper but not impressed by it. Detailed comments below.\n\n1. Quantizing with respect to the distillation loss seems to do better than with the normal loss - this needs more discussion. \n2. The idea of using the gradient with respect to the quantization points to learn them is interesting but not entirely new (see, e.g., \"Matrix Recovery from Quantized and Corrupted Measurements\", ICASSP 2014 and \"OrdRec: An Ordinal Model for Predicting Personalized Item Rating Distributions\", RecSys 2011, although in a different context). I also wonder if it would work better if you can also allow the weights to move a little bit (it seems to me from Algorithm 2 that you only update the quantization points). How about learning them altogether? Also this differentiable quantization method does not really depend on distillation, which is kind of confusing given the title.\n3. I am a little bit confused by how the bits are redistributed in the second method, as in the end it seems to use more than the proposed number of bits shown in the table (as recognized in section 4.2). This makes the comparison a little bit unfair (especially for the CIFAR 100 case, where the \"2 bits\" differentiable quantization is actually using 3.23 bits). This needs more clarification.\n4. The writing can be improved. For example, the concepts of \"teacher\" and \"student\" is not clear at all in the abstract - consider putting the first sentence of Section 3 in there instead. Also, the first sentence of the paper reads as \"... have showed tremendous performance\", which is not proper English. At the top of page 3 I found \"we will restrict our attention to uniform and non-uniform quantization\". What are you not restricting to, then?\n\nSlightly increased my rating after reading the rebuttal and the revision. ", "This paper proposes to learn small and low-cost models by combining distillation and quantization. Two strategies are proposed and the ideas are reasonable and clearly introduced. Experiments on various datasets are conducted to show the effectiveness of the proposed method.\n\nPros:\n(1) The paper is well written, the review of distillation and quantization is clear.\n(2) Extensive experiments on vision and neural machine translation are conducted.\n(3) Detailed discussions about implementations are provided.\n\nCons:\n(1) The differentiable quantization strategy seems not to be consistently better than the straightforward quantized distillation which may need more research.\n(2) The actual speedup is not clearly calculated. The authors claim that the inference times of 2xResNet18 and ResNet18 are similar which seems to be unreasonable. 
And it seems to need a lot more work to make the idea really practical.\n\nFinally, I am curious whether the idea will work on the object detection task.\n", "This paper presents a framework of using the teacher model to help compress the deep learning model in the context of model compression. It proposes both quantized distillation and differentiable quantization. The quantized distillation method simply adapts the distillation work for the task of model compression, and gives good results relative to the baseline method, while the differentiable quantization optimises the quantization function in a unified back-propagation framework. It is interesting to see the performance improvements by using the one-step optimisation method. \n\nI like this paper very much as it has a good motivation to utilize the distillation framework for the task of model compression. The starting point is quite interesting and reasonable. The information from the teacher network is useful for constructing a better compressed model. I believe this idea is quite similar to the idea of Learning using Privileged Information, in which the information on the teacher model is only used during training, but is not utilised during testing. \n\nSome minor comments:\nIn table 3, it seems that the results for 2 bits are not stable; are there any explanations?\nWhat will be the results if the student model performs the same as the teacher model (e.g., use the teacher model as the student model to do the compression) or even better (reverse the settings)?\nWhat will be the prediction speed for each of the models? We can then also get the speedup for the compressed model.\n\nIt would be better if the authors could discuss the connections between distillation and the recent work for the Learning using Privileged Information setting:\nVladimir Vapnik, Rauf Izmailov:\nLearning using privileged information: similarity control and knowledge transfer. Journal of Machine Learning Research 16: 2023-2049 (2015)\nXinxing Xu, Joey Tianyi Zhou, Ivor W. Tsang, Zheng Qin, Rick Siow Mong Goh, Yong Liu: Simple and Efficient Learning using Privileged Information. BeyondLabeler: Human is More Than a Labeler, Workshop of the 25th International Joint Conference on Artificial Intelligence (IJCAI-16). New York City, USA. July, 2016.\n", "We have performed a second revision of the above, further refining the presentation, and including an additional ImageNet experiment, in which we distil onto a wide 4-bit ResNet34 model from ResNet50. \nThe results confirm our earlier findings for ResNet18: we are able to recover the accuracy of the teacher (50 layers, full precision, 76.13% top-1 accuracy) into a shallower quantized student (34 layers, 4-bit weights, 76.07% top-1 accuracy), using quantized distillation. The results are state-of-the-art for this combination of parameters, and surpass the accuracy of the standard full-precision ResNet34 (73.6% top-1 accuracy). ", "Hi, \n\nsure! I have added you to the github repo :) Note that in the paper we report experiments with LSTM for German-English translation, on the openNMT integration test dataset and WMT13. ", "I am interested in reproducing your results, and applying it to LSTM and GRU.\nMy github is https://github.com/RadZaeem", "Thank you for your patience. Please see the reply above and the new revision of the paper.", "Thank you for your patience. 
Please see the reply above and the new revision of the paper.", "\nWe have posted a revision covering the reviewers comments. We detail the main changes below:\n\n- Larger-scale experiments:\n\nTo address the comments by Reviewers 1 and 3, we have added experiments on the ImageNet-1K and WMT13 datasets, as well as extended the CIFAR experiments.\n\nOn ImageNet, we trained ResNet-18 at 4bits of precision, distilling from a ResNet-34 teacher. \nDirect quantized disillation produces a 4-bit ResNet18 model which loses around 4% in terms of top-1 accuracy w.r.t. the 32-bit ResNet-18 baseline, which is in line with the performance of existing techniques. To improve upon this, we investigated widening the convolutional layers of the student (a technique known to increase capacity during distillation). This leads the student to match and surpass the accuracy of the baseline. \nIn the revised paper, we detail a new experiment where we obtain a 4-bit quantized ResNet-18 student matching the accuracy of the ResNet-34 teacher, and improving upon the accuracy of the 32-bit ResNet-18 of the same depth by >3%. The resulting model is < 1/4th the size of ResNet-34, has virtually the same accuracy, and, thanks to decreased depth, it is >50% faster in terms of inference speed. \nSimilarly, on WMT13, we present a 4bit model which matches the accuracy of the 32-bit teacher in terms of both BLEU score and perplexity.\nIn addition, on CIFAR-10 and CIFAR-100, we derive quantized variants of state-of-the-art models (wide ResNets) which show almost no loss in accuracy. See section 6 for details. \n\nIn sum, we believe that these results validate our techniques in settings that are close to the state-of-the-art, and hope that they address the reviewers’ questions. \n\n\n- Related work:\n\nWe have significantly expanded the related work section, to clarify the relation to the papers which the reviewers suggested, and to the “Binarized Neural Networks on the ImageNet Classification Task” paper. In particular, we have clarified the fact that distillation for size reduction has been suggested before (even since the paper by Hinton et al.). Our contribution is in significantly refining these techniques to (almost) eliminate all accuracy loss in the context of image classification and machine translation using DNNs. Please see the related work section in the introduction for details. \n\n- Clarifications in the presentation:\n\nWe have performed an in-depth revision to clarify the reviewers’ various questions, along the lines of our initial rebuttal. The reviewers can view individual changes via the PDF diff. We have also added a few smaller experiments to clarify various reviewer questions, such as training a student model deeper than the teacher (see Section 6). We also tried alternating QD with DQ, but experiments suggested this method to produce inferior results, so we did not investigate it further. \n\nFinally, we would like to apologize for the delayed revision. This was due to technical issues in the default training setup for ResNet on the training framework we employed, as well as to the fact that ImageNet experiments took longer than expected. ", "We acknowledge the reviewer’s comments regarding experiments on larger datasets and more accurate baseline models. We chose to run on CIFAR-10 and OpenNMT since they have reasonable iteration times. 
This allows us to carefully study the limits of the methods, and the trade-offs between bit width, network depth, and network layer width, given a fixed model by performing several experiments at each data point in reasonable time. To address the reviewer’s point, we are extending our experiments to training \n1) quantized ResNet students, e.g. ResNet18, from larger ResNet teachers, e.g. ResNet50, on ImageNet, comparing against the full-precision and existing quantized state-of-the-art baselines. \n2) quantized state-of-the-art students for CIFAR-10 and CIFAR-100 tasks from the full-precision baselines, and comparing them against best performing published quantized versions. \n\nIt would be very helpful if the reviewer could be more precise with respect to what they consider as a good baseline. \n\nRegarding the performance of differentiable quantization (DQ) with respect to quantized distillation (QD), we point out that \n1) in constrained settings (e.g. 2bit quantization) DQ can be significantly more accurate than QD. See for example the CIFAR-100 2-bit experiment. We will add more examples here. \n2) DQ usually converges earlier than QD to similar accuracy, which can be useful in settings where reducing number of iterations is important. ", "1. The importance of distillation loss:\nAcross all our experiments, using distillation loss in quantizing was superior to using ‘normal’ loss. This is one of our main findings, and is extremely consistent across experiments. We have given two sets of experiments to illustrate this, but we will add more examples. \n2. Differentiation w.r.t. quantization points:\nWe will discuss the connection with the RecSys 2011 and ICASSP papers in the next revision. \nAlternating optimization w.r.t. weights and locations of quantization points is a neat idea, which we are experimenting with; we will present results on it in the next revision. \nDistillation is actually present in Differentiable Quantization (DQ), since we are starting from a distilled full-precision model. It is true however that DQ can be applied independently from distillation. We will clarify this point. \n3. Re-distribution of bits: \nFractional numbers appear since there are a couple of different techniques being concurrently: 1) we preferentially re-distribute bits proportionally to the gradient norm; 2) we perform Huffman encoding to compute the “optimal” resulting compression score per layer; 3) we do bucketing, which slightly increases the bit cost due to storing the scaling factor. \nWe will add a detailed procedure on how these costs were obtained, and explain why they can exceed the baseline bit width. \n4. Writing inconsistencies:\nWe thank the reviewer for the detailed comments, which we will fully address in the next version. ", "1. Divergence of 2bit variants: \nIndeed, the 2bit version can diverge for some parameter settings. Our interpretation is that through trimming and quantization we reduce the capacity of the student, which might no longer have enough capacity to mimic the teacher model, and diverges. \n2. Swapping student and teacher model: \nThat is an interesting question. We focused on larger teachers and smaller students for compressing a fixed model. However, it is highly probable that quantizing a larger CNN via distillation from a smaller one will work as well. We will add experiments for this case. \n3. Inference speed is linear in network depth and bit width. In our experiments, the speedup we get on inference is proportional to the reduction in depth. 
This is mentioned in passing in the Conclusions section, but we will add some exact numbers. \n4. Connection to “Learning using Privileged Information”: \nThis is an excellent point, we will discuss this connection in the next revision. ", "We would like to thank all the reviewers for their careful consideration of our paper, and for very useful comments. We provide detailed responses below. \nWe are currently running additional experiments to address some of the reviewers’ comments. We plan to produce a complete updated revision as soon as the experiments are done, which should be before December 15th. ", "Sorry for the late reply, we had to check what would be the best way to share the code without breaking anonymity. If you tell me your GitHub username, I can add you to the repository :)", "I was wondering if you guys have an open source code for your experiment along with a the data used for training and validating that we could use to reproduce your results.\n\nMe and my team, would like to revise your research paper for a final class project.\n\nthank you" ]
[ 7, 6, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 2, 4, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_S1XolQbRW", "iclr_2018_S1XolQbRW", "iclr_2018_S1XolQbRW", "B1iCfvBQf", "ByEhc3uQG", "H1BudrVbM", "SkBJ0mdlG", "HkCg0RFlz", "SkgcJoogf", "iclr_2018_S1XolQbRW", "SkBJ0mdlG", "HkCg0RFlz", "SkgcJoogf", "iclr_2018_S1XolQbRW", "rkMb3Yjlz", "iclr_2018_S1XolQbRW" ]
iclr_2018_HyH9lbZAW
Variational Message Passing with Structured Inference Networks
Recent efforts on combining deep models with probabilistic graphical models are promising in providing flexible models that are also easy to interpret. We propose a variational message-passing algorithm for variational inference in such models. We make three contributions. First, we propose structured inference networks that incorporate the structure of the graphical model in the inference network of variational auto-encoders (VAE). Second, we establish conditions under which such inference networks enable fast amortized inference similar to VAE. Finally, we derive a variational message passing algorithm to perform efficient natural-gradient inference while retaining the efficiency of the amortized inference. By simultaneously enabling structured, amortized, and natural-gradient inference for deep structured models, our method simplifies and generalizes existing methods.
accepted-poster-papers
Thank you for submitting your paper to ICLR. The paper presents a general approach for handling inference in probabilistic graphical models that employ deep neural networks. The framework extends Johnson et al. (2016) and Khan & Lin (2017). The reviewers are all in agreement that the paper is suitable for publication. The paper is well written and the use of examples to illustrate the applicability of the methods brings great clarity. The experiments are not the strongest suit of the paper and, although the revision has improved this aspect, I would encourage a more comprehensive evaluation of the proposed methods. Nevertheless, this is a strong paper.
train
[ "rkgie2rlf", "B1ytDAtlG", "HJGBwE9gG", "Bkr5LPpXM", "HkX_IvpXz", "BJo4LPamG", "rJKTjyYzG", "HJsmo1tzM", "H1T9cJYMz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author" ]
[ "The authors adapts stochastic natural gradient methods for variational inference with structured inference networks. The variational approximation proposed is similar to SVAE by Jonhson et al. (2016), but rather than directly using the global variable theta in the local approximation for x the authors propose to optimize a separate variational parameter. The authors then extends and adapts the natural gradient method by Khan & Lin (2017) to optimize all the variational parameters. In the experiments the authors generally show improved convergence over SVAE.\n\nThe idea seems promising but it is still a bit unclear to me why removing dependence between global and local parameters that you know is there would lead to a better variational approximation. The main motivation seems to be that it is easier to optimize.\n\n- In the last two sentences of the updates for \\theta_PGM you mention that you need to do SVI/VMP to compute the function \\eta_x\\theta. Might this also suffer from non-convergence issues like you argue SVAE does? Or do you simply mean that computation of this is exact using regular message passing/Kalman filter/forward-backward?\n- It was not clear to me why we should use a Gaussian approximation for the \\theta_NN parameters? The prior might be Gaussian but the posterior is not? Is this more of a simplifying assumption?\n- There has recently been interest in using inference networks as part of more flexible variational approximations for structured models. Some examples of related work missing in this area is \"Variational Sequential Monte Carlo\" by Naesseth et al. (2017) / \"Filtering Variational Objectives\" by Maddison et al. (2017) / \"Auto-encoding sequential Monte Carlo\" Le et al. (2017).\n- Section 2.1, paragraph nr 5, \"algorihtm\" -> \"algorithm\"\n", "This paper presents a variational inference algorithm for models that contain\ndeep neural network components and probabilistic graphical model (PGM)\ncomponents.\nThe algorithm implements natural-gradient message-passing where the messages\nautomatically reduce to stochastic gradients for the non-conjugate neural\nnetwork components. The authors demonstrate the algorithm on a Gaussian mixture\nmodel and linear dynamical system where they show that the proposed algorithm\noutperforms previous algorithms. Overall, I think that the paper proposes some\ninteresting ideas, however, in its current form I do not think that the novelty\nof the contributions are clearly presented and that they are not thoroughly\nevaluated in the experiments.\n\nThe authors propose a new variational inference algorithm that handles models\nwith deep neural networks and PGM components. However, it appears that the\nauthors rely heavily on the work of (Khan & Lin, 2017) that actually provides\nthe algorithm. As far as I can tell this paper fits inference networks into\nthe algorithm proposed in (Khan & Lin, 2017) which boils down to i) using an\ninference network to generate potentials for a conditionally-conjugate\ndistribution and ii) introducing new PGM parameters to decouple the inference\nnetwork from the model parameters. 
These ideas are a clever solution to work\ninference networks into the message-passing algorithm of (Khan & Lin, 2017),\nbut I think the authors may be overselling these ideas as a brand new algorithm.\nI think if the authors sold the paper as an alternative to (Johnson, et al., 2016)\nthat doesn't suffer from the implicit gradient problem the paper would fit into\nthe existing literature better.\n\nAnother concern that I have is that there are a lot of conditiona-conjugacy\nassumptions baked into the algorithm that the authors only mention at the end\nof the presentation of their algorithm. Additionally, the authors briefly state\nthat they can handle non-conjugate distributions in the model by just using\nconjugate distributions in the variational approximation. Though one could do\nthis, the authors do not adequately show that one should, or that one can do this\nwithout suffering a lot of error in the posterior approximation. I think that\nwithout an experiment the small section on non-conjugacy should be removed.\n\nFinally, I found the experimental evaluation to not thoroughly demonstrate the\nadvantages and disadvantages of the proposed algorithm. The algorithm was applied\nto the two models originally considered in (Johnson, et al., 2016) and the\nproposed algorithm was shown to attain lower mean-square errors for the two\nmodels. The experiments do not however demonstrate why the algorithm is\nperforming better. For instance, is the (Johnson, et al., 2016) algorithm\nsuffering from the implicit gradient? It also would have been great to have\nconsidered a model that the (Johnson, et. al., 2016) algorithm would not work\nwell on or could not be applied to show the added applicability of the proposed\nalgorithm.\n\nI also have some minor comments on the paper:\n- There are a lot of typos.\n- The first two sentences of the abstract do not really contribute anything\n to the paper. What is a powerful model? What is a powerful algorithm?\n- DNN was used in Section 2 without being defined.\n- Using p() as an approximate distribution in Section 3 is confusing notation\n because p() was used for the distributions in the model.\n- How is the covariance matrix parameterized that the inference network produces?\n- The phrases \"first term of the inference network\" are not clear. Just use The\n DNN term and the PGM term of the inference networks, and better still throw\n in a reference to Eq. (4).\n- The term \"deterministic parameters\" was used and never introduced.\n- At the bottom of page 5 the extension to the non-conjugate case should be\n presented somewhere (probably the appendix) since the fact that you can do\n this is a part of your algorithm that's important.\n", "The paper seems to be significant since it integrates PGM inference with deep models. Specifically, the idea is to use the structure of the PGM to perform efficient inference. A variational message passing approach is developed which performs natural-gradient updates for the PGM part and stochastic gradient updates for the deep model part. Performance comparison is performed with an existing approach that does not utilize the PGM structure for inference.\nThe paper does a good job of explaining the challenges of inference, and provides a systematic approach to integrating PGMs with deep model updates. 
As compared to the existing approach where the PGM parameters must converge before updating the DNN parameters, the proposed architecture does not require this, due to the re-parameterization which is an important contribution.\n\nThe motivation of the paper, and the description of its contribution as compared to existing methods can be improved. One of the main aspects it seems is generality, but the encodings are specific to 2 types PGMs. Can this be generalized to arbitrary PGM structures? How about cases when computing Z is intractable? Could the proposed approach be adapted to such cases. I was not very sure as to why the proposed method is more general than existing approaches.\n\nRegarding the experiments, as mentioned in the paper the evaluation is performed on two fairly small scale datasets. the approach shows that the proposed methods converge faster than existing methods. However, I think there is value in the approach, and the connection between variational methods with DNNs is interesting.", "We have made the following changes in the revised version:\n - Introduction is modified to show that our method is an improvement over Johnson et. al.’s method, and it builds upon Khan and Lin’s method.\n - Section 2 modified to clearly show the issues with Johnson et.al.’s method.\n - Section 3 modified to clarify the conjugacy requirements of inference network. We have added many illustrative examples.\n - Section 4 modified to simplify the algorithm description. We have added a pseudo-code. \n - In Section 5 we added a new result on Student’s-t mixture model.", "We have made the following changes in the revised version:\n - Introduction is modified to show that our method is an improvement over Johnson et. al.’s method, and it builds upon Khan and Lin’s method.\n - Section 2 modified to clearly show the issues with Johnson et.al.’s method.\n - Section 3 modified to clarify the conjugacy requirements of inference network. We have added many illustrative examples.\n - Section 4 modified to simplify the algorithm description. We have added a pseudo-code. \n - In Section 5 we added a new result on Student’s-t mixture model.", "We have made the following changes in the revised version:\n- Introduction is modified to show that our method is an improvement over Johnson et. al.’s method, and it builds upon Khan and Lin’s method.\n- Section 2 modified to clearly show the issues with Johnson et.al.’s method.\n- Section 3 modified to clarify the conjugacy requirements of inference network. We have added many illustrative examples.\n- Section 4 modified to simplify the algorithm description. We have added a pseudo-code. \n- In Section 5 we added a new result on Student’s-t mixture model.", "Thanks for your review. Following reviewers suggestions, we will make the following major changes in our next draft:\n- We will clearly explain that our main motivation is to improve over the method of Johnson et.al., 2016.\n- We will clarify the description, especially the conjugacy requirements, and add detailed discussion on the applicability and limitations of our approach.\n- We will clean-up the description of our algorithm in Section 4.\n- We will add an experiment on non-conjugate mixture model to demonstrate the generality of our approach. We will add a larger experiment for clustering MNIST digits.\n--------------\nDETAILED COMMENTS\nReviewer: “Might this also suffer from non-convergence issues like you argue SVAE does? 
Or do you simply mean that computation of this is exact using regular message passing/Kalman filter/forward-backward?”\n\nResponse: Our method does not have convergence issues, and is guaranteed to converge under mild conditions discussed in Khan and Lin 2017. And yes, the updates are exact and obtained using regular message passing.\n--------------\nReviewer: “It was not clear to me why we should use a Gaussian approximation for the \\theta_NN parameters? “\n\nResponse: You are right. We can use any appropriate exponential family approximation. The updates are similar to Khan and Lin’s method. We will change this in the final draft.\n--------------\nThanks for the citations. We will add them in the paper\n", "Thanks for the review. Following reviewers suggestions, we will make the following major changes in our next draft:\n- We will clearly explain that our main motivation is to improve over the method of Johnson et.al., 2016.\n- We will clarify the description, especially the conjugacy requirements, and add detailed discussion on the applicability and limitations of our approach.\n- We will clean-up the description of our algorithm in Section 4.\n- We will add an experiment on non-conjugate mixture model to demonstrate the generality of our approach. We will add a larger experiment for clustering MNIST digits.\n-----------\nDETAILED COMMENTS\nReviewer: “However, it appears that the authors rely heavily on the work of (Khan & Lin, 2017) that actually provides the algorithm. [...] I think the authors may be overselling these ideas as a brand new algorithm.”.\n\nResponse: Thanks for letting us know. This was not our intention. We will modify the write-up to clarify our contributions over Khan and Lin, 2017 and not oversell our method.\n-----------\nReviewer: “I think if the authors sold the paper as an alternative to (Johnson, et al., 2016) that doesn't suffer from the implicit gradient problem the paper would fit into the existing literature better.”\n\nResponse: That’s a good point and we will write about this a bit more clearly in the paper. Our method is not just an alternative over Jonson et. al. 2016, but it is a generalization of their method. We propose a Variational Message Passing framework for complex models which are not covered by the method of Johnson et. al. 2016. We will modify the introduction and discussion to clarify these points. We will also add an experiment on a non-conjugate model as an evidence of the generality of our approach.\n-----------\nReviewer: “Another concern that I have is that there are a lot of conditional-conjugacy assumptions baked into the algorithm that the authors only mention at the end of the presentation of their algorithm.”\n\nResponse: We agree and we will clarify the text to reflect the following point: Our method works for general non-conjugate models, but our inference network is restricted to a conjugate model where the normalizing constant is easy to compute. \n-----------\nReviewer: “The authors briefly state that they can handle non-conjugate distributions in the model [...]. Though one could do this, the authors do not adequately show that one should, or that one can do this without suffering a lot of error in the posterior approximation.”\n\nResponse: We will modify the text to clarify this. We will also add an example of a non-conjugate mixture model and show how to design a conjugate inference network for this problem. 
We will also add a paragraph explaining how to generalize this procedure to general graphical models.\n-----------\nReviewer: “the experimental evaluation do not thoroughly demonstrate the advantages and disadvantages of the proposed algorithm…. The experiments do not however demonstrate why the algorithm is performing better.”\n\nResponse: Thanks for pointing this out. The goal of our experiments was to show that, when PGM is a conjugate model, our method performs similar to the method of Johnson et. al. The advantage of our approach is the simplicity of our method, as well as its generality. This is mentioned in the first paragraph in Section 5. \n-----------\nReviewer: “is the (Johnson, et al., 2016) algorithm suffering from the implicit gradient?”\n\nResponse: On small datasets, we did not observe the implicit gradient issue with the method of Johnson et. al. But in principle we expect this to be a problem for complex models.\n-----------\nReviewer: “It also would have been great to have considered a model that the (Johnson, et. al., 2016) algorithm would not work well on or could not be applied to show the added applicability of the proposed algorithm.”\n\nResponse: Thanks for the suggestion. We will add an experiment for non-conjugate mixture model, where the method of Johnson et. al. does not apply.\n-----------\nThanks for further suggestions. We will modify the abstract to remove the line about “powerful models and algorithms”. We will take other comments into account as well. Thanks!\n", "Thanks for your review. Following reviewers suggestions, we will make the following major changes in our next draft:\n- We will clearly explain that our main motivation is to improve over the method of Johnson et.al., 2016.\n- We will clarify the description, especially the conjugacy requirements, and add detailed discussion on the applicability and limitations of our approach.\n- We will clean-up the description of our algorithm in Section 4.\n- We will add an experiment on non-conjugate mixture model to demonstrate the generality of our approach. We will add a larger experiment for clustering MNIST digits.\n----------\nDETAILED COMMENTS\nReviewer: “One of the main aspects it seems is generality, but the encodings are specific to 2 types PGMs. Can this be generalized to arbitrary PGM structures? How about cases when computing Z is intractable?”\n\nResponse: We agree that the write-up is not clear. We will improve this. Our method can handle arbitrary PGM structure in the model similar to the method of Khan and Lin 2017. The inference network however is restricted to cases where Z is tractable. Our method therefore simplifies inference by choosing an inference network which has a simpler form than the original model.\n---------\nReviewer: “Regarding the experiments, as mentioned in the paper the evaluation is performed on two fairly small scale datasets.”\n\nResponse: We agree with your point. Our comparisons are restricted because the existing implementation of SVAE baseline did not scale to large problems. We will add two more experiments as promised above.\n" ]
[ 7, 7, 7, -1, -1, -1, -1, -1, -1 ]
[ 3, 4, 2, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_HyH9lbZAW", "iclr_2018_HyH9lbZAW", "iclr_2018_HyH9lbZAW", "rkgie2rlf", "B1ytDAtlG", "HJGBwE9gG", "rkgie2rlf", "B1ytDAtlG", "HJGBwE9gG" ]
iclr_2018_H1mCp-ZRZ
Action-dependent Control Variates for Policy Optimization via Stein Identity
Policy gradient methods have achieved remarkable successes in solving challenging reinforcement learning problems. However, they still often suffer from the large variance issue in policy gradient estimation, which leads to poor sample efficiency during training. In this work, we propose a control variate method to effectively reduce variance for policy gradient methods. Motivated by Stein’s identity, our method extends the previous control variate methods used in REINFORCE and advantage actor-critic by introducing more flexible and general action-dependent baseline functions. Empirical studies show that our method essentially improves the sample efficiency of the state-of-the-art policy gradient approaches.
accepted-poster-papers
Thank you for submitting your paper to ICLR. The reviewers agree that the paper’s development of action-dependent baselines for reducing variance in policy gradient is a strong contribution and that the use of Stein's identity to provide a principled way to think about control variates is sensible. The revision clarified a number of the reviewers’ questions and the resulting paper is suitable for publication in ICLR.
train
[ "rJLc6CN-z", "Hk7S_RLEG", "B1Dl__BEf", "SkRWcmOgz", "ryP7s5Oxz", "By16DK9xM", "S1zJPApmG", "rJjc58tXz", "BJ2OqUF7f", "SJNDc8tQG", "H13WKlSWG", "Hyq5QKWWf", "H1Fan9z1G", "S17GnS-1G", "H1siIPk1M" ]
[ "author", "public", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public", "public", "public", "author", "public" ]
[ "Hi, thanks for your interest, code has released here: https://github.com/DartML/PPO-Stein-Control-Variate.\n\nWe plan to share the videos of learned policies.", "After several exchanges with the authors, we have been unable to replicate the results produced in Figure 1 that show the improvement of an action-dependent control variate. As the authors note, several bugs have affected Figure 1. Using the latest code provided by the authors, we do not find a reduction in variance with a state-action control variate compared to a state-only control variate.", "Good to see you included additional discussions. Note that if the second term is estimated zero-variance per state (e.g. sampling many actions instead of using single-sample reparameterized gradient, or pick a control variate that you can integrate directly under the policy), the optimal control variate is Q^\\pi, which can be learned using any policy evaluation technique, on-policy or off-policy; it's discussed in Q-prop as well.\n\nRe: using the same samples to fit the control variate (many gradient updates) and apply to themselves. This could introduce non-trivial bias. It's easy to imagine that in such case, the first term can go to zero, because assuming finite, diverse enough samples, Q can potentially fit all sample returns. In such cases, it's important not only to report variance as you have estimated, but also bias. It's highly encouraged to discuss/include a few more details on this in the final paper.\n\n", "In this work, the authors suggest the use of control variate schemes for estimating gradient values, within a reinforcement learning framework. The authors also introduce a specific control variate technique based on the so-called Stein’s identity. The paper is interesting and well-written.\n\nI have some question and some consideration that can be useful for improving the appealing of the paper.\n\n- I believe that different Monte Carlo (or Quasi-Monte Carlo) strategies can be applied in order to estimate the integral (expected value) in Eq. (1), as also suggested in this work. Are there other alternatives in the literature? Please, please discuss and cite some papers if required. \n\n- I suggest to divide Section 3.1 in two subsections. The first one introducing Stein’s identity and the related comments that you need, and a second one, starting after Theorem 3.1, with title “Stein Control Variate”.\n\n- Please also discuss the relationships, connections, and possible applications of your technique to other algorithms used in Bayesian optimization, active learning and/or sequential learning, for instance as\n\nM. U. Gutmann and J. Corander, “Bayesian optimization for likelihood-free inference of simulator-based statistical mod- els,” Journal of Machine Learning Research, vol. 16, pp. 4256– 4302, 2015. \n\nG. da Silva Ferreira and D. Gamerman, “Optimal design in geostatistics under preferential sampling,” Bayesian Analysis, vol. 10, no. 3, pp. 711–735, 2015. \n\nL. Martino, J. Vicent, G. Camps-Valls, \"Automatic Emulator and Optimized Look-up Table Generation for Radiative Transfer Models\", IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 2017.\n\n- Please also discuss the dependence of your algorithm with respect to the starting baseline function \\phi_0.", "This paper proposed a class of control variate methods based on Stein's identity. Stein's identity has been widely used in classical statistics and recently in statistical machine learning literature. 
Nevertheless, applying Stein's identity to estimating policy gradient is a novel approach in reinforcement learning community. To me, this approach is the right way of constructing control variates for estimating policy gradient. The authors also did a good job in connecting with existing works and gave concrete examples for Gaussian policies. The experimental results also look promising.\n\nIt would be nice to include some theoretical analyses like under what conditions, the proposed method can achieve smaller sample complexity than existing works. \n\nOverall this is a strong paper and I recommend to accept.\n \n\n\n", "The paper proposes action-dependent baselines for reducing variance in policy gradient, through the derivation based on Stein’s identity and control functionals. The method relates closely to prior work on action-dependent baselines, but explores in particular on-policy fitting and a few other design choices that empirically improve the performance. \n\nA criticism of the paper is that it does not require Stein’s identity/control functionals literature to derive Eq. 8, since it can be derived similarly to linear control variate and it has also previously been discussed in IPG [Gu et. al., 2017] as reparameterizable control variate. The derivation through Stein’s identity does not seem to provide additional insights/algorithm designs beyond direct derivation through reparameterization trick.\n\nThe empirical results appear promising, and in particular in comparison with Q-Prop, which fits Q-function using off-policy TD learning. However, the discussion on the causes of the difference should be elaborated much more, as it appears there are substantial differences besides on-policy/off-policy fitting of the Q, such as:\n\n-FitLinear fits linear Q (through parameterization based on linearization of Q) using on-policy learning, rather than fitting nonlinear Q and then at application time linearize around the mean action. A closer comparison would be to use same locally linear Q function for off-policy learning in Q-Prop.\n\n-The use of on-policy fitted value baseline within Q-function parameterization during on-policy fitting is nice. Similar comparison should be done with off-policy fitting in Q-Prop.\n\nI wonder if on-policy fitting of Q can be elaborated more. Specifically, on-policy fitting of V seems to require a few design details to have best performance [GAE, Schulman et. al., 2016]: fitting on previous batch instead of current batch to avoid overfitting (this is expected for your method as well, since by fitting to current batch the control variate then depends nontrivially on samples that are being applied), and possible use of trust-region regularization to prevent V from changing too much across iterations. \n\nThe paper presents promising results with direct on-policy fitting of action-dependent baseline, which is promising since it does not require long training iterations as in off-policy fitting in Q-Prop. As discussed above, it is encouraged to elaborate other potential causes that led to performance differences. The experimental results are presented well for a range of Mujoco tasks. \n\nPros:\n\n-Simple, effective method that appears readily available to be incorporated to any on-policy PG methods without significantly increase in computational time\n\n-Good empirical evaluation\n\nCons:\n\n-The name Stein control variate seems misleading since the algorithm/method does not rely on derivation through Stein’s identity etc. 
and does not inherit novel insights due to this derivation.\n", "Dear Reviewers, \n\nWe just submitted a modification of the paper. The main changes are\n\n1. Modified the title.\n\n2. We split Section 3.1 following the suggestion of AnonReviewer1. \n\n3. We cited and discussed IPG. \n\n4. The original code that generates figure 1 had a problem when calculating the variance of the gradient estimator. We fixed it and updated figure 1. \n\n5. In our policy optimization method, we estimate phi based on the data from the current iteration. This introduces a (typically negotiable) bias because the data were used for twice. A way to avoid this is to estimate phi based on data from the previous iterations. We studied the effect of this bias and empirically find that using the previous data, although eliminates this bias, does not seem to improve the performance (see Appendix 7.3 and more discussion), and our current version tends to perform better. We clarified this point in the text as well.\n\nWe will further improve the paper based on the reviewers' comments.", "Thank you very much for the thoughtful feedbacks, with which we could further improve our paper.\n\n* Different Mote Carlo strategies for estimating the integral? Because the setting here is model-free, that is, we only have a black-box to simulate from the environment, without knowing the underlying distribution, there more limited MC strategies can be used than typical integration problems. Nevertheless, some advanced techniques such as Bayesian quadrature can be used (Ghavamzadeh et al. Bayesian Policy Gradient and Actor-Critic Algorithms). \n\n* We will modify Section 3.1 according to your suggestion. \n\n* It would be very interesting to consider the application of this technique to Bayesian optimization. We will certainly discuss the possibility in the future work section. ", "Thank you very much for the review. We are interested in studying theoretical properties of the estimators as well, but because of the non-convex nature of RL problems, it may be better to start theoretical analysis in simpler cases such as convex problems, in which some interesting results on convergence rate can be potentially be obtained (perhaps in connection to stochastic variance reduced gradient in some way). ", "Thank you very much for the review and pointing out potential improvements. The followings are the response to your comments:\n\n* Thanks for pointing out IPG and on-policy vs. off-policy fitting; we will provide a thorough discussion on this. We have been mainly focussing on fitting \\phi with on-policy, because the optimal control variates should theoretically depend on the current policy and hence \"on-policy\" in its nature. However, we did experiment ways to use additional off-policy data to our update and find that using additional off-policy data can in fact further improve our method. We find it is hard to have a fair comparison between on policy vs. off-policy fitting because it largely depends on how we implement each of them. Instead, an interesting future direction for us is to investigate principled ways to combine them to improve beyond what we can achieve now. \n\nWe should point out the difference between IPG and our method is not only the way we fit \\phi, another perhaps more significant difference is that IPG (depending which particular version) also averages over off-policy data when estimating the gradient, while our method always only averages over the on-policy data. 
\n\n* In our comparison, Q-prop also uses an on-policy fitted value function inside the Q-function. \n\n* Thank you very much for suggesting better ways of on-policy fitting of V. We are interested in testing them for future works. Currently, V is fitted by all the current data which theoretically introduces a (possibly small) bias because the current data is used twice in the gradient estimator, so using the data from the previous iteration may yield improvement.\n\n\n* Regarding the name, although it turned out our result can be derived using reparameterization trick, Stein's identity is what motivated this work originally, and we lean towards keeping it as the motivation since Stein's identity generally provides a principled way to think about control variates (which essentially requires zero-expectation identities mathematically). \n\nStein's identity and reparameterization trick are two orthogonal ways to think about this work, and it is useful to keep both of them to give a more comprehensive view. It is not true that Stein's identity is not directly useful in our work: By using (the original) Stein's identity on the top of the basic formula, we can derive a different control variate for Gaussian policy that has lower variance (and it is what we used in experiments). It is possible that we can further generalize the result by using Stein's identity in more creative ways. On the other hand, we will emphasize more the role of reparameterization trick in the revision. ", "Thanks a lot!", "Great results and very interesting paper!\n\nDo you plan to share video some of the learnt policies? And do you plan to share a code later?", "So, just to emphasise the similarity, and note that since these are parallel submissions by no means I intend to diminish your contributions, just I think the connection is interesting. Equation (8) from your paper is exactly equivalent to Equation (6) (LAX estimator) of the paper, specifically by setting:\npi(a|theta) = p(b|theta)\nQ(s,a) = f(b)\nphi(s,a) = c_phi(b)\nf(s, eps|theta) = T(eps, theta)\nSimilar to your remark in Equation (13) the other authors just below their Equation (6) mention that taking the \"control\" function equal the original one (if that is differentiable) we recover the path gradient. Additionally, they also suggest optimizing the \"control\" function by minimizing the variance.\n\nRegarding, equation (18) and the Gaussian policy indeed it is an interesting observation that we can apply this a second time and get a potentially lower variance estimator. This in fact I think is a more general result that if the derivatives depend on epsilon you can reapply the procedure, but don't cite me on that. Potentially, investigating/generalizing that might be interesting. ", "Thank you for pointing us to this independent ICLR submission. It is highly relevant. Their estimator in the RL setting (their Eq 11) is mathematically equivalent to ours. However, our paper is derived from a different perspective and give more comprehensive results on reinforcement learning specifically. \n\n1) Our work focuses on RL. By combining with PPO and TRPO, we obtain significant improvement on challenging RL tasks such as Humanoid-v1 and HumanoidStandup-v1. We also proposed and tested different architectures and optimization methods for the control variates, providing guidance on what may work best in practice. We explicitly establish the connection with Q-prop(Gu et al., 2016b), which can be viewed as our method with linear control variates. 
\n\n2) Our work was motivated by Stein’s identity and control functionals (Oates et al. 2017), and hence develops a connection between Stein’s identity and the reparameterization trick which can itself be useful. For example, for Gaussian policy, we further derive a different update rule with lower variance by utilizing Stein’s Identity twice.", "I'm wondering if in fact what is suggested as Stein Control Variate is not indeed similar (if not the same) with the technique proposed here: https://arxiv.org/abs/1711.00123 ? " ]
[ -1, -1, -1, 7, 7, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, 3, 3, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "Hyq5QKWWf", "iclr_2018_H1mCp-ZRZ", "SJNDc8tQG", "iclr_2018_H1mCp-ZRZ", "iclr_2018_H1mCp-ZRZ", "iclr_2018_H1mCp-ZRZ", "iclr_2018_H1mCp-ZRZ", "SkRWcmOgz", "ryP7s5Oxz", "By16DK9xM", "rJLc6CN-z", "iclr_2018_H1mCp-ZRZ", "S17GnS-1G", "H1siIPk1M", "iclr_2018_H1mCp-ZRZ" ]
iclr_2018_rkcQFMZRb
Variational image compression with a scale hyperprior
We describe an end-to-end trainable model for image compression based on variational autoencoders. The model incorporates a hyperprior to effectively capture spatial dependencies in the latent representation. This hyperprior relates to side information, a concept universal to virtually all modern image codecs, but largely unexplored in image compression using artificial neural networks (ANNs). Unlike existing autoencoder compression methods, our model trains a complex prior jointly with the underlying autoencoder. We demonstrate that this model leads to state-of-the-art image compression when measuring visual quality using the popular MS-SSIM index, and yields rate--distortion performance surpassing published ANN-based methods when evaluated using a more traditional metric based on squared error (PSNR). Furthermore, we provide a qualitative comparison of models trained for different distortion metrics.
accepted-poster-papers
Thank you for submitting your paper to ICLR. The reviewers and authors have engaged well and the revision has improved the paper. The reviewers are all in agreement that the paper substantially expands the prior work in this area, e.g. by Balle et al. (2016, 2017), and is therefore suitable for publication. Although I understand that the authors have not optimised their compression method for runtime yet, a comment about this prospect in the main text would be a sensible addition.
train
[ "B1i1F5uxz", "SkZGkFFxG", "ryY3n25gG", "SkPhJjZQz", "r1NAe9bXM", "B1KUgq-mM", "S1s7e5WQG", "Sk_Ge9bmf", "r1361c-mf", "ryf4J5ZXf", "Syqz19WQG", "rJqPCtZ7z", "Syy1kl8ez" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "public", "public" ]
[ "Summary:\n\nThis paper extends the work of Balle et al. (2016, 2017) on using certain types of variational autoencoders for image compression. After encoding pixels with a convolutional net with GDN nonlinearities, the quantized coefficients are entropy encoded. Where before the coefficients were independently encoded, the coefficients are now jointly modeled using a latent variable model. In particular, the model exploits dependencies in the scale of neighboring coefficients. The additional latent variables are used to efficiently represent these scales. Both the coefficients and the representation of the scales are quantized and encoded in the binary image representation.\n\nReview:\n\nLossy image compression using neural networks is a rapidly advancing field and of considerable interest to the ICLR community. I like the approach of using a hierarchical entropy model, which may inspire further work in this direction. It is nice to see that the variational approach may be able to outperform the more complicated state-of-the-art approach of Rippel and Bourdev (2017). That said, the evaluation is in terms of MS-SSIM and only the network directly optimized for MS-SSIM outperformed the adversarial approach of R&B. Since the reconstructions generated by a network optimized for MSE tend to look better than those of the MS-SSIM network (Figure 6), I am wondering if the proposed approach is indeed outperforming R&B or just exploiting a weakness of MS-SSIM. It would have been great if the authors included a comparison based on human judgments or at least a side-by-side comparison of reconstructions generated by the two approaches.\n\nIt might be interesting to relate the entropy model used here to other work involving scale mixtures, e.g. the field of Gaussian scale mixtures (Lyu & Simoncelli, 2007).\n\nAnother interesting comparison might be to other compression approaches where scale mixtures were used and pixels were encoded together with scales (e.g., van den Oord & Schrauwen, 2017).\n\nThe authors combine their approach using MS-SSIM as distortion. Is this technically still a VAE? Might be worth discussing.\n\nI did not quite follow the motivation for convolving the prior distributions with a uniform distribution.\n\nThe paper is mostly well written and clear. Minor suggestions:\n\n– On page 3 the paper talks about “the true posterior” of a model which hasn’t been defined yet. Although most readers will not stumble here as they will be familiar with VAEs, perhaps mention first that the generative model is defined over both $x$ and $\\tilde y$.\n\n– Below Equation 2 it sounds like the authors claim that the entropy of the uniform distribution is zero independent of its width.\n\n– Equation 7 is missing some $\\tilde z$.\n\n– The operational diagram in Figure 3 is missing a “|”.", "\nAuthors propose a transform coding solution by extending the work in Balle 2016. They define an hyperprior for the entropy coder to model the spatial relation between the transformed coefficients. \n\nThe paper is well written, although I had trouble following some parts. The results of the proposal are state-of-the-art, and there is an extremely exhaustive comparison with many methods.\n\nIn my opinion the work has a good quality to be presented at the ICLR. However, I think it could be excellent if some parts are improved. Below I detail some parts that I think could be improved.\n\n\n*** MAIN ISSUES\n\nI have two main concerns about motivation that are related. The first refers to hyperprior motivation. 
It is not clear why, if GDN was proposed to eliminate statistical dependencies between pixels in the image, the main motivation is that GDN coefficients are not independent. Perhaps this confusion could be resolved by broadening the explanation in Figure 2. My second concern is that it is not clear why it is better to modify the probability distribution for the entropy encoder than to improve the GDN model. I think this is a very interesting issue, although it may be outside the scope of this work. As far as I know, there is no theoretical solution to find the right balance between the complexity of transformation and the entropy encoder. However, it would be interesting to discuss this as it is the main novelty of the work compared to other methods of image compression based on deep learning. \n\n*** OTHER ISSUES \n\nINTRODUCTION\n\n-\"...because our models are optimized end-to-end, they minimize the total expected code length by learning to balance the amount of side information with the expected improvement of the entropy model.\" \nI think this point is very interesting, it would be good to see some numbers of how this happens for the results presented, and also during the training procedure. For example, a simple comparison of the number of bits in the signal and side information depending on the compression rate or the number of iterations during model training. \n\n\nCOMPRESSION WITH VARIATIONAL MODELS\n\n- There is something missing in the sentence: \"...such as arithmetic coding () and transmitted...\"\n\n- Fig1. To me it is not clear how to read the left hand schemes. Could it be possible to include the distributions specifically? Also it is strange that there is a \\tiled{y} in both schemes but with different conditional dependencies. Another thing is that the symbol ψ appears in this figure and is not used in section 2. \n\n- It would be easier to follow if change the symbols of the functions parameters by something like \\theta_a and \\theta_s.\n\n- \"Distortion is the expected difference between...\" Why is the \"expected\" word used here? \n\n- \"...and substituting additive uniform noise...\" is this phrase correct? Are authors is Balle 2016 substituting additive uniform noise? \n\n- In equation (1), is the first term zero or constant? when talking about equation (7) authors say \"Again, the first term is constant,...\".\n\n- The sentence \"Most previous work assumes...\" sounds strange.\n\n- The example in Fig. 2 is extremely important to understand the motivation behind the hyperprior but I think it needs a little more explanation. This example is so important that it may need to be explained at the beginning of the work. Is this a real example, of a model trained without and with normalization? If so specify it please. Why is GDN not able to eliminate these spatial dependencies? Would these dependencies be eliminated if normalization were applied between spatial coefficients? Could you remove dependencies with more layers or different parameters in the GDN?\n\nINTRODUCTION OF A SCALE HYPERPRIOR\n\n- TYPO \"...from the center pane of...\"\n\n- \"...and propose the following extension of the model (figure 3):\" there is nothing after the colon. Maybe there is something missing, or maybe it should be a dot instead of a colon. However to me there is a lack of explanation about the model. \n\n \nRESULTS\n\n- \"...,the probability mass functions P_ŷi need to be constructed “on the fly”...\"\nHow computationally costly is this? 
\n\n- \"...batch normalization or learning rate decay were found to have no beneficial effect (this may be due to the local normalization properties of GDN, which contain global normalization as a special case).\"\n\nThis is extremely interesting. I see the connection for batch normalization, but not for decay of the learning rate. Please, clarify it. Does this mean that when using GDN instead of regular nonlinearity we no longer need to use batch normalization? Or in other words, do you think that batch normalization is useful only because it is special case of GSN? It would be useful for the community to assess what are the benefits of local normalization versus global normalization.\n\n- \"...each of these combinations with 8 different values of λ in order to cover a range of rate–distortion tradeoffs.\" \n\nWould it be possible with your methods including \\lambda as an input and the model parameters as side information?\n\n- I guess you included the side information when computing the total entropy (or number of bits), was there a different way of compressing the image and the side information?\n\n- Using the same metrics to train and to evaluate is a little bit misleading. Evaluation plots using a different perceptual metric would be helpful. \n\n-\"Since MS-SSIM yields values between 0 (worst) and 1 (best), and most of the compared methods achieve values well above 0.9, we converted the quantity to decibels in order to improve legibility.\" \nAre differences of MS-SSIM with this conversion significant? Is this transformation necessary, I lose the intuition. Besides, probably is my fault but I have not being able to \"unconvert\" the dB to MS-SSIM units, for instance 20*log10(1)= 20 but most curves surpass this value. \n\n- \"..., results differ substantially depending on which distortion metric is used in the loss function during training.\" \nIt would be informative to understand how the parameters change depending on the metric employed, or at least get an intuition about which set of parameters adapt more g_a, g_s, h_a and h_s.\n\n- Figs 5, 8 and 9. How are the curves aggregated for different images? Is it the mean for each rate value? Note that depending on how this is done it could be totally misleading.\n \t\n- It would be nice to include results from other methods (like the BPG and Rippel 2017) to compare with visually.\n\nRELATED WORK\n\nBalle et al. already published a work including a perceptual metric in the end-to-end training procedure, which I think is one of the main contributions of this work. Please include it in related work:\n\n\"End-to-end optimization of nonlinear transform codes for perceptual quality.\" J. Ballé, V. Laparra, and E.P. Simoncelli. PCS: Picture Coding Symposium, (2016) \n\nDISCUSSION\n\nFirst paragraphs of discussion section look more like a second section of \"related work\". \nI think it is more interesting if the authors discuss the relevance of putting effort into modelling hyperprior or the distribution of images (or transformation). Are these things equivalent? Or is there some reason why we can't include hyperprior modeling in the g_a transformation? For me it is not clear why we should model the distribution of outputs as, in principle, the g_a transformation has to enforce (using the training procedure) that the transformed data follow the imposed distribution. Is it because the GDN is not powerful enough to make the outputs independent? or is it because it is beneficial in compression to divide the problem into two parts? 
\n\nREFERENCES\n\n- Balle 2016 and Theis 2017 seem to be published in the same conference the same year. Using different years for the references is confusing.\n\n- There is something strange with these references\n\nBallé, J, V Laparra, and E P Simoncelli (2016). “Density Modeling of Images using a Generalized\nNormalization Transformation”. In: Int’l. Conf. on Learning Representations (ICLR2016). URL :\nhttps://arxiv.org/abs/1511.06281.\nBallé, Valero Laparra, and Eero P. Simoncelli (2015). “Density Modeling of Images Using a Gen-\neralized Normalization Transformation”. In: arXiv e-prints. Published as a conference paper at\nthe 4th International Conference for Learning Representations, San Juan, 2016. arXiv: 1511.\n06281.\n– (2016). “End-to-end Optimized Image Compression”. In: arXiv e-prints. 5th Int. Conf. for Learn-\ning Representations.\n\n", "The paper is a step forward for image deep compression, at least when departing from the (Balle et al., 2017) scheme.\nThe proposed hyperpriors are especially useful for medium to high bpp and optimized for L2/ PSNR evaluation.\n\nI find the description of the maths too laconic and hard to follow. For example, what’s the U(.|.) operator in (5)?\n\nWhat’s the motivation of using GDN as non linearity instead of e.g. ReLU?\n\nI am not getting the need of MSSSIM (dB). How exactly was it defined/computed?\n\nImportance of training data? The proposed models are trained on 1million images while others like (Theis et al, 2017) and [Ref1,Ref2] use smaller datasets for training.\n\nI am missing a discussion about Runtime / complexity vs. other approaches?\n\nWhy MSSSIM is a relevant measure? The Fig. 6 seem to show better visual results for L2 loss (PSNR) than when optimized for MSSSIM, at least in my opinion.\n\nWhat's the reason to use 4:4:4 for BPG and 4:2:0 for JPEG?\n\nWhat is the relation between hyperprior and importance maps / content-weights [Ref1] ?\n\nWhat about reproducibility of the results? Will be the codes/models made publicly available?\n\nRelevant literature:\n[Ref1] Learning Convolutional Networks for Content-weighted Image Compression (https://arxiv.org/abs/1703.10553)\n[Ref2] Soft-to-Hard Vector Quantization for End-to-End Learned Compression of Images and Neural Networks (https://arxiv.org/abs/1704.00648)\n", "Hello,\n\nit seems that our inability to upload a revision was a glitch in the website. Once we sent our comments, a button labeled \"modifiable original\" appeared, which we used to officially upload the revision.\n\nWe confirmed that we are getting the updated version when clicking on the PDF symbol. However, it doesn't seem to appear in the revision history. We are unsure what the reason for this is.\n\nWe are leaving the Google Drive version online for now, so you can confirm that it is the same version as provided by OpenReview.\n\n- the authors\n", "Thank you for your comments, and thank you for bringing our oversight of defining N to our attention, which we corrected.\n\nWe have not yet optimized our compression method for runtime. However, we have taken care to match model capacities between the factorized-prior model and the hyperprior model, and not to choose a number of filters that would limit transform capacity (see the new section 6.3 in the appendix for details), which would limit the ability of the models to factorize the representation. 
Runtime is only a very crude proxy for model capacity, and as such, we agree that calibrating for runtime is useful in a deployment context, but not necessarily for establishing whether powerful priors, that are trained end-to-end, are a good thing or not. This was one of the main intuitions driving this paper, and we hope that the presentation of this paper is much clearer in the revision.\n\nAlthough not the main focus of our paper, we do think it would be useful to have runtime comparisons to other methods, and we are working on a method to get accurate measurements (despite all the difficulties associated with runtime measurements across different hardware). We hope to provide these in another revision. To give you an estimate for the current implementation: the factorized-prior model, which does not suffer from the naive entropy coding implementation mentioned above, can encode one of the Kodak images in ~70ms (corresponding to a throughput of ~5.5 megapixels per second). We estimate the hyperprior model to require a longer runtime, mostly due to h_a and h_s. Note that due to further subsampling, however, the complexity of h_a and h_s should be significantly lower than g_a and g_s.\n", "Thank you for the review and suggestions.\n\n> Lossy image compression using neural networks is a rapidly advancing field and of considerable interest to the ICLR community. I like the approach of using a hierarchical entropy model, which may inspire further work in this direction. It is nice to see that the variational approach may be able to outperform the more complicated state-of-the-art approach of Rippel and Bourdev (2017). That said, the evaluation is in terms of MS-SSIM and only the network directly optimized for MS-SSIM outperformed the adversarial approach of R&B. Since the reconstructions generated by a network optimized for MSE tend to look better than those of the MS-SSIM network (Figure 6), I am wondering if the proposed approach is indeed outperforming R&B or just exploiting a weakness of MS-SSIM. It would have been great if the authors included a comparison based on human judgments or at least a side-by-side comparison of reconstructions generated by the two approaches.\n\nThank you for thinking critically about distortion metrics. This is precisely one of the points we wanted to make with this paper - none of the metrics available today are perfect, and it is easy for ANN-based methods to overfit to whatever metric is used, resulting in good performance numbers but a loss of visual quality. That said, we would like to point out that neither we nor Rippel (2017) provide an evaluation based on human judgements. As such, it is unclear whether the adversarial loss they blend with an MS-SSIM loss is actually helping in terms of visual quality. Unfortunately, we can't systematically compare our method to theirs using human judgements, because they did not make their images available to us.\n\nRegarding MS-SSIM vs. squared loss, we think it depends on the image which one is visually better. Because MS-SSIM has been very popular, we wanted to show an example that is challenging for MS-SSIM. Note that many of the images shown in the appendix (side by side, one optimized for MS-SSIM and one for squared loss) are compressed to roughly similar bit rates, allowing a crude comparison (it is difficult in our current approach to match bit rates exactly). 
We lowered the bit rate of the images for this revision, to make the differences more visible.\n\n> It might be interesting to relate the entropy model used here to other work involving scale mixtures, e.g. the field of Gaussian scale mixtures (Lyu & Simoncelli, 2007).\n\nThanks, we included this reference.\n\n> Another interesting comparison might be to other compression approaches where scale mixtures were used and pixels were encoded together with scales (e.g., van den Oord & > Schrauwen, 2017).\n\nWe were unable to pinpoint this paper, could you please provide a more detailed reference?\n\n> The authors combine their approach using MS-SSIM as distortion. Is this technically still a VAE? Might be worth discussing.\n\nWe don't know, and unfortunately, currently don't have much to say about this point.\n\n> I did not quite follow the motivation for convolving the prior distributions with a uniform distribution.\n\nWe tried to improve the explanation in appendix 6.2.\n\n> – On page 3 the paper talks about “the true posterior” of a model which hasn’t been defined yet. Although most readers will not stumble here as they will be familiar with > VAEs, perhaps mention first that the generative model is defined over both $x$ and $\\tilde y$.\n\nWe hope to have fixed this with the current revision.\n\n> – Below Equation 2 it sounds like the authors claim that the entropy of the uniform distribution is zero independent of its width.\n\nThat was not our intention, and it should be fixed now.\n\n> – Equation 7 is missing some $\\tilde z$.\n\nFixed.\n\n> – The operational diagram in Figure 3 is missing a “|”.\n\nFixed.\n", "> -\"Since MS-SSIM yields values between 0 (worst) and 1 (best), and most of the compared methods achieve values well above 0.9, we converted the quantity to decibels in order to > improve legibility.\"\n> Are differences of MS-SSIM with this conversion significant? Is this transformation necessary, I lose the intuition. Besides, probably is my fault but I have not being able to > \"unconvert\" the dB to MS-SSIM units, for instance 20*log10(1)= 20 but most curves surpass this value.\n\nWe updated the paper to include the exact formula we used in the figure caption, thanks for pointing out this oversight. The rationale for using this transformation is that a difference of, say, 0.01 in the 0.99 MS-SSIM range is much more significant than the same difference around a value of 0.91, for example. In a plot, the difference becomes harder and harder to see the closer the values approach 1. The logarithm serves to provide a visually more balanced presentation.\n\n> - \"..., results differ substantially depending on which distortion metric is used in the loss function during training.\"\n> It would be informative to understand how the parameters change depending on the metric employed, or at least get an intuition about which set of parameters adapt more g_a, > g_s, h_a and h_s.\n\nWe agree that this would be interesting, but lack a good way of measuring it. We will likely do more research in this direction in the future.\n\n> - Figs 5, 8 and 9. How are the curves aggregated for different images? Is it the mean for each rate value? Note that depending on how this is done it could be totally > misleading.\n\nThank you for pointing this out. We have updated the paper to use interpolated rate aggregation for the MS-SSIM plots, in order to match Rippel (2017), and to use lambda-aggregation for the PSNR plots, in order to effectively compare to HEVC (the results did not change much). 
We discuss this in appendix 6.4.\n\n> - It would be nice to include results from other methods (like the BPG and Rippel 2017) to compare with visually.\n\nWe agree this would be desirable, but this is limited in practice as Rippel (2017) have not made their reconstructed images available to us. For visual comparisons, we need to match bit rates for the compared image, which is not easy given models trained for a discrete set of lambdas. (Many images provided in the appendix approximately match, but not all of them.) We’ll attempt to prepare a visual comparison to BPG for the final paper, if time permits, or make it available online later.\n\n> Balle et al. already published a work including a perceptual metric in the end-to-end training procedure, which I think is one of the main contributions of this work. Please > include it in related work:\n>\n> \"End-to-end optimization of nonlinear transform codes for perceptual quality.\" J. Ballé, V. Laparra, and E.P. Simoncelli. PCS: Picture Coding Symposium, (2016)\n\nThanks - we fixed this. Note that their results are quite limited, as they use block transforms which don't adapt to the data as well as deeper models.\n\n> First paragraphs of discussion section look more like a second section of \"related work\".\n> I think it is more interesting if the authors discuss the relevance of putting effort into modelling hyperprior or the distribution of images (or transformation). Are these things equivalent? Or is there some reason why we can't include hyperprior modeling in the g_a transformation? For me it is not clear why we should model the distribution of outputs as, in principle, the g_a transformation has to enforce (using the training procedure) that the transformed data follow the imposed distribution. Is it because the GDN is not powerful enough to make the outputs independent? or is it because it is beneficial in compression to divide the problem into two parts?\n\nWe think that it may be beneficial to divide the problem into two parts, as you say, and that our results provide a bit of evidence regarding that. However, we didn't do a good job of presenting this intuition in the first draft. We hope that the current revision is much clearer.\n\n> - Balle 2016 and Theis 2017 seem to be published in the same conference the same year. Using different years for the references is confusing.\n\nFixed.\n\n> - There is something strange with these references\n>\n> Ballé, J, V Laparra, and E P Simoncelli (2016). “Density Modeling of Images using a Generalized\n> Normalization Transformation”. In: Int’l. Conf. on Learning Representations (ICLR2016). URL :\n> https://arxiv.org/abs/1511.06281.\n> Ballé, Valero Laparra, and Eero P. Simoncelli (2015). “Density Modeling of Images Using a Gen-\n> eralized Normalization Transformation”. In: arXiv e-prints. Published as a conference paper at\n> the 4th International Conference for Learning Representations, San Juan, 2016. arXiv: 1511.\n> 06281.\n> – (2016). “End-to-end Optimized Image Compression”. In: arXiv e-prints. 5th Int. Conf. for Learn-\n> ing Representations.\n\nFixed.\n", "> - The example in Fig. 2 is extremely important to understand the motivation behind the hyperprior but I think it needs a little more explanation. This example is so important > that it may need to be explained at the beginning of the work. Is this a real example, of a model trained without and with normalization? If so specify it please.\n\nYes, this is a real example. 
We made the description more precise, and we hope that our edits to the main text helped to convey the motivation better.\n\n> Why is GDN not able to eliminate these spatial dependencies? Would these dependencies be eliminated if normalization were applied between spatial coefficients? Could you remove dependencies with more layers or different parameters in the GDN?\n\nWe think that GDN is capable of removing more dependencies than what we observe remain, but a certain amount of dependency may actually be desirable in the context of rate--distortion optimization. Unfortunately, it's impossible to fully control for all other possible causes of the remaining statistical dependencies, but we are interested in researching this further.\n\n> - TYPO \"...from the center pane of...\"\n\nFixed.\n\n> - \"...and propose the following extension of the model (figure 3):\" there is nothing after the colon. Maybe there is something missing, or maybe it should be a dot instead of > a colon. However to me there is a lack of explanation about the model.\n\nFixed.\n\n> - \"...,the probability mass functions P_ŷi need to be constructed “on the fly”...\"\n> How computationally costly is this?\n\nWe are investigating this currently. Our implementation at this point is naive, in that it pre-generates the probability tables and fully stores them in memory before doing the arithmetic coding. The memory requirements can be substantial, slowing the process down artificially. A better way would be to inline these computations. We're also working on a method to collect accurate timing data, and will update the paper once we have them.\n\n> - \"...batch normalization or learning rate decay were found to have no beneficial effect (this may be due to the local normalization properties of GDN, which contain global > normalization as a special case).\"\n> This is extremely interesting. I see the connection for batch normalization, but not for decay of the learning rate. Please, clarify it. Does this mean that when using GDN > instead of regular nonlinearity we no longer need to use batch normalization? Or in other words, do you think that batch normalization is useful only because it is special > case of GSN? It would be useful for the community to assess what are the benefits of local normalization versus global normalization.\n\nWe think that GDN has the potential to subsume the effects of batch normalization, as it implements local normalization, which is a generalization of global normalization. The Tensorflow implementation of GDN uses different default constants compared to the one described by Ballé (2017), which we suspect may have something to do with the fact that we couldn't get much gains out of applying a learning rate decay. However, this is speculative, and we are still researching these effects.\n\n> - \"...each of these combinations with 8 different values of λ in order to cover a range of rate–distortion tradeoffs.\"\n> Would it be possible with your methods including \\lambda as an input and the model parameters as side information?\n\nYes, we could treat lambda as side information and have the decoder switch between different sets of model parameters based on that. All that would be required is an encoding scheme for lambda.\n\n> - I guess you included the side information when computing the total entropy (or number of bits), was there a different way of compressing the image and the side information?\n\nYes, the reported rates are total bit rates for encoding y and z. 
We included a new figure in the experimental results section to show the fraction of side information compared to total bit rate.\n\n> - Using the same metrics to train and to evaluate is a little bit misleading. Evaluation plots using a different perceptual metric would be helpful.\n\nWhy do you think it is misleading to train and evaluate on the same metric, could you elaborate?\n", "Thank you for the review and suggestions.\n\n> I have two main concerns about motivation that are related. The first refers to hyperprior motivation. It is not clear why, if GDN was proposed to eliminate statistical > dependencies between pixels in the image, the main motivation is that GDN coefficients are not independent. Perhaps this confusion could be resolved by broadening the > explanation in Figure 2. My second concern is that it is not clear why it is better to modify the probability distribution for the entropy encoder than to improve the GDN model> . I think this is a very interesting issue, although it may be outside the scope of this work. As far as I know, there is no theoretical solution to find the right balance > between the complexity of transformation and the entropy encoder. However, it would be interesting to discuss this as it is the main novelty of the work compared to other > methods of image compression based on deep learning.\n\nThank you very much for pointing this out! Our intention was to enable factorization of the latent representation as much as possible. However, the hyperprior models still significantly outperform the factorized prior models. We think of that result as an indication that statistical dependencies in the latent representation, at least for compression models, may actually be desirable. Some of our intuitions were not conveyed well in the original draft. We have rewritten large parts of the paper to make this much clearer. Please refer to the revised discussion, as well as the new section 6.3 in the appendix for details.\n\n> -\"...because our models are optimized end-to-end, they minimize the total expected code length by learning to balance the amount of side information with the expected > improvement of the entropy model.\"\n> I think this point is very interesting, it would be good to see some numbers of how this happens for the results presented, and also during the training procedure. For > example, a simple comparison of the number of bits in the signal and side information depending on the compression rate or the number of iterations during model training.\n\nWe included a new plot about this, and a paragraph describing it, in the experimental results section. Generally, the amount of side information used is very low compared to the total bit rate.\n\n> - There is something missing in the sentence: \"...such as arithmetic coding () and transmitted...\"\n\nFixed.\n\n> - Fig1. To me it is not clear how to read the left hand schemes. Could it be possible to include the distributions specifically? Also it is strange that there is a \\tiled{y} > in both schemes but with different conditional dependencies. Another thing is that the symbol ψ appears in this figure and is not used in section 2.\n\nThe schemes on the left hand are \"graphical models\" that are quite standard in the literature on Bayesian modeling (for instance, refer to \"Pattern Recognition and Machine Learning\" by Christopher Bishop). They are not crucial for the understanding of the paper, but might provide a quick overview for someone familiar with them. 
Unfortunately, we think there isn't enough space in the paper to provide more detail. Regarding the symbol psi, we reordered sections 2 and 3 to address the problem.\n\n> - It would be easier to follow if change the symbols of the functions parameters by something like \\theta_a and \\theta_s.\n\nWe are following an established convention in the VAE literature to name the parameters of the generative model theta and the parameters of the inference model phi. We understand that this may decrease readability for people with other backgrounds, but currently we think this is the best solution.\n\n> - \"Distortion is the expected difference between...\" Why is the \"expected\" word used here?\n\nThis is meant in the sense of taking the expectation of the difference over the data distribution. We tried to clarify this in the current revision and hope it is clearer now.\n\n> - \"...and substituting additive uniform noise...\" is this phrase correct? Are authors is Balle 2016 substituting additive uniform noise?\n\nYes, that is correct.\n\n> - In equation (1), is the first term zero or constant? when talking about equation (7) authors say \"Again, the first term is constant,...\".\n\nThey are zero *and* constant in both cases. We changed the language to be more precise.\n\n> - The sentence \"Most previous work assumes...\" sounds strange.\n\nWe rewrote parts of the paper, which should have fixed this.\n", "\n> Why MSSSIM is a relevant measure? The Fig. 6 seem to show better visual results for L2 loss (PSNR) than when optimized for MSSSIM, at least in my opinion.\n\nMS-SSIM is a widely used image quality index, and has been popular in previous papers presenting ANN-based compression methods. We wanted to understand how visual quality differs when optimizing for different metrics. The image we show here represents a challenge for MS-SSIM, which we felt was important to talk about given how popular the metric is. Other images, such as Kodak 15, tend to be more challenging for squared-error optimized models. Note that in this revision of the paper, we lowered the bit rates of the example images in the appendix, to make artifacts more visible and to demonstrate this effect more clearly across a range of different images.\n\n> What's the reason to use 4:4:4 for BPG and 4:2:0 for JPEG?\n\nBecause we optimized our models for squared error in the RGB representation (rather than a luma--chroma colorspace), BPG 4:4:4 is the appropriate method to compare to, as it optimizes the same metric. With respect to JPEG, the 4:4:4 format is not widely used, and we found that it also appears to perform much worse than 4:2:0 (indicating it may not have been optimized as well).\n\n> What is the relation between hyperprior and importance maps / content-weights [Ref1] ?\n\nThe importance maps of Li et al. (2017) are primarily designed to provide an embedded code (i.e., a compressed representation of the image which allows accessing lower-quality versions of the image by decoding only a part of the bitstream). To do this, they employ binary quantization rather than integer (i.e. multi-level) quantization, among other techniques. Their entropy model corresponds to a Markov-style prior, similar to the ones used in Johnston et al. (2017) and Rippel et al. (2017).\n\n> What about reproducibility of the results? 
Will be the codes/models made publicly available?\n\nWe are striving to publish at least the full results and parts of the code/model parameters, but due to possible legal constraints, we cannot make any promises at this point. We hope that the description in the paper is self-contained and detailed enough to be useful. We're also happy to answer any further questions.\n", "Thank you for the review and suggestions.\n\n> I find the description of the maths too laconic and hard to follow. For example, what’s the U(.|.) operator in (5)?\n\nWe have completely rewritten some sections of the paper in order to improve clarity. U(.|.) indicates a uniform distribution (this is stated in the text). We hope that the paper is now easier to follow. Please let us know if there are any other (or new) parts which you find hard to read, we are happy to make further improvements.\n\n> What’s the motivation of using GDN as non linearity instead of e.g. ReLU?\n\nWe have found that GDN nonlinearities, while keeping all other architecture parameters constant, provides significantly better performance than ReLU in g_a and g_s. We haven't done any systematic experiments regarding nonlinearities used in h_a and h_s, and went with ReLU as a \"default\" choice (note that the amount of side information overall is very small, so we might not benefit much by optimizing this part of the model).\n\n> I am not getting the need of MSSSIM (dB). How exactly was it defined/computed?\n\nMS-SSIM is defined in Wang, Simoncelli, et al. (2003). It is one of the most widely used perceptual image quality metrics. Thank you for pointing out that we didn’t define how we converted to decibels. We included the exact definition in the figure caption.\n\n> Importance of training data? The proposed models are trained on 1million images while others like (Theis et al, 2017) and [Ref1,Ref2] use smaller datasets for training.\n\nWe think this can likely be ruled out as a source of performance gains. Compared to the factorized-prior model in Ballé (2017), which was trained on ~7000 images and squared error, our squared-error factorized-prior model matches its performance on PSNR (figure 10) and even underperforms on MS-SSIM (figure 11).\n\n> I am missing a discussion about Runtime / complexity vs. other approaches?\n\nOur main goal here was to optimize for compression performance, and to control for the effect of capacity limitations of the model (as a result of fewer filters), which may cause unnecessary statistical dependencies in the representation. We realize this intention wasn't sufficiently clear in our first draft, which has lead to some confusion. We have rewritten parts of the paper, added a paragraph to the discussion, and added a supporting section in the appendix (6.3) to clarify.\n\nComparing the runtime of the encoding and decoding process is important when evaluating compression methods for deployment. To make a fair comparison, all of the components involved must be appropriately optimized, which has not been a priority in our research so far. In particular, we have only implemented the arithmetic coder in a naive way, writing a very large probability table in memory, which is simple to implement, but unnecessarily slows down the computation. An optimized implementation would inline the computation of the probability tables in eq. (11). Some idea of complexity can be gathered from the architecture. Unfortunately, we omitted the number of filters in the transforms, which was an oversight. 
We now state this in the caption of the figure showing the architecture.\n\nWe are working on improving our methods to make accurate runtime measurements, and will be happy to provide them in the final paper or here, as soon as we have them. To give you an estimate for the current implementation: the factorized-prior model, which does not suffer from the naive implementation mentioned above, can encode one of the Kodak images in ~70ms (corresponding to a throughput of ~5.5 megapixels per second). We estimate the hyperprior model to require a longer runtime, mostly due to h_a and h_s. Note that due to further subsampling, however, the complexity of h_a and h_s should be significantly lower than g_a and g_s.\n", "Dear commenters and reviewers,\n\nthank you for your detailed critique of our paper. We have worked hard to revise our paper and address all of the points you have raised. Unfortunately we cannot currently upload an official revision. There seems to be some contradicting information whether revisions are allowed during the review process (http://www.iclr.cc/doku.php?id=iclr2018:conference_cfp indicates yes, https://openreview.net/group?id=ICLR.cc/2018/Conference indicates no). We have worked under the assumption that they are, and we think that sharing our revision is crucial to provide you with more data that helps us make our points. Therefore, we have shared our revision temporarily (and anonymously) at https://drive.google.com/file/d/1-gP0iFJtgqZ-DIm4kkOYKKz5mpCbm4uD/view. We are contacting the organizers about this issue, and will update the official revision as soon as we are able to.\n\nThank you - the authors.", "The authors compare their approach to a number of existing compression algorithms. However, for a fair comparison to these, the authors must calibrate for the same runtime. \n\nAs it currently stands, it is impossible to disambiguate the factors driving the approach: is it in fact a better architecture for compression, or simply a large number of filters per layer? In the paper, it is mentioned that \"N filters\" are used per layer, but N is not mentioned anywhere: what is N? What is the runtime of the algorithm? What is the number of multiplications per pixel? \n\nIn compression, speed is critical to ensure viability. Based on my understanding, this is a constraint that many of the approaches compared against have been taking into consideration. As such, for an appropriate comparison speed must be taken into account." ]
[ 7, 7, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 5, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_rkcQFMZRb", "iclr_2018_rkcQFMZRb", "iclr_2018_rkcQFMZRb", "rJqPCtZ7z", "Syy1kl8ez", "B1i1F5uxz", "SkZGkFFxG", "SkZGkFFxG", "SkZGkFFxG", "ryY3n25gG", "ryY3n25gG", "iclr_2018_rkcQFMZRb", "iclr_2018_rkcQFMZRb" ]
iclr_2018_H1kG7GZAW
Variational Inference of Disentangled Latent Concepts from Unlabeled Observations
Disentangled representations, where the higher level data generative factors are reflected in disjoint latent dimensions, offer several benefits such as ease of deriving invariant representations, transferability to other tasks, interpretability, etc. We consider the problem of unsupervised learning of disentangled representations from a large pool of unlabeled observations, and propose a variational inference based approach to infer disentangled latent factors. We introduce a regularizer on the expectation of the approximate posterior over observed data that encourages the disentanglement. We also propose a new disentanglement metric which is better aligned with the qualitative disentanglement observed in the decoder's output. We empirically observe significant improvement over existing methods in terms of both disentanglement and data likelihood (reconstruction quality).
accepted-poster-papers
Thank you for submitting your paper to ICLR. The reviewers are all in agreement that the paper is suitable for publication, each revising their score upwards in response to the revision that has made the paper stronger. The authors may want to consider adding a discussion about whether the simple standard Gaussian prior, which is invariant under transformation by an orthogonal matrix, is a sensible one if the objective is to find disentangled representations. Alternatives, such as sparse priors, might be more sensible if a model-based solution to this problem is sought.
test
[ "SkF57lqez", "SydFTsFkG", "r1YT_Wtxf", "BkHttn74M", "SyAI1RfEM", "HyhMdTc7G", "BJdehEFXG", "HkbQ3I_mz", "SJ3gvydXz", "HyOiU1OXz", "Sy_I8yuXf", "SyEK7AHez" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "public" ]
[ "******\nUpdate: revising reviewer score to 6 after acknowledging revisions and improved manuscript\n******\n\nThe authors propose a new regularization term modifying the VAE (Kingma et al 2013) objective to encourage learning disentangling representations.\nSpecifically, the authors suggest to add penalization to ELBO in the form of -KL(q(z)||p(z)) , which encourages a more global criterion than the local ELBOs.\nIn practice, the authors decide that the objective they want to optimize is unwieldy and resort to moment matching of covariances of q(z) and p(z) via gradient descent.\nThe final objective uses a persistent estimate of the covariance matrix of q and upgrades it at each mini-batch to perform learning.\n\nThe authors use this objective function to perform experiments measuring disentanglement and find minor benefits compared to other objectives in quantitative terms.\n\nComments:\n1. The originally proposed modification in Equation (4) appears to be rigorous and as far as I can tell still poses a lower bound to log(p(x)). The proof could use the result posed earlier: KL(q(z)||p(z)) is smaller than E_x KL(q(z|x)||p(z|x)).\n2. The proposed moment matching scheme performing decorrelation resembles approaches for variational PCA and especially independent component analysis. The relationship to these techniques is not discussed adequately. In addition, this paper could really benefit from an empirical figure of the marginal statistics of z under the different regularizers in order to establish what type of structure is being imposed here and what it results in.\n3. The resulting regularizer with the decorrelation terms could be studied as a modeling choice. In the probabilistic sense, regularizers can be seen as structural and prior assumptions on variables. As it stands, it is unnecessarily vague which assumptions this extra regularizer is making on variables.\n4. Why is using the objective in Equation (4) not tried and tested and compared to? It could be thought that subsampling would be enough to evaluate this extra KL term without any need for additional variational parameters \\psi. The reason for switching to the moment matching scheme seems not well motivated here without showing explicitly that Eq (4) has problems.\n5. The model seems to be making on minor progress in its stated goal, disentanglement. It would be more convincing to clarify the structural properties of this regularizer in a statistical sense more clearly given that experimentally it seems to only have a minor effect.\n6. Is there a relationship to NICE (Laurent Dinh et al)?\n7. The infogan is also an obvious point of reference and comparison here.\n8. The authors claim that there are no models which can combine GANs with inference in a satisfactory way, which is obviously not accurate nowadays given the progress on literature combining GANs and variational inference.\n\nAll in all I find this paper interesting but would hope that a more careful technical justification and derivation of the model would be presented given that it seems to not be an empirically overwhelming change.", "This paper describes DIP-VAE, an improvement on the beta-VAE framework for learning a disentangled representation of the data generative factors in the visual domain. The authors propose to augment the standard VAE ELBO objective with an extra term that minimises the covariance between the latents. 
Unlike the original beta-VAE objective which implicitly minimises such covariance individually for each observation x, the DIP-VAE objective does so while marginalising over x. This difference removes the tradeoff between reconstruction quality and disentangling reported for beta-VAE, since DIP-VAE maintains sensitivity of q(z|x) to each observation x, and hence achieves disentangling while preserving the sharpness of reconstructions.\n\nPros:\n- the paper is well written\n- it makes a contribution to an important line of research (unsupervised disentangled representation learning)\n- the covariance minimisation proposed in the paper looks like an easy to implement yet impactful change to the VAE objective to encourage disentanglement while preserving reconstruction quality\n- it directly compares the performance of DIP-VAE to that of beta-VAE showing significant improvements in terms of disentangling metric and reconstruction error\n\nCons:\n- I am yet to be fully convinced how well the approach works. Table 1 and Figure 1 look good, but other figures are either tangential to the main point of the paper, or impossible to read due to the small scale. For example, the qualitative evaluation of the latent traversals is almost impossible due to the tiny scale of Table 5 (shouldn't this be a Figure rather than a Table?)\n- The authors concentrate a lot on the CelebA dataset, however I believe the comparison with beta-VAE would be a lot clearer on the dSprites dataset (https://github.com/deepmind/dsprites-dataset) (the authors call it 2D shapes). I would like to see latent traversals of the best DIP-VAE vs beta-VAE to demonstrate good disentangling and the improvements in reconstruction quality. This (and larger Table 5) would be better use of space compared to Table 2 and Figure 2 for example, which I feel are somewhat tangential to the main message of the paper and are better suited for the appendix. \n- I wonder how the authors calculated the disentanglement metric on CelebA, given that the ground truth attributes in the dataset are often rather qualitative (e.g. attractiveness), noisy (many can be considered an inaccurate description of the image), and often do not align with the data generative factors discoverable through unsupervised modeling of the data distribution\n- Table 3 - the legends for the axes are too small and are impossible to read. Also it would be helpful to normalise the scales of the heat plots in the second row.\n- Table 3 - looking at the correlations with the ground truth factors, it seems like beta-VAE did not actually disentangle the latents. Would be nice to see the corresponding latent traversal plots to ensure that the baseline is actually trained well. \n\nI am willing to increase my score for the paper if the authors can address my points. In particular I would like to see a clear comparison in terms of latent traversals on dSprites between beta-VAE and DIP-VAE models presented in Table 3. I would also like to see where these particular models lie in Figure 1.\n\n---------------------------\n---- UPDATE ----------\n---------------------------\nI have increased my score after reading the revised version of the manuscript.", "########## UPDATED AFTER AUTHOR RESPONSE ##########\n\nThanks for the good revision and response that addressed most of my concerns. I am bumping up my score. 
\n\n###############################################\n\n\nThis paper presents a Disentangled Inferred Prior (DIP-VAE) method for learning disentangled features from unlabeled observations following the VAE framework. The basic idea of DIP-VAE is to enforce the aggregated posterior q(z) = E_x [q(z | x)] to be close to an identity matrix as implied by the commonly chosen standard normal prior p(z). The authors propose to moment-match q(z) given it is hard to minimize the KL-divergence between q(z) and p(z). This leads to one additional term to the regular VAE objective (in two parts, on- and off-diagonal). It has the similar property as beta-VAE (Higgins et al. 2017) but without sacrificing the reconstruction quality. Empirically the authors demonstrate that DIP-VAE can effectively learn disentangled features, perform comparably better than beta-VAE and at the same time retain the reconstruction quality close to regular VAE (beta-VAE with beta = 1). \n\nThe paper is overall well-written with minor issues (listed below). I think the idea of enforcing an aggregated (marginalized) posterior q(z) to be close to the standard normal prior p(z) makes sense, as opposed to enforcing each individual posterior q(z|x) to be close to p(z) as (beta-)VAE objective suggests. I would like to make some connection to some work on understanding VAE objective (Hoffman & Johnson 2016, ELBO surgery: yet another way to carve up the variational evidence lower bound) where they derived something along the same line of an aggregated posterior q(z). In Hoffman & Johnson, it is shown that KL(q(z) | p(z)) is in fact buried in ELBO, and the inequality gap in Eq (3) is basically a mutual information term between z and n (the index of the data point). Similar observations have led to the development of VAMP-prior (Tomczak & Welling 2017, VAE with a VampPrior). Following the derivation in Hoffman & Johnson, DIP-VAE is basically adding a regularization parameter to the KL(q(z) | p(z)) term in standard ELBO. I think this interpretation is complementary to (and in my opinion, more clear than) the one that’s described in the paper. \n\nMy concerns are mostly regarding the empirical studies: \n\n1. One of my main concern is on the empirical results in Table 1. The disentanglement metric score for beta-VAE is suspiciously lower than what’s reported in Higgins et al., where they reported a 99.23% disentanglement metric score on 2D shape dataset. I understand the linear classier is different, but still the difference is too large to ignore. Hence my current more neutral review rating. \n\n2. Regarding the correlational plots (the bottom row of Table 3 and 4), I don’t think I can see any clear patterns (especially on CelebA). I wonder what’s the point of including them here and if there is a point, please explain them clearly in the paper. \n\n3. Figure 2 is also a little confusing to me. If I understand the procedure correctly, a good disentangled feature would imply smaller correlations to other features (i.e., the numbers in Figure 2 should be smaller for better disentangled features). However, looking at Figure 2 and many other plots in the appendix, I don’t think DIP-VAE has a clear win here. Is my understanding correct? If so, what exactly are you trying to convey in Figure 2? \n\nMinor comments: \n\n1. In Eq (6) I think there are typos in terms of the definition of Cov_q(z)(z)? It appears as only the second term in Eq (5). \n\n2. 
Hyperparameter subsection in section 3: Shouldn’t \\lambda_od be larger if the entanglement is mainly reflected in the off-diagonal entries? Why the opposite? \n\n3. Can you elaborate on how a running estimate of Cov_p(x)(\\mu(x)) is maintained (following Eq (6)). It’s not very clear at the current state of the paper. \n\n4. Can we have error bars in Table 2? Some of the numbers are possibly hitting the error floor. \n\n5. Table 5 and 6 are not very necessary, unless there is a clear point. ", "Thanks for taking time to look at the revised version! ", "Following the response and revision, I am bumping up the score. \n\nRegarding my comment about the \"inequality gap in Eq (3)\", I indeed meant the gap between KL(q(z|x) || p(z)) and KL(q(z) || p(z)), not Eq (3). ", "Thank you for taking time to look at the revised version! We added the following text at the end of Sec 3 discussing the scenario you mentioned: \n\"Further, a low SAP score does not rule out good disentanglement in cases when two (or more) latent dimensions might be correlated strongly with the same generative factor and poorly with other generative factors. The generated examples using single latent traversals may not be realistic for such models, and DIP-VAE discourages this from happening by enforcing decorrelation of the latents. However, the SAP score computation can be adapted to such cases by grouping the latent dimensions based on correlations and getting the score matrix at group level, which can be fed as input to the second step to get the final SAP score. \"\n\nWe have also updated the dSprites reference. Thanks!", "Thank you for submitting the revised version of your manuscript. I am happy to increase my score given the new SAP metric and the improved plots.\n\nMinor final comments:\n\n1) It would be good to include a short discussion at the end of the newly added SAP metric section that talks about how this metric might apply to 'distributed disentangled representations', where a single latent factor might be encoded by an independent set of latents units (e.g. if latent units z_{1-3} encode position x, units z_{4-6} encode position y etc).\n\n2) It might be good to update the reference to the dSprites dataset to the one suggested here: https://github.com/deepmind/dsprites-dataset\n\nThank you!\n\n", "Please take a look at the revised version for Eq 6 and another variant. \n\nInfoGAN does not have a trained inference mechanism for real observations. Though it has a network Q (shared with discriminator) that implicitly minimizes E_{x~G(z)} KL(p(z|x) || q(z||x)), this is targeted towards inference for fake examples from the generator. [Higgins et al, 2017] compare \\beta-VAE with InfoGAN finding that \\beta-VAE outperforms it, and one of the reasons could be the lack of a true inference model. However, your point regarding continuous vs discrete latents is valid to some extent as reparameterization trick doesn't work with discrete variables. One can use Gumbel-Softmax / Concrete distribution to approximate a discrete distribution for which reparameterization can be used but we don't explore this in our current work. ", "Thank you for the careful reading of the paper and thoughtful comments! \n\n[[Lower bound to log p(x):]]\nThe proposed objective is indeed a lower bound to the evidence log p(x). However the proof is really trivial and doesn’t need to use Inequality (3). 
It can simply be shown by using nonnegativity of the distance between q(z) and p(z) (KL or any other divergence) -- since standard ELBO is a lower bound, subtracting any nonnegative quantity is also a lower bound. \n\n[[Optimizing objective (4):]]\nOptimizing (4) with KL(q(z)||p(z)) ( = E_{q(z)} log (q(z)/p(z))) is not tractable with sampling because of the \\log q(z) term (q(z)=E_{p(x)} q(z|x)). One possibility (as we discuss in the paper) is to use variational form of KL as first proposed in (Nguyen et al., 2010) and later in (Nowozin et al., 2016) for GANs, however it will involve training a separate “discriminator” that is expected to approximate the KL divergence at every iteration, which can be minimized using backprop. We have tried this recently and found that the covariance matrix of z~q(z) has significant off-diagonal entries (ie, it does not decorrelate the latents well). \n\n[[Structural properties / assumptions in the regularizer:]]\nThe proposed regularizer KL(q(z)||p(z)) is trying to match the inferred prior q(z) to the hypothesized prior p(z), which will automatically happen if p_\\theta(x) is close to p(x) (the model is good) and q(z|x) is close to p(z|x). We have also discussed this in the paragraph just after Eq (3). The regularizer is trying to enforce this explicitly which is totally natural (unless the hypothesized prior itself is too unreasonable). \n\n[[Relationship to NICE (Dinh et al, 2016):]]\nNICE is another framework to do density estimation with latent variables where encoder and decoder are exact inverses of each other by design (encoder maps from data distribution to a factored distribution of same dimensionality following a specific architecture that allows for easy inverse computation, and hence easy sampling as well as tractable maximum likelihood based learning). It can be compared/contrasted with the VAE which our proposed method is based upon -- eg, the reconstruction error is zero by design. We restrict ourselves to the VAE in this work. \n\n[[Comparison with InfoGAN:]]\nUnlike VAE, InfoGAN does not have a trained inference mechanism for real observations. Though it has a network Q (shared with discriminator) that implicitly minimizes E_{x~G(z)} KL(p(z|x) || q(z||x)), this is targeted towards inference for fake examples from the generator. [Higgins et al, 2017] compare \\beta-VAE with InfoGAN finding that \\beta-VAE outperforms and one of the reasons could be the lack of a true inference model. \n\n[[Inference in GANs:]]\nWe are aware of ALI [Dumoulin et al, ‎2017] and BiGAN [Donahue et al, 2017] which still suffer from bad reconstruction (D(E(x)) is far from x) as observed in [Kumar et al, 2017]. There is also a recent work [Arora et al, 2017] showing that the encoder in ALI/BiGAN may potentially learn non-informative codes. \n\n[Arora et al, 2017] Theoretical limitations of Encoder-Decoder GAN architectures, arXiv:1711.02651 2017\n[Dumoulin et al, ‎2017] Adversarially learned inference, ICLR 2017\n[Donahue et al, 2017] Adversarial feature learning, ICLR 2017\n[Kumar et al, 2017] Semi-supervised Learning with GANs: Manifold Invariance with Improved Inference, NIPS 2017\n", "Thank you for the careful reading of the paper and thoughtful comments! We were not aware of the work [Hoffman & Johnson 2016, ELBO Surgery] and it looks like the method can also be motivated from that perspective. 
However we are not sure about your note “the inequality gap in Eq (3) is basically a mutual information term between z and n (the index of the data point)” -- it looks like the gap between KL(q(z|x)||p(z)) and KL(q(z)||p(z)) is the mutual information term between z and n, whereas the Inequality (3) compares KL(q(z|x)||p(z|x)) and KL(q(z)||p(z)). \n\n[[Disentanglement metric score on \\beta-VAE:]]\nWe found out that \\beta-VAE for \\beta=60 was not trained to convergence which now gives the best metric score for \\beta-VAE on 2DShapes data (95.7%). We have updated the Table 1 and Figure 1 with new results. [Higgins et al, 2017] report 99.23% score on 2D shape which is close to what we get. Apart from the linear classifier, the difference could also be due to the evaluation protocol where in [Higgins et al, 2017] that trained 30 \\beta-VAE models with different random seeds and “discarded the bottom 50% of the thirty resulting scores and reported the remaining results” (quoting verbatim from [Higgins et al, 2017]). We also discovered in this duration that the metric proposed in [Higgins et al, 2017] is not a good indicator of disentanglement seen in the latent traversal plots (ie, decoder’s output by varying one latent while fixing others). We added a short section (Sec 3) on the new metric we propose (referred as Separated Attribute Predictability or SAP score) which is much better aligned with the subjective disentanglement we see in the latent traversals. We have also added plots for SAP score vs reconstruction error (Fig 1 and 2). \n\n[[Correlation plots:]]\nWe agree that these plots were not conveying any insights or quantitative measure for disentanglement. We have omitted them in the revised version. \n\n[[Fig 2 in the submitted version:]]\nAs CelebA dataset has many ground truth attributes which are correlated with each other, it is not possible to infer different dimensions of latents capturing these (at least with the current approaches). Through this plot we were trying to show that the top attributes corresponding to a given dimension are semantically more similar for our method compared to the baselines. As you rightly noticed this is a subjective question so we have omitted these plots in the revised version.\n\n[[Cov_{q(z)}[z] in Eq 6:]]\nThe first term in Cov_{q(z)}[z] in Eq 5 is a diagonal matrix (expectation of variance of variational posterior, which is a Gaussian with diagonal covariance) and contributes only to the variances of z~q(z), so in the regularizer we had considered only the second term Cov_{p(x)} [\\mu(x)] which is a dense square matrix. However we have now included another variant (DIP-VAE-II) where the regularizer uses complete Cov_{q(z)}[z]. This actually provides better results on 2D Shapes data. \n\n[[Hyperparameters \\lambda_od and \\lambda_d:]]\nWe have included a discussion on this in the paragraph after Eq 5. Essentially, penalizing the off-diagonal entries of Cov_{p(x)} [\\mu(x)] also ends up reducing the diagonals of this matrix as off-diagonal are really derived from the diagonals (product of square root of diagonals for each example followed by averaging over examples). Hence holding the diagonals to a fixed value was important. We found that \\lambda_d > \\lamda_od was better for decreasing the covariance without impacting the variance. 
\n\n\n\n\n[[Running estimate of Cov_p(x)(\\mu(x)):]]\nIf the estimate using the current minibatch is B and the previous cumulative estimate is C, we take a combination B + a*C with ‘a’ being the inertia parameter (0.95 or so) and then normalize by (1/1-a). C is treated as constant while backpropagating the gradients. \n", "Thank you for the careful reading of the paper and thoughtful comments! Your comments encouraged us to look carefully into the alignment of Disentanglement metric score of [Higgins et al, 2017] (which we refer to as “Z-diff score” in the revised version) with the quality of latent traversal plots. We found that it is not a reliable indicator of disentanglement, more so in the case of 2D Shapes (dsprites) data, and we were so far using the Z-diff score to pick the best model. We have taken this time to revise the paper as follows:\n\n[[Latent traversal plots, along with a new metric for measuring disentanglement -- SAP Score:]]\nWe added a small section (Sec 3) describing a novel metric for measuring disentanglement, referred as Separated Attribute Predictability (SAP) score. It works by fitting a slope/intercept (for regression) or a threshold (real number, for classification) to predict each of the known generative factors (or attributes) using each latent dimension individually. This gives us a matrix S of size (# latent-dims x # attributes) indicating goodness of linear fit of the generative factors to the individual latent dimensions. For each attribute (column of S) we take the difference of top two scores and average these for all attributes to get the final SAP score. We observe that this score is a much better indicator of disentanglement seen in the decoder’s output for the single latent traversals. It is also easier to compute and does not involve training any classifier. We present the plots of both Z-diff score and SAP score vs the reconstruction error in the revised version (Fig. 1 and 2), along with the latent traversal plots for 2D Shapes data (Fig. 3 and 4). \nIt is clear from Fig 3 (and from Fig 1 SAP scores vs reconstruction) that VAE is the worst in terms of disentanglement. \\beta-VAE (beta=4) improves over it for disentanglement but DIP-VAE-II (lambda=5) outperforms it with better disentanglement and reconstruction quality. \\beta-VAE (beta=60) provides even better disentanglement (Fig 3) but scale remains entangled with X-pos and Y-pos. DIP-VAE-II (lambda=500) yields less reconstruction error and better disentanglement in latent traversals with less entangling b/w scale and Y-pos/ X-pos. \n\n[[Two variants of DIP-VAE:]] \nWe have added another variant (DIP-VAE-II, Eq. 7) in the revised version that yields better SAP-score / reconstruction-error tradeoff (and latent traversals) for 2D Shapes data (please see Fig 1). Note that in terms of Z-diff score / reconstruction error trade-off, DIP-VAE-I is still better than DIP-VAE-II for 2D Shapes data, which led us to conclude that Z-diff score is not a good metric for disentanglement.\n\n[[Disentanglement metric (Z-diff score) for CelebA:]]\nWe compute this score for CelebA in the same manner as for 2D Shapes. 
Although some of the CelebA attributes cannot be taken as “generative factors”, we still observe that disentanglement metric scores are correlated well with the disentanglement seen in the latent traversal plots.\n\n[[Correlation plots in the submitted version:]]\nThe label-correlation plots shown in the submitted version for \\beta-VAE (beta=4 and 20) were for the fully-trained models after convergence (as can be seen in Fig 3, the generated images from the decoder are good). For these \\beta values, the latent dimensions are indeed entangled as also reflected in the SAP score (Fig 1 in the revised version) and in the latent traversal plots (Fig 3 in the revised version). However, we later found out that in the submitted version, \\beta-VAE for \\beta=60 was not trained to convergence. We have fixed it in the revised version and \\beta=60 gives the best SAP and Z-diff scores for \\beta-VAE along with good disentanglement in the latent traversals (Fig 3), although with bad reconstruction quality. We have omitted correlation plots in the revised version as SAP score already captures it as the explained variance of linear regression fit. \n", "I'm wondering why you have omitted the \\Sigma_phi(x) term in equation (6). Is this a typo?\n\nMoreover, isn't it worth comparing to InfoGAN (Chen et al, 2016), which seems to be competitive against beta-VAE at least for cases where you have both discrete and continuous factors of variation? As far as I'm aware, InfoGANs are designed to work with joint continuous and discrete latents, whereas beta-VAE only uses continuous latents, so it is less well-suited for capturing discrete factors of variation. You seem to use binary attributes in the celebA dataset for computing the measure of disentanglement (bangs/male/beard/hat)." ]
[ 6, 7, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 5, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_H1kG7GZAW", "iclr_2018_H1kG7GZAW", "iclr_2018_H1kG7GZAW", "SyAI1RfEM", "HyOiU1OXz", "BJdehEFXG", "Sy_I8yuXf", "SyEK7AHez", "SkF57lqez", "r1YT_Wtxf", "SydFTsFkG", "iclr_2018_H1kG7GZAW" ]
iclr_2018_rJNpifWAb
Flipout: Efficient Pseudo-Independent Weight Perturbations on Mini-Batches
Stochastic neural net weights are used in a variety of contexts, including regularization, Bayesian neural nets, exploration in reinforcement learning, and evolution strategies. Unfortunately, due to the large number of weights, all the examples in a mini-batch typically share the same weight perturbation, thereby limiting the variance reduction effect of large mini-batches. We introduce flipout, an efficient method for decorrelating the gradients within a mini-batch by implicitly sampling pseudo-independent weight perturbations for each example. Empirically, flipout achieves the ideal linear variance reduction for fully connected networks, convolutional networks, and RNNs. We find significant speedups in training neural networks with multiplicative Gaussian perturbations. We show that flipout is effective at regularizing LSTMs, and outperforms previous methods. Flipout also enables us to vectorize evolution strategies: in our experiments, a single GPU with flipout can handle the same throughput as at least 40 CPU cores using existing methods, equivalent to a factor-of-4 cost reduction on Amazon Web Services.
accepted-poster-papers
Thank you for submitting your paper to ICLR. The idea is simple, but easy to implement and effective. The paper examines the performance fairly thoroughly across a number of different scenarios, showing that the method consistently reduces variance. How this translates into final performance is complex, of course, but faster convergence is demonstrated and the revised experiments in Table 2 show that it can lead to improvements in accuracy.
train
[ "HyQ2gfD4G", "B1NpHn8EG", "rkLiPl9xz", "rknUpWqgz", "Hkh0HMjgM", "H1MugXp7M", "ByVfb76Qz", "ryxgaz6Qf", "ryqxiGTmG" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "Thank you for your comment.\n\nVariance reduction is a central issue in stochastic optimization, and countless papers have tried to address it. To summarize, lower variance enables faster convergence and hence improves the sample efficiency. We gave one reference above, but there are many more that we did not mention (a few more examples are [1-12]). Furthermore, the variance of the stochastic gradients is arguably the most serious problem facing policy gradient methods such as evolution strategies, and some fundamental algorithms like REINFORCE are essentially variance reduction methods. So hopefully the importance of variance reduction is clear.\n\nWe demonstrated consistently large variance reduction effects, and showed that in at least some cases, this leads to more efficient training (see Figure 2.a). In addition: 1) we show in Table 2 that flipout applied to DropConnect outperforms all the other methods by a significant margin (73.02 test perplexity, compared to 75.31 for the next-best method); and 2) we show in Appendix E.4 that flipout applied to DropConnect converges faster than DropConnect with shared masks (which is currently the SOTA method).\n\nRegarding when our method helps: most straightforwardly, it helps when SGD is suffering from high estimation variance. This turned out to be the case for some of the BNNs we experimented with, as well as for ES (which is notoriously high-variance). As we’ve pointed out, estimation variance will become a more serious bottleneck as hardware trends favor increasing batch sizes. These factors give simple rules of thumb for when flipout will be useful.\n\n[1] Andrew C. Miller et al. Reducing reparameterization gradient variance. In NIPS, 2017.\n[2] Alberto Bietti and Julien Mairal. Stochastic optimization with variance reduction for infinite datasets with finite sum structure. In NIPS, 2017.\n[3] Sashank J. Reddi et al. Stochastic variance reduction for nonconvex optimization. In ICML, 2016.\n[4] Soham De, Gavin Taylor, and Tom Goldstein. Variance reduction for distributed stochastic gradient descent. arXiv preprint arXiv:1512.01708, 2015.\n[5] Aaron Defazio, Francis Bach, and Simon Lacoste-Julien. Saga: A fast incremental gradient method with support for non-strongly convex composite objectives. In NIPS, 2014.\n[6] Aaron Defazio et al. Finito: A faster, permutable incremental gradient method for big data problems. In ICML, 2014.\n[7] Reza Harikandeh et al. Stop wasting my gradients: Practical SVRG. In NIPS, 2015.\n[8] Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In NIPS, 2013.\n[9] Sashank J. Reddi et al. On variance reduction in stochastic gradient descent and its asynchronous variants. In NIPS, 2015.\n[10] Nicolas L Roux, Mark Schmidt, and Francis R Bach. A stochastic gradient method with an exponential convergence rate for finite training sets. In NIPS, 2012.\n[11] Chong Wang, Xi Chen, Alex J Smola, and Eric P Xing. Variance reduction for stochastic gradient optimization. In NIPS, 2013.\n[12] Lin Xiao and Tong Zhang. A proximal stochastic gradient method with progressive variance reduction. SIAM Journal on Optimization, 24(4):2057–2075, 2014.", "I thank the authors for their response. I am disappointed that their revised paper does not provide any further explanation of why reducing the variance of SGD matters. 
The explanation from the authors in their response is that:\n* \"it is well established\": citing only one paper is not convincing, and even if it were so clearly well-established, your duty to your readers is to make the article's motivation as self-contained as possible.\n* \"that's why we use minibatches larger than 1\" : there's a world of diminishing returns between \"larger than 1\", which is indeed extreme, and \"1000+\"\n* \"it is an orthogonal problem\": I beg to differ. Knowing the setup in which your method can help is very much aligned with developing the method.\n\nAs such, after revision, it is with regret that I maintain my rating of \"6: Marginally above acceptance threshold\".", "The paper is well written. The proposal is explained clearly. \nAlthough the technical contribution of this work is relevant for network learning, several key aspects are yet to be addressed thoroughly, particularly the experiments. \n\nWill there be any values of alpha, beta and gamma where eq(8) and eq(9) are equivalent. In other words, will it be the case that SharedPerturbation(alpha, beta, gamma, N) = Flipout(alpha1, beta1, gamma1, N1) for some choices of alpha, alpha1, beta, beta1, ...? This needs to be analyzed very thoroughly because some experiments seem to imply that Flip and NoFlip are giving same performance (Fig 2(b)). \nIt seems like small batch with shared perturbation should be similar to large batch with flipout? \nWill alpha and gamma depend on the depth of the network? Can we say anything about which networks are better? \nIt is clear that the perturbations E1 and E2 are to be uniform +/-1. Are there any benefits for choosing non-uniform sampling, and does the computational overhead of sampling them depend on the network depth/size. \n\nThe experiments seem to be inconclusive. \nFirstly, how would the proposed strategy work on standard vision problems including learning imagenet and cifar datasets (such experiments would put the proposal into perspective compared to dropout and residual net type procedures) ?\nSecondly, without confidence intervals (or significance tests of any kind), it is difficult to evaluate the goodness of Flipout vs. baselines, specifically in Figures 2(b,d). \nThirdly, it is known that small batch sizes give better performance guarantees than large ones, and so, what does Figure 1 really imply? (Needs more explanation here, relating back to description of alpha, beta and gamma; see above). \n", "Typical weight perturbation algorithms (as used for e.g. Regularization, Bayesian NN, Evolution\nStrategies) suffer from a high variance of the gradient estimates. This is caused\nby sharing a weight perturbation by all training examples in a minibatch. More specifically\nsharing perturbed weights over samples in a minibtach induces correlations between gradients of each sample, which can\nnot be resolved by standard averaging. The paper introduces a simple idea, flipout, to\nperturb the weights quasi-independently within a minibatch: a base perturbation (shared\nby all sample in a minibatch) is multiplied by a random rank-one sign matrix (different\nfor every sample). Due to its special structure it is possible to vectorize this\nper-sample-operation such that only matrix-matrix products (as in the default forward\npropagation) are involved. The incurred computational cost is roughly twice as much\nas a standard forward propagation path. 
The paper also proves that this approach\nreduces the variance of the gradient estimates (and in practice, flipout should\nobtain the ideal variance reduction). In a set of experiments it is demonstrated\nthat a significant reduction in gradient variance is achieved, resulting\nin speedups for training time. Additionally, it is demonstrated that\nflipout allows evolution strategies to utilize GPUs.\n\nOverall this is a very nice paper. It clearly lays out the problem, describes\none solution to it and shows both theoretically as well as empirically\nthat the proposed solution is a feasible one. Given the increasing importance\nof Bayesian NN and Evolution Strategies, flipout is an important contribution.\n\nQuality: Overall very well written. Relevant literature is covered and an important\nproblem of current research in ML is tackled.\n\nClarity: Ideas/Reasons are clearly presented.\n\nSignificance: The presented work is highly significant for practical applicability\nof Bayesian NN and Evolution Strategies.", "In this article, the authors offer a way to decrease the variance of the gradient estimation in the training of neural networks.\nThey start in the Introduction and Section 2 by explaining the multiple uses of random connection weights in deep learning and how the computational cost often restricts their use to a single randomly sampled set of weights per minibatch, which results in higher-variance gradient estimators than could be achieved otherwise. In Section 3 the authors offer to get the benefits of multiple weights without most of the cost, when the distribution of the weights is symmetric and fully factorized, by multiplying sampled-once random perturbations of the weights by a rank-1 random sign matrix. This efficient mechanism is only twice as costly as a single random perturbation, and the authors show how to efficiently parallelize it on GPUs, thereby also allowing GPU-ization of evolution strategies (something so far difficult to achieve). Of note, they provide a theoretical analysis in Section 3.2, proving the actual variance reduction of their efficient pseudo-sampling scheme. In Section 4 they provide quite varied empirical analysis: they confirm their theoretical results on four architectures; they show its use to regularise language models; they apply it on large minibatch settings where high variance is a main problem; and on evolution strategies.\n\nWhile it is a rather simple idea which could be summarised much earlier in the single equation (3), I really like the thoroughness and the clarity of the exposure of the idea. Too many papers in our community skimp on details and on formalism, and it is a delight to see things exposed so clearly -- even accompanied with a proof.\n\nHowever, the painful part: while I am convinced by the idea and love its detailed exposure, and the gradient variance reduction is made very clear, the experimental impact in terms of accuracy (or perplexity) is, sadly, not very convincing. Nowhere in the text did I find a clear rationale of why it is beneficial to reduce the variance of the gradient. The numerical results in Table 1 and Table 2 also do not show a clear improvement: Flipout does not provide the best accuracy. The gain in wall clock could be a factor, but would need to be measured on the figures more clearly. And the validation errors in Figure 2 for Evolution strategies seem to be worse than backprop. The main text itself also only claims performance “comparable to the other methods”. 
The only visible gain is on the lower part Figure 2.a on a ConvNet.\n\nThis makes me wonder if the authors could do a better job of putting forward the actual advantages of their methods on the end-results: could wall clock measure be put more forward, to justify the extra work? This would, in my mind, strongly improve the case for publication of this article.\n\n\nA few improvement suggestions:\n* Could put earlier more emphasis of superiority to Local Reparameterization Trick in terms of architecture, not wait until Section 2.2 and section 4.1\n*Should also put more emphasis on limitations, not wait until 3.1.\n* Proposition 1 is quite straightforward, not sure it deserves a proposition, but it’s elegant to put it forward.\n* Footnote 1 on re-using the matrices is indeed practical, but also somewhat surprising in terms of bias risks. Could it be explained in more depth, maybe by the random permutations of the minibatches making the bias non systematic and cancelling out?\n* Theorem 1: For readability could merge the expectations on the joint distribution as E_{x, \\hat \\delta W} , rather than separate expectations with the conditional distributions.\n* Theorem 1: could the authors provide a clearer intuitive explanation of the \\beta term alone, not only as part of \\alpha + \\beta, especially as it plays such a key role, being the only one that does not disappear? And how do they explain their empirical observation that \\beta is close to 0? Any intuition on that?\n* Experiments: I salute the authors for providing all the details in exhaustive manner in the Appendix. Very commendable.\n* Experiments: I like the empirical verification of the theory. Very neat to see.\n\nMinor typo:\n* page 2 last paragraph, “evolution strategies” is plural but the verbs are used in singular (“is black box”, “It doesn’t”, “generates”)\n", "Thank you for your careful and insightful feedback.\n\n-> Q: Why is it beneficial to reduce the variance of the gradient, flipout doesn’t provide the best accuracy, wall-clock time advantage?\n\nThe importance of variance reduction in SGD is well-established. Variance reduction is the whole reason for using batch sizes larger than 1, and some well-known works [1] have found that with careful implementation, the variance-reducing effect of large batches translates into a linear optimization speedup. Whether this relationship holds in a particular case depends on a whole host of configuration details which are orthogonal to our paper; e.g. the aforementioned paper had to choose careful initializations and learning rate schedules in order to achieve it. Still, we can confidently say there is no hope for unlocking the optimization benefits of large batches unless one uses some scheme (such as flipout) that enables variance reduction.\n\nNote that we do in fact observe significant optimization benefits (flipout converges 3X faster in terms of iterations), as shown in Figure 2(a). Additionally, although flipout is 2X more computationally expensive in theory, it can be implemented more efficiently in practice. For example, we can send each of the two matmul calls to a separate TPU chip, so they are done in parallel. 
Communication will add overhead but it shouldn’t be much, as it is only two chips communicating and the messages are matrices of size [batch_size, hidden_size] rather than the set of full weights.\n\nWith respect to the advantages offered by flipout for training LSTMs, we have conducted new experiments to compare regularization methods, and have updated Section 4.2 and the results in Table 2. We found that using flipout to implement DropConnect for recurrent regularization yields strong results, and significantly outperforms the other methods in both validation and test perplexity. For our original word-level LSTM experiments, we used the setup of [2], with a fixed learning schedule that decays the learning rate by a factor of 1.2 each epoch starting after epoch 6. In our new experiments, we decay the learning rate by a factor of 4 based on the nonmonotonic criterion introduced in [3]; the perplexities of all methods except the unregularized LSTM are reduced compared to the previous experiments. Using flipout to implement DropConnect allows us to use a different DropConnect mask per example in a batch efficiently (compared to [3], which shares the weights between all examples).\n\nWe also added Appendix E.4, which shows that using flipout with DropConnect yields significant variance reduction and faster training compared to using a shared DropConnect mask for all examples (as is done in [3]).\n\n-> Q: ES seems to be worse than backprop.\n\nWe’re not advocating for ES to replace backprop. The main comparison in this section is between NaiveES and FlipES; we show that FlipES behaves like NaiveES, but is more efficient due to parallelism. Our reason for including the backprop comparison is to show that this is an interesting regime to investigate. One might have thought that ES would hopelessly underperform backprop (since the latter uses gradients), but in fact FlipES turns out to be competitive. The reason this result is interesting is that unlike backprop, ES can also be applied to non-differentiable models.\n\n-> Q: Footnote 1 with bias risk.\n\nThe trick in Footnote 1 does not introduce any bias. Proposition 1 implies that the gradients are unbiased for any distribution over E which is independent of Delta W. This applies in particular to deterministic E (which is trivially independent), so E can be fixed throughout training. Such a scheme may not achieve the full variance reduction, but it is at least unbiased. Note that we do not use this trick in our experiments.\n\n-> Q: Why is it close to 0 in practice, intuitive explanation of beta term?\n\nBeta is the estimation variance when E is marginalized out. We’d expect this term to be much smaller than the full variance because it’s marginalizing over a symmetric perturbation distribution, so the perturbations in opposite directions should cancel. The finding that it was so close to zero was a pleasant surprise.\n\nWe also thank the reviewer for the suggestions for improvement. We will revise the final version to take them into account.\n\n\n[1] Goyal, Priya, et al. \"Accurate, large minibatch SGD: Training ImageNet in 1 hour.\" arXiv preprint arXiv:1706.02677 (2017)\n[2] Yarin Gal and Zoubin Ghahramani. A theoretically grounded application of dropout in recurrent\nneural networks. In Advances in Neural Information Processing Systems (NIPS), pp. 1019–1027, (2016).\n[3] Merity, Stephen, Keskar, Nitish S., and Socher, Richard. 
\"Regularizing and optimizing LSTM language models.\" arXiv preprint arXiv:1708.02182 (2017).", "Thank you for your careful and insightful feedback.\n\n--> Q: Will there be any values of alpha, beta and gamma where eq(8) and eq(9) are equivalent? Will alpha and gamma depend on the depth of the network? Can we say anything about which networks are better?\n\nMathematically, eqns. (8) and (9) are equivalent if alpha = 0, and they are nearly identical if beta or gamma dominates. However, we did not observe any examples of either case in any of our experiments. (In fact, beta was indistinguishable from 0 in all of our experiments.) Based on Figure 1, the values seem fairly consistent between very different architectures.\n\n--> Q: FlipES doesn’t outperform NaiveES in Figure 2.\n\nHere, NaiveES refers to fully independent perturbations, rather than a single shared perturbation. Hence, it is an upper bound on how well FlipES should perform, and indeed FlipES achieves this with a much faster wall-clock time (see Fig. 5 in the appendix, cpuES corresponds to noFlip). For clarity, we will rename NaiveES to be IdealES in the final version.\n\n--> Q: Shared perturbation with small batch should be similar to large batch with flipout?\n\nThe reason to train with large batches is to take advantage of parallelism. (If we only cared about the number of arithmetic operations, we’d all use batch size 1.) The size of this benefit depends on the hardware, and hardware trends (more GPU cores, TPUs) strongly favor increasing batch sizes. Currently, one may sometimes be able to use small batches to compensate for the inefficiency of shared perturbations, but this is a band-aid which won’t remain competitive much longer.\n\n--> Q: Can we use non-uniform E1, E2 and does the computational overhead of sampling depend on the network depth?\n\nYes, Proposition 1 certainly allows for non-uniform E1 and E2, although the advantage of this is unclear. In principle, sampling E1 and E2 ought not to be very expensive compared to the matrix multiplications. However, the overhead can be significant if the framework implements it inefficiently; in this case, one can use the trick in Footnote 1.\n\n--> Q: Will the proposed strategy work on standard vision problems including ImageNet and CIFAR?\n\nOur experiments include CIFAR-10, and we see no reason why flipout shouldn’t work on ImageNet. Weight perturbations are not currently widely used in vision tasks, but if that changes, flipout ought to be directly applicable. Our experiments focus on Bayesian neural nets and ES, which inherently require weight perturbations.\n\nAdditionally, it was shown that DropConnect (which is a special case of weight perturbation, as we show in Sec. 2.1) regularizes LSTM-based word language models and achieves SOTA on several tasks [1]. Flipout can be directly applied to it, and we show in Appendix E.4 that flipout reduces the stochastic gradient variance compared to [1].\n\n--> Q: Small batch sizes give better performance, what does Fig. 1 imply?\n\nWe’re not sure what you mean by this. Due to the variance reduction effects of large batches, one typically uses as large a batch as will fit on the GPU, and sometimes resorts to distributed training in order to use even larger batches. (A batch size of 1 is optimal if you only count the number of iterations, but this isn’t a realistic model, even on a single CPU.)\n\n[1] Merity, Stephen, Keskar, Nitish S., and Socher, Richard. 
\"Regularizing and optimizing LSTM language models.\" arXiv preprint arXiv:1708.02182 (2017).", "We thank the reviewers for their helpful comments.\n\nWe updated the regularization experiments in Section 4.2, including the results in Table 2. We show that flipout applied to DropConnect outperforms all the other methods we investigated.\n\nWe also added Appendix E.4, in which we show that for large-batch LSTM training: 1) using DropConnect with flipout achieves significant variance reduction compared to using a shared DropConnect mask for all examples; and 2) DropConnect with flipout converges faster than DropConnect with shared masks, showcasing the optimization benefits of using flipout.", "Thank you for your positive comments and for recognizing the work!" ]
[ -1, -1, 6, 8, 6, -1, -1, -1, -1 ]
[ -1, -1, 4, 3, 4, -1, -1, -1, -1 ]
[ "B1NpHn8EG", "H1MugXp7M", "iclr_2018_rJNpifWAb", "iclr_2018_rJNpifWAb", "iclr_2018_rJNpifWAb", "Hkh0HMjgM", "rkLiPl9xz", "iclr_2018_rJNpifWAb", "rknUpWqgz" ]
iclr_2018_r1l4eQW0Z
Kernel Implicit Variational Inference
Recent progress in variational inference has paid much attention to the flexibility of variational posteriors. One promising direction is to use implicit distributions, i.e., distributions without tractable densities as the variational posterior. However, existing methods on implicit posteriors still face challenges of noisy estimation and computational infeasibility when applied to models with high-dimensional latent variables. In this paper, we present a new approach named Kernel Implicit Variational Inference that addresses these challenges. As far as we know, for the first time implicit variational inference is successfully applied to Bayesian neural networks, which shows promising results on both regression and classification tasks.
accepted-poster-papers
Thank you for submitting your paper to ICLR. This paper was enhanced noticeably in the rebuttal period and two of the reviewers improved their scores as a result. There is a good range of experimental work on a number of different tasks. The addition of the comparison with Liu & Feng, 2016 to the appendix was sensible. Please make sure that the general conclusions drawn from this are explained in the main text and also the differences to Tran et al., 2017 (i.e. that the original model can also be implicit in this case).
train
[ "Hk8v9J5eM", "rkzduZIyf", "S1jTB1PlM", "SkExYk0QG", "BkyjKpa7M", "S1wm3mKmf", "S1UY9z47G", "HkKijfE7G", "rJVK3zEmf", "Hkmf7Nukz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author" ]
[ "Update: I read the other reviews and the authors' rebuttal. Thanks to the authors for clarifying some details. I'm still against the paper being accepted. But I don't have a strong opinion and will not argue against so if other reviewers are willing. \n\n------\n\nThe authors propose Kernel Implicit VI, an algorithm allowing implicit distributions as the posterior approximation by employing kernel ridge regression to estimate a density ratio. Unlike current approaches with adversarial training, the authors argue this avoids the problems of noisy ratio estimation, as well as potentially high-dimensional inputs to the discriminator. The work has interesting ideas. Unfortunately, I'm not convinced that the method overcomes these difficulties as they argue in Sec 3.2.\n\nAn obvious difficulty with kernel ridge regression in practice is that its complete inaccuracy to estimate high-dimensional density ratios. This is especially the case given a limited number of samples from both p and q (which is the same problem as previous methods) as well as the RBF kernel. While the RBF kernel still takes the same high-dimensional inputs and does not involve learning massive sets of parameters, it also does not scale well at all for accurate estimation. This is the same problem as related approaches with Stein variational gradient descent; namely, it avoids minimax problems as in adversarial training by implicitly integrating over the discriminator function space using the kernel trick.\n\nThis flaw has rather deep implications. For example, my understanding of the implicit VI on the Bayesian neural network in Sec 4 is that it ends up as cross-entropy minimization subject to a poorly estimated KL regularizer. I'd like to see just how much entropy the implicit approximation has instead of concnetrating toward a point; or more directly, what the implicit posterior approximation looks like compared to a true posterior inferred by, say, HMC as the ground truth. This approach also faces difficulties that the naive Gaussian approximation applied to Bayesian neural nets does not: implicit approximations cannot exploit the local reparameterization trick and are therefore limited to specific architectures that does not involve sampling very large weight matrices.\n\nThe authors report variational lower bounds, which I'm not sure is really a lower bound. Namely, the bias incurred by the ratio estimation makes it difficult to compare numbers. An obvious but very illustrative experiment I'd like to see would be the accuracy of the KL estimator on problems where we can compute it tractably, or where we can Monte Carlo estimate it very well under complicated but tractable densities. I also suggest the authors perform the experiment suggested above with HMC as ground truth on a non-toy problem such as a fairly large Bayesian neural net.", "This paper presents Kernel Implicit Variational Inference (KIVI), a novel class of implicit variational distributions. KIVI relies on a kernel approximation to directly estimate the density ratio. Importantly, the optimal kernel approximation in KIVI has closed-form solution, which allows for faster training since it avoids gradient ascent steps that may soon get \"outdated\" as the optimization over the variational distribution runs. The paper presents experiments on a variety of scenarios to show the performance of KIVI.\n\nUp to my knowledge, the idea of estimating the density ratio using kernels is novel. 
I found it interesting, specially since there is a closed-form solution for this estimate. The closed form solution involves a matrix inversion, but this shouldn't be an issue, as the matrix size is controlled by the number of samples, which is a parameter that the practitioner can choose. I also found interesting the implicit MMNN architecture proposed in Section 4.\n\nThe experiments seem convincing too, although I believe the paper could probably be improved by comparing with other implicit VI methods, such as [Liu & Feng], [Tran et al.], or others.\n\nMy major criticism with the paper is the quality of the writing. I found quite a few errors in every page, which significantly affects readability. I strongly encourage the authors to carefully review the entire paper and search for typos, grammatical errors, unclear sentences, etc.\n\nPlease find below some further comments broken down by section.\n\nSection 1: In the introduction, it is unclear to me what \"protect these models\" means. Also, in the second paragraph, the authors talk about \"often leads to biased inference\". The concept to \"biased inference\" is unclear. Finally, the sentence \"the variational posterior we get in this way does not admit a tractable likelihood\" makes no sense to me; how can a posterior admit (or not admit) a likelihood?\n\nSection 3: The first paragraph of the KIVI section is also unclear to me. In Section 3.1, it looks like the cost function L(\\hat(r)) is different from the loss in Eq. 1, so it should have a different notation. In Eq. 4, I found it confusing whether L(r)=J(r). Also, it would be nice to include a brief description of why the expectation in Eq. 4 is taken w.r.t. p(z) instead of q(z), for those readers who are less familiar with [Kanamori et al.]. Finally, the motivation behind the \"reverse ratio trick\" was unclear to me (the trick is clear, but I didn't fully understand why it's needed).\n\nSection 4: The first paragraph of the example can be improved with a brief discussion of why the methods of [Mescheder et al.] and [Song et al.] \"are nor applicable\". Also, the paragraph above Eq. 11 (\"When modeling a matrix...\") was unclear to me.\n\nSection 6: In Figure 1(a), I think there must be something wrong, because it is well-known that VI tends to cover one of the modes of the posterior only due to the form of the KL divergence (in contrast to EP, which should look like the curve in the figure). Additionally, Figure 3(a) (and the explanation in the text) was unclear to me. Finally, I disagree with the discussion regarding overfitting in Figure 3(b): that plot doesn't show overfitting because it is a plot of the training loss (and overfitting occurs on test); instead it looks like an optimization issue that makes the bound decrease.\n\n\n**** EDITS AFTER AUTHORS' REBUTTAL ****\n\nI increased the rating to 7 after reading the revised version.\n", "Thank you for the feedback, and I think many of my concerns have been addressed.\n\nI think the paper should be accepted.\n\n==== original review ====\n\nThank you for an interesting read. \n\nApproximate inference with implicit distribution has been a recent focus of the research since late 2016. I have seen several papers simultaneously proposing the density ratio estimation idea using GAN approach. This paper, although still doing density ratio estimation, uses kernel estimators instead and thus avoids the usage of discriminators. 
\n\nFurthermore, the paper proposed a new type of implicit posterior approximation which uses intuitions from matrix factorisation. I do think that another big challenge that we need to address is the construction of good implicit approximations, which is not well studied in previous literature (although this is a very new topic). This paper provides a good start in this direction.\n\nHowever several points need to be clarified and improved:\n1. There are other ways to do implicit posterior inference such as amortising deterministic/stochastic dynamics, and approximating the gradient updates of VI. Please check the literature.\n2. For kernel based density ratio estimation methods, you probably need to cite a bunch of Sugiyama papers besides (Kanamori et al. 2009). \n3. Why do you need to introduce both regression under p and q (the reverse ratio trick)? I didn't see if you have comparisons between the two. From my perspective the reverse ratio trick version is naturally more suitable to VI.\n4. Do you have any speed and numerical issues on differentiating through alpha (which requires differentiating K^{-1})?\n5. For kernel methods, kernel parameters and lambda are key to performances. How did you tune them?\n6. For the celebA part, can you compute some quantitative metric, e.g inception score?\n", "We thank the reviewers for all the comments and questions. Here we summarize the major changes made in the revision.\n\n* We revised the statements of motivations and contributions in Section 1-3 to make them clearer.\n* We added Appendix F.4 to compare the true KL term with the estimated KL term under “complicated but tractable densities” (normalizing flows).\n* Comparisons with KSD VI (Liu & Feng, 2016) and HMC are added to Appendix F.2.\n* We added Appendix F.3 to visualize the posterior approximation by KIVI and compare with HMC and VI with naive Gaussian posteriors.\n* We revised the reverse ratio trick part and added a comparison between estimation with and without the trick in Appendix F.1.\n* The related work section is extended to include the works pointed out by AnonReviewer3.\n* In Section 6.3, we added quantitative evaluation for CelebA using the recently developed Fréchet Inception Distance (FID) (Heusel et al., 2017).\n* We corrected the error in Figure 1(a).\n* We fixed other typos, grammar errors, and unclear sentences.\n", "Thank you for the updated review. We answer the further questions below.\n\nQ: \"Note that the ratio approximation ...\" is not clear:\nA: This sentence has the same meaning as the one below Eq. 7, by which we mean that the true gradients of the KL term w.r.t. $\\phi$ do not flow through the density ratio function, so we could replace the ratio function with its estimate who has zero gradients w.r.t. $\\phi$. We made it clearer in the paper.\n\nQ: Other typos:\nA: We uploaded a new revision correcting them.", "Thank you for revising the paper; I think it is much clearer now. The issues in my initial review have been appropriately taken care of.\n\nThere are still some typos here and there, and I would recommend the authors to carefully revise the paper again. Some examples are:\n\nSection 2:\n. there has been some works --> there HAVE\n. Note that the ratio approximation ... --> this sentence is unclear to me, do you mean that the gradient of the ratio approximation is zero once the approximation is accurate? Same comment in Sec 3.1, below Eq. 7.\n. doesn't --> does not (same goes for it's, won't, etc., in other sections)\n\nSection 3:\n. 
Why does notation change from q_phi(z) to just q(z)?\n. Substitute Eq. (5) --> SubstitutING Eq. (5) and SETTING the derivatives ... TO zero, we ...", "Thank you for the insightful comments and we have included further experiments to investigate the questions raised. We have revised the paper to include the analysis.\n\nQ1: Inaccuracy of kernel regression in high dimensions & not convinced that KIVI overcomes the difficulties:\n\nFirst, we have to emphasize that implicit VI is surely a much harder problem than VI with a common variational posterior (e.g., Gaussian), due to the lack of a tractable density for variational posterior q. Given the limited number of samples from q per iteration, if no additional knowledge is available, almost all implicit VI methods as well as nonparametric methods (e.g., SVGD) suffer to some degree in high dimensions, as agreed by the reviewer. However, as we extensively investigated in experiments, though not fully addressed all the challenges, KIVI can outperform existing strong competitors to get state-of-the-art performance. We think this is a valuable contribution to variational inference.\n\nBelow, we further clarify our contributions. We have also revised the two challenges in Section 2 and the statements of contributions in the paper to make them clearer.\n\n1) For the noisy estimation, we focused on the variance introduced in discriminator-based methods. In fact, existing discriminator-based methods have been identified to have high variance (noisy), i.e., samples from the two distributions are easily discriminated, which indicates overfitting (Mescheder et al., 2017). This phenomenon is like the case when you push $\\lambda$ in KIVI to 0. We are not claiming high accuracy for estimation in high-dimensional spaces (In fact no implicit VI method can claim that with limited samples per iteration, as explained above). One main contribution of KIVI is to provide an explicit trade-off between bias and variance, since there was no principled way of doing so in discriminator-based methods. As a result, our algorithm can be rather stable (see Fig.2, right). It’s true that bringing down the variance requires to pay some bias in the gradient in general. However, as empirically shown in the experiments and also in the investigation of the learned posteriors (see the answer to Q3 below), we found that we still gain over previous VI methods, both in terms of accuracy and also the quality of uncertainty, which is highly non-trivial.\n\n2) For high-dimensional latent variables, the argument mainly focused on computation issues. The other main contribution of KIVI is to make implicit VI computationally FEASIBLE for models like moderate-sized BNNs. In the classification case, the weights are of tens of thousands of dimensions and can hardly be fed into neural nets, which renders discriminator-based approaches infeasible.\n\nFinally, we’d like to add a point that KIVI opens up the door for improving implicit VI methods. The view of kernel regression at least brings two possible directions: One is pointed out by the reviewer, the RBF kernel could be replaced by other kernels that are more suitable to the model here. The other is to improve the regression problem to utilize the geometry of the distribution. And the latter is actually an ongoing work of us.\n\nQ2: Accuracy of the KL estimator on problems where we can compute it tractably:\nThanks for the suggestion. We added Appendix F.4 to compare the true KL term with the estimated KL term. 
We used normalizing flow there as the “complicated but tractable densities”. We can see that the KL estimates closely track the ground truth, and are more accurate as the variational approximation improves over time.\n\nQ3: Quality of posterior approximation & comparison to HMC:\nWe added Appendix F.3 to visualize the posterior approximation by KIVI and compare with HMC and the VI with naive Gaussian posteriors. The quantitative results and settings of HMC are described in Appendix F.2. The main conclusion is that the VI with naive Gaussian posteriors leads to over-pruning problems. KIVI doesn’t have the problem, and retains a good amount of uncertainty compared to HMC.\n\nQ4: Cannot use local reparameterization trick:\nThis is a valid point. But the problem exists as long as we want to go beyond tractable variational posteriors (e.g., Gaussian). The results by naive Gaussian posteriors have been shown above, which has significant over-pruning problems. New difficulty introduced shouldn’t be the reason that we stick to the naive Gaussian approximation.\n\nQ5: Bias of lower bounds:\nThere are two places where we report lower bounds. In Figure 2 (right) the lower bounds are used only to show the stability of training. In Figure 3(b) lower bounds are plotted to show the overfitting problems. We argue that though the lower bounds have bias, their relative gap (the training/test gap) should be comparable. Moreover, in this case we have also evaluated the test log likelihoods using golden truths estimated by Annealed Importance Sampling (AIS). The results by AIS confirmed the conclusion that the KIVI-trained VAE less overfits.", "Thank you for the positive feedback. We address the individual questions below.\n\nQ1: Related works:\nThanks for the suggestion. We have cited a paper on amortizing the deterministic dynamics of SVGD (Liu & Feng, 2016). In the revision, we added two more recent papers on amortized MCMC (Li et al., 2017) and gradient estimators of implicit models (Li & Turner, 2017) in Section 5. We also added more content there to highlight the contributions that Sugiyama and his collaborators has made to density ratio estimation.\n\nQ2: On the reverse ratio trick:\nIn fact, we didn’t do regression under q. We only adopted the regression under p (the reverse ratio trick) in our experiments (See Algo. 1). And we have explained why the reverse ratio version is more suitable for VI in Section 3.1. In the revision, we further added a comparison between the two using the 2-D Bayesian logistic regression example in Appendix F.1, which shows that the trick is very essential for KIVI to work well.\n\nQ3: Speed and numerical issues on differentiating through alpha:\nBecause K is of size n_p x n_p (n_p is the number of samples), which is usually of tens or a hundred, the cost of differentiating through K^{-1} is not high. And we used the automatic differentiation in Tensorflow. We didn’t observe any numerical issues, as long as the regularization parameter isn’t extremely small, say, less than 1e-7.\n\nQ4: Tuning parameters:\nAs mentioned in Section 3.1, we selected the kernel bandwidth by the commonly used median heuristic, i.e., the kernel bandwidth is chosen as the median of pairwise distances between the samples.\n\nAs for lambda, it has clear meaning, which controls the balance between bias and variance. So a good criterion would be tuning it to achieve a good trade-off between the aggressiveness of the estimate and stability of training. 
In the toy experiments, we tuned lambda so that optimizing only the KL term will make the posterior samples more disperse like the prior. In most other experiments, lambda is set at 0.001 which has good performance, though it could be improved by cross-validation.\n\nQ5: Quantitative evaluation for CelebA:\nThanks for the suggestion. In fact, inception score is only suitable to natural image datasets like Cifar10 and ImageNet. Instead, we adopted a recently developed quantitative measure named Fréchet Inception Distance (FID) (Heusel et al., 2017), which improved the Inception score to use the statistics of real world samples. The scores achieved at epoch 25 by AVB and KIVI are 160 and 41 (smaller is better), respectively. We added these results in Section 6.3.", "Thank you for the detailed comments. We apologize for the typos and errors. We have corrected them and revised the unclear sentences. Below, we address the individual concerns.\n\nQ1: Comparisons with other implicit VI methods, such as [Liu & Feng], [Tran et al.], or others:\nThanks for the suggestion. In the revision, we added the comparison with (Liu & Feng, 2016) in Appendix F.2. Their approach is to directly minimize the kernel Stein discrepancy (KSD) between the variational posterior and the true posterior. Since KSD has been shown to be the magnitude of a functional gradient of KL divergence (Liu & Wang, 2016), all saddle points in the original problem of optimizing KL divergence will become local optima when optimizing KSD. In experiments we also found that KSD VI soon converges to local minima, where the performance is unsatisfying.\n\nFor (Tran et al., 2017), as it investigates both implicit models and implicit inference, the technique used is the joint-contrastive method, which is beyond our scope (only meaningful to use joint-contrastive when the model is also implicit). So the comparison is infeasible since we are only focusing on implicit inference. We have compared to other discriminator-based approaches in our experiments (e.g., prior-contrastive, AVB).\n\nQ2: Detailed comments by section:\nSection 1: We revised all the unclear statements. “biased inference” means the true posterior is far from the variational family when the family only includes factorized distributions. “admit a tractable likelihood” should be “have a tractable density”.\n\nSection 3: We revised the unclear statements. In Section 3.1, we cleaned the notations and added the description of why the expectation in Eq.4 is taken w.r.t. p(z). We also revised the reverse ratio trick part. A comparison between estimation with and without the trick is added to Appendix F.1.\n\nSection 4: The implicit distributions introduced in [Mescheder et al.] and [Song et al.] are not applicable because they are based on traditional fully-connected neural networks, which cannot afford a very large output space. However, this is indeed the case of the distribution over weights in a normal-size BNN. We made it clearer in the paper. The paragraph above the original Eq. (11) has been revised.\n\nSection 6: Thanks for pointing out the error in Figure 1(a). We have corrected it. VI with normal posterior indeed converges to a single mode. For Figure 3(a), we made it clearer and added more detailed descriptions to the posterior. Figure 3(b) did show overfitting, where we have plotted both the training and the test loss. The smaller their gap is, the less the model overfits. 
We added more descriptions in Section 6.3.", "Figure 1(a) in the toy experiment is incorrectly drawn and thus misinterpreted. The correct figure should be that the Gaussian posterior covers the left mode instead of being between the two modes, since it is initialized from left (see https://drive.google.com/file/d/1nJAVH2-Fl0P6ei-ZwBI3_Z6BvFFAYk9E/view?usp=sharing). The figure of KIVI is correct and we double-checked all the others. We sincerely apologize for this error and will fix it in the revision." ]
[ 5, 7, 7, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 3, 4, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_r1l4eQW0Z", "iclr_2018_r1l4eQW0Z", "iclr_2018_r1l4eQW0Z", "iclr_2018_r1l4eQW0Z", "S1wm3mKmf", "rJVK3zEmf", "Hk8v9J5eM", "S1jTB1PlM", "rkzduZIyf", "iclr_2018_r1l4eQW0Z" ]
iclr_2018_Skdvd2xAZ
A Scalable Laplace Approximation for Neural Networks
We leverage recent insights from second-order optimisation for neural networks to construct a Kronecker factored Laplace approximation to the posterior over the weights of a trained network. Our approximation requires no modification of the training procedure, enabling practitioners to estimate the uncertainty of their models currently used in production without having to retrain them. We extensively compare our method to using Dropout and a diagonal Laplace approximation for estimating the uncertainty of a network. We demonstrate that our Kronecker factored method leads to better uncertainty estimates on out-of-distribution data and is more robust to simple adversarial attacks. Our approach only requires calculating two square curvature factor matrices for each layer. Their size is equal to the respective square of the input and output size of the layer, making the method efficient both computationally and in terms of memory usage. We illustrate its scalability by applying it to a state-of-the-art convolutional network architecture.
accepted-poster-papers
This paper gives a scalable Laplace approximation which makes use of recently proposed Kronecker-factored approximations to the Gauss-Newton matrix. The approach seems sound and useful. While it is a rather natural extension of existing methods, it is well executed, and the ideas seem worth putting out there.
train
[ "HkLWn48Ef", "HJM4Z-IVz", "ByVD9ZdxG", "rJ_9qMuef", "rJ7aicdgM", "BJF6oL6mM", "S1FDsITXf", "r1Kqi8pQG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "Many thanks for writing your rebuttal and for adding experiments on variational inference with fully factorized posterior. I believe that these comparisons add value to the proposal, given that the proposed approach achieves better performance. I'm keen to raise my score due to that, although I still think that the novelty can be an issue. ", "Thank you for the new figures which along with those in the appendix clarify the significant role of regularization in the empirical use of the approximation.\n\nThank you also for your discussion of the Louizos and Welling approximation - the comparison to fully factored Gaussian instead is informative.\n\nOverall I will be keeping my score the same. ", "This paper proposes a novel scalable method for incorporating uncertainty estimate in neural networks, in addition to existing methods using, for example, variational inference and expectation propagation. The novelty is in extending the Laplace approximation introduced in MacKay (1992) using a Kronecker-factor approximation of the Hessian. The paper is well written and easy to follow. It provides extensive references to related works, and supports its claims with convincing experiments from different domains.\n\nPros:\n-A novel method in an important and interesting direction.\n-It is a prediction method, so can be applied on existing trained neural networks (however, see the first con).\n-Well-written with high clarity.\n-Extensive and convincing experiments.\n\nCons:\n-Although it is a predictive method, it's still worth discussing how this method relates to training. For example, I suspect it works better when the model is trained with second-order method, as the resulting Taylor approximation (eq. 2) of the log-likelihood function might have higher quality when both terms are explicitly used in optimisation.\n-The difference between using KFAC and KFRA is unclear, or should be better explained if they are identical in this context. Botev et al. 2017 reports they are slightly different in approximating the Gaussian Newton matrix.\n-Acronyms, even well-known, are better defined before using (e.g., EP, PSD).\n-Need more details of the optimisation method used in experiments, especially the last one.", "This paper proposes a Laplace approximation to approximate the posterior distribution over the parameters of deep networks. \n\nThe idea is interesting and the realization of the paper is good. The idea builds upon previous work in scalable Gauss-Newton methods for optimization in deep networks, notably Botev et al., ICML 2017. In this respect, I think that the novelty in the current submission is limited, as the approximation is essentially what proposed in Botev et al., ICML 2017. The Laplace approximation requires the Hessian of the posterior, so techniques developed for Gauss-Newton optimization can straightforwardly be applied to construct Laplace approximations.\n\nHaving said that, the experimental evaluation is quite interesting and in-depth. I think it would have been interesting to report comparisons with factorized variational inference (Graves, 2011) as it is a fairly standard and widely adopted in Bayesian deep learning. This would have been an interesting way to support the claims on the poor approximation offered by standard variational inference. \n\nI believe that the independence assumption across layers is a limiting factor of the proposed approximation strategy. 
Intuitively, changes in the weights in a given layer should affect the weights in other layers, so I would expect the posterior distribution over all the weights to reflect this through correlations across layers. I wonder how these results can be generalized to relax the independence assumption. \n\n", "This paper uses recent progress in the understanding and approximation of curvature matrices in neural networks to revisit a venerable area - that of Laplace approximations to neural network posteriors. The Laplace method requires two stages - 1) obtaining a point estimate of the parameters followed by 2) estimation of the curvature. Since 1) is close to common practice it raises the appealing possibility of adding 2) after the fact, although the prior may be difficult to interpret in this case. A pitfall is that the method needs the point estimate to fall in a locally quadratic bowl or to add regularisation to make this true. The necessary amount of regularisation can be large as reported in section 5.4.\n\nThe paper is generally well written. In particular the mathematical exposition attains good clarity. Much of the mathematical treatment of the curvature was already discussed by Martens and Grosse and Botev et al in previous works. The paper is generally well referenced. \n\nGiven the complexity of the method, I think it would have helped to submit the code in anonymized form at this point. There are also some experiments not there that would improve the contribution. Figure 1 should include a comparison to Hamiltonian Monte Carlo and the full Laplace approximation (It is not sufficient to point to experiments in Hernandez-Lobato and Adams 2015 with a different model/prior). The size of model and data would not be prohibitive for either of these methods in this instance. All that figure 1 shows at the moment is that the proposed approximation has smaller predictive variance than the fully diagonal variant of the method. \n\nIt would be interesting (but perhaps not essential) to compare the Laplace approximation to other scalable methods from the literature such as that of Louizos and Welling 2016 which also used matrix normal distributions. It is good that the paper includes a modern architecture with a more challenging dataset. It is a shame the method does not work better in this instance but the authors should not be penalized for reporting this. I think a paper on a probabilistic method should at some point evaluate log likelihood in a case where the test distribution is the same as the training distribution. This complements experiments where there is dataset shift and we wish to show robustness. I would be very interested to know how useful the implied marginal likelihoods of the approximation were, as suggested for further work in the conclusion.\n", "Thank you very much for your positive review, we have updated the manuscript to introduce all acronyms before using them and added details regarding the hyperparameters of the last experiment to the appendix.\n\nHow the optimisation method affects the Laplace approximation is a question that we believe is closely related to how the optimisation method affects generalisation. We therefore decided to simply go with an optimiser that is commonly used in practice to make our results relevant to those who might use our method, however we are definitely open to adding an empirical comparison with different optimisation methods to a camera-ready version of the paper. 
Answering this question in full generality seems like a very interesting, but challenging open research problem.", "Thank you very much for your thoughts and suggestions.\n\n\n>Given the complexity of the method, I think it would have helped to submit the code in anonymized form at this point.\n\nWe will make the code available after the review period for ICLR. It is unfortunately spread out across multiple repositories, which we haven't open sourced yet, in particular we have integrated the calculation of the curvature factors that would also be needed for KFAC/KFRA into a currently internal version of Lasagne, so it would have been tricky to ensure that everything is fully anonymised.\n\n\n> There are also some experiments not there that would improve the contribution. Figure 1 should include a comparison to Hamiltonian Monte Carlo and the full Laplace approximation\n\nThank you for pointing this out, we have added the corresponding figures to the manuscript and expanded the section. We have moved the figures of the unregularised Laplace approximations into the appendix and put figures for the regularised one into the main text, as they give a better fit to the HMC posterior.\n\n\n>It would be interesting (but perhaps not essential) to compare the Laplace approximation to other scalable methods from the literature such as that of Louizos and Welling 2016 which also used matrix normal distributions.\n\nWe have added a comparison to a fully factorised Gaussian approximation as in Graves (2011) and Blundell et al. (2015) as this was also suggested by Reviewer 2. We attempted to train a network with an approximate matrix normal posterior as in Louizos & Welling (2016) by parameterising the Cholesky factors of the two covariance matrices, as this would most closely correspond to how the posterior is approximated by our Laplace approximation. However, this led to poor classification accuracies and the authors confirmed that this approach wasn't successful for them either. They stated that instead the pseudo data ideas from the GP literature were crucial for the success of their method.", "Thank you very much for your comments and your review. We will address a few specific points that you raised in the following:\n\n\n>In this respect, I think that the novelty in the current submission is limited, as the approximation is essentially what proposed in Botev et al., ICML 2017. The Laplace approximation requires the Hessian of the posterior, so techniques developed for Gauss-Newton optimization can straightforwardly be applied to construct Laplace approximations.\n\nWe fully agree that, from a technical perspective, the approximation to the Hessian is not new and that once the two Kronecker factors are calculated it is relatively straightforward (in terms of implementation) to calculate the approximate predictive mean for a network. However, we do think that introducing these ideas from the optimisation literature to the Bayesian deep learning community, demonstrating how the Laplace approximation can be scaled to neural networks, is indeed a novel and valuable contribution (since the diagonal approximation is not sufficient as shown in our experiments and the full approximation is not feasible). 
The Laplace approximation fundamentally differs from the currently popular variational inference approaches in not requiring a modification to the training procedure, which is extremely useful for practitioners as they can simply apply it to their existing networks/do not need to do a full new hyperparameter search for optimising the parameters of an approximate posterior.\n\n\n>I think it would have been interesting to report comparisons with factorized variational inference (Graves, 2011) as it is a fairly standard and widely adopted in Bayesian deep learning. This would have been an interesting way to support the claims on the poor approximation offered by standard variational inference. \n\nWe have added this baseline to the 2nd and 3rd experiment, as this was also requested by Reviewer 1 (our original aim was to have a \"clean\" comparison that is independent of the optimisation objective/procedure by focusing on different prediction methods for an identical network). \n\n\n>I believe that the independence assumption across layers is a limiting factor of the proposed approximation strategy. Intuitively, changes in the weights in a given layer should affect the weights in other layers, so I would expect the posterior distribution over all the weights to reflect this through correlations across layers. I wonder how these results can be generalized to relax the independence assumption.\n\nThank you for this suggestion. Indeed, the layerwise blocks of the Fisher and Gauss-Newton are all Kronecker factored, so it should be possible to include the covariance of e.g. neighbouring layers in a computationally efficient way. In their work on KFAC, Martens & Grosse investigated such a tri-diagonal block approximation of the Fisher, however this only gave a minor improvement in performance over the block-diagonal approximation. Yet, since optimisation is a lot more time-critical, this could be worth investigating in the future for the Laplace approximation." ]
[ -1, -1, 9, 6, 6, -1, -1, -1 ]
[ -1, -1, 4, 4, 4, -1, -1, -1 ]
[ "r1Kqi8pQG", "S1FDsITXf", "iclr_2018_Skdvd2xAZ", "iclr_2018_Skdvd2xAZ", "iclr_2018_Skdvd2xAZ", "ByVD9ZdxG", "rJ7aicdgM", "rJ_9qMuef" ]
iclr_2018_B1IDRdeCW
The High-Dimensional Geometry of Binary Neural Networks
Recent research has shown that one can train a neural network with binary weights and activations at train time by augmenting the weights with a high-precision continuous latent variable that accumulates small changes from stochastic gradient descent. However, there is a dearth of work to explain why one can effectively capture the features in data with binary weights and activations. Our main result is that the neural networks with binary weights and activations trained using the method of Courbariaux, Hubara et al. (2016) work because of the high-dimensional geometry of binary vectors. In particular, the ideal continuous vectors that extract out features in the intermediate representations of these BNNs are well-approximated by binary vectors in the sense that dot products are approximately preserved. Compared to previous research that demonstrated good classification performance with BNNs, our work explains why these BNNs work in terms of HD geometry. Furthermore, the results and analysis used on BNNs are shown to generalize to neural networks with ternary weights and activations. Our theory serves as a foundation for understanding not only BNNs but a variety of methods that seek to compress traditional neural networks. Furthermore, a better understanding of multilayer binary neural networks serves as a starting point for generalizing BNNs to other neural network architectures such as recurrent neural networks.
accepted-poster-papers
This paper analyzes mathematically why weights of trained networks can be replaced with ternary weights without much loss in accuracy. Understanding this is an important problem, as binary or ternary weights can be much more efficient on limited hardware, and we've seen much empirical success of binarization schemes. This paper shows that the continuous angles and dot products are well approximated in the discretized network. The paper concludes with an input rotation trick to fix discretization failures in the first layer. Overall, the contribution seems substantial, and the reviewers haven't found any significant issues. One reviewer wasn't convinced of the problem's importance, but I disagree here. I think the paper will plausibly be helpful for guiding architectural and algorithmic decisions. I recommend acceptance.
train
[ "SkhtYrwef", "Ske7rLdeM", "ryDPFMKef", "rJNQu_zVf", "HJGnzbfVG", "B1kLURtzG", "rkeGHyKMG", "B1OG_Y_GM", "rkj_PefMG", "SyzXDgfff", "Sy1bHgMGf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "public", "author", "author", "public", "author", "author", "author" ]
[ "This paper investigates numerically and theoretically the reasons behind the empirical success of binarized neural networks. Specifically, they observe that:\n\n(1) The angle between continuous vectors sampled from a spherical symmetric distribution and their binarized version is relatively small in high dimensions (proven to be about 37 degrees when the dimension goes to infinity), and this demonstrated empirically to be true for the binarized weight matrices of a convenet.\n\n(2) Except the first layer, the dot product of weights*activations in each layer is highly correlated with the dot product of (binarized weights)*activations in each layer. There is also a strong correlation between (binarized weights)*activations and (binarized weights)*(binarized activations). This is claimed to entail that the continuous weights of the binarized neural net approximate the continuous weights of a non-binarized neural net trained in the same manner.\n\n(3) To correct the issue with the first layer in (2) it is suggested to use a random rotation, or simply use continues weights in that layer.\n\nThe first observation is interesting, is explained clearly and convincingly, and is novel to the best of my knowledge.\n\nThe second observation is much less clear to me. Specifically,\na.\tThe author claim that “A sufficient condition for \\delta u to be the same in both cases is L’(x = f(u)) ~ L’(x = g(u))”. However, I’m not sure if I see why this is true: in a binarized neural net, u also changes, since the previous layers are also binarized. \nb.\tRelated to the previous issue, it is not clear to me if in figure 3 and 5, did the authors binarize the activations of that specific layer or all the layers? If it is the first case, I would be interested to know the latter: It is possible that if all layers are binarized, then the differences between the binarized and non-binarized version become more amplified.\nc.\tFor BNNs, where both the weights and activations are binarized, shouldn’t we compare weights*activations to (binarized weights)*(binarized activations)?\nd.\tTo make sure, in figure 4, the permutation of the activations was randomized (independently) for each data sample? If not, then C is not proportional the identity matrix, as claimed in section 5.3.\ne.\tIt is not completely clear to me that batch-normalization takes care of the scale constant (if so, then why did XNOR-NET needed an additional scale constant?), perhaps this should be further clarified. \n\nThe third observation seems less useful to me. Though a random rotation may improve angle preservation in certain cases (as demonstrated in Figure 4), it may hurt classification performance (e.g., distinguishing between 6 and 9 in MNIST). Furthermore, since it uses non-binary operations, it is not clear if this rotation may have some benefits (in terms of resource efficiency) over simply keeping the input layer non-binarized.\n\nTo summarize, the first part is interesting and nice, the second part was not clear to me, and the last part does not seem very useful. \n\n%%% After Author's response %%%\na. My mistake. Perhaps it should be clarified in the text that u are the weights. I thought that g(u) is a forward propagation function, and therefore u is the neural input (i.e., pre-activation).\n\nFollowing the author's response and revisions, I have raised my grade.\n", "This paper presents three observations to understand binary network in Courbariaux, Hubara et al. (2016). My main concerns are on the usage of the given observations. 
\n\n1. Can the observations be used to explain more recent works?\n\nIndeed, Courbariaux, Hubara et al. (2016) is a good and pioneered work on the binary network. However, as the authors mentioned, there are more recent works which give better performance than this one. For example, we can use +1, 0, -1 to approximate the weights. Besides, [a] has also shown a carefully designed post-processing binary network can already give very good performance. So, how can the given observations be used to explain more recent works?\n\n2. How can the given observations be used to improve Courbariaux, Hubara et al. (2016)?\n\nThe authors call their findings theory. From this perspective, I wish to see more mathematical analysis rather than just doing experiments and showing some interesting observations. Besides, giving interesting observations is not good enough. I wish to see how they can be used to improve binary networks.\n\nReference\n[a]. Network sketching: exploiting binary structure in deep CNNs. CVPR 2017", "This paper tries to analyze the effectiveness of binary nets from a perspective originated from the angular perturbation that binarization process brings to the original weight vector. It further explains why binarization is able to preserve the model performance by analyzing the weight-activation dot product with \"Dot Product Proportionality Property.\" It also proposes \"Generalized Binarization Transformation\" for the first layer of a neural network.\n\nIn general, I think the paper is written clearly and in detail. Some typos and minor issues are listed in the \"Cons\" part below.\n\nPros:\nThe authors lead a very nice exploration into the binary nets in the paper, from the most basic analysis on the converging angle between original and binarized weight vectors, to how this convergence could affect the weight-activation dot product, to pointing out that binarization affects differently on the first layer. Many empirical and theoretical proofs are given, as well as some practical tricks that could be useful for diagnosing binary nets in the future.\n\nCons:\n* it seems that there are quite some typos in the paper, for example:\n 1. Section 1, in the second contribution, there are two \"then\"s.\n 2. Section 1, the citation format of \"Bengio et al. (2013)\" should be \"(Bengio et al. 2013)\".\n* Section 2, there is an ordering mistake in introducing Han et al.'s work, DeepComporession actually comes before the DSD. \n* Fig 2(c), the correlation between the theoretical expectation and angle distribution from (b) seems not very clear.\n* In appendix, Section 5.1, Lemma 1. Could you include some of the steps in getting g(\\row) to make it clearer? I think the length of the proof won't matter a lot since it is already in the appendix, but it makes the reader a lot easier to understand it.\n", "As far as your second comment is concerned, the possibility of scaling the continuous weights is already taken into account in the theory. First, the angle preservation property looks at the angle between the continuous weight and the binary weight, which is independent of the scaling of the continuous weight. Second, the dot product *proportionality* property notes that the continuous weight activation dot product is proportional to the binary weight activation dot product. If the continuous weights are scaled by a factor of two, that just changes this proportionality constant. 
\n\nJust to reiterate my point from my previous comment, keeping the learning rate fixed isn't running the network to convergence, so it doesn't make sense to have a fixed learning rate. \n\nIf you would like to pursue the weight distribution point further, you should demonstrate that your networks converge when you use a fixed learning rate (in particular, train the network with a fixed learning rate, save the parameters, and then reduce the learning rate, and show that the continuous weights don't change after running learning with the reduced learning rate, which I think is unlikely to be the case). Just as another important point, the network performance can stabilize while the weights are still not converged. This is a subtle point because small fluctuations around the weights may not change dot products substantially, but when the training is run for longer, these small fluctuations go away. \n\n\nFurthermore, it isn't clear to me that you can just arbitrarily scale the weights without changing the learning algorithm. This is because there are two scale parameters associated with the weights. First, after each gradient step update, the continuous weights are clipped to be between -1 and +1. So you could scale this clipping threshold. However, there is also the parameter where the backwards pass of the binarization function clips the gradient at -1 and +1 as well ( g(x) = x for |x|< 1, and sign(x) for |x|>=1). This function can't just be scaled in the same way because g(x) needs to match the forward function for large x, and f(x), the binarization function outputs +1 and -1. Scaling the threshold for g(x) breaks the relationship between f and g. \n\nLet me know what you think. ", "If the evidence your theory depends on can go away under different learning rate (a hyper-parameter) setting, I don't believe your theory can still stand. \n\nOr we can put it in another way , let's scale the continuous weights by a fixed number ( for example, 3.0), and we keep the binarization part same. In this way we still have same {-1,+1} weights, and the network will still work (may require some adjustment in learning rate). Clearly we can no longer claim the binarized weights are approximation of the continuous weights here.\n\nHope my input can help us gain better understanding of binarized neural network.\n\n", "Reviewer 2: Pursuant of your suggestions, I added a section that uses our approach to analyze ternary neural networks. The methods that were developed for binary neural networks generalize nicely to TNNs [although it is slightly more complicated as TNNs have an extra parameter in the quantization function]! Thanks for your push to address this subject. The new analysis is included in the latest draft of the paper. If you have other questions, comments, or suggestions, let me know!", "Hello, thanks for your interest in this work! We follow the method of the Courbariaux paper and use ADAM while simultaneously decaying the learning rate from 0.001 to 0.0000003 over 500 epochs of training. I am not sure what analysis you are using. I think that any optimization method that is looking at the continuous weights should decay the learning rate. The discrete nature of the forward function pass of the binarize function sets a scale for the size of the subsequent gradients going backward. This fixed size needs to be scaled down by a reduced learning rate to get to the best values of the continuous weights. 
", "The result shown in figure 6d is actually quite different from what we observed in experiments. Distribution of weights are actually highly dependent on the learning rate being used. ", "Reviewer 3: thank you for your detailed questions and suggestions for our paper. \n\na. In this argument, u is the weights, and f, g are the identity function / the pointwise binarize function. The earlier layers don’t impact the weights. I’m not sure I understand your comment. \nb. During training, all of the weights and activations are binarized [except the first layer activations which are the input, and the last layer weights which interface with the output]. We can extract out the values prior to binarization. In this sense, there isn’t any accumulation of errors. In other words, Figs 3 and 5 don’t reflect any accumulation of errors. \nc. The reason for only changing one of the binarization of the weights or activations is that corresponds to removing one of the binarize blocks. However, for the sake of completeness, I included this figure in the SI as well. \nd. The point of the permutation was to generate a distribution with the same marginal statistics, but with no correlational structure. Each set of activations were independently permuted [clarified in the paper]. \ne. Batch normalization subtracts the mean and divides by the standard deviation, then multiplies by a learnable constant and adds a learnable constant. So there is implicitly a learnable constant multiplying the result of the dot products. However, empirically, the learnable additive constant is zero, so the multiplicative constant is not necessary for all but the last layer at test time because the output of the batch norm layer is subsequently binarized with a threshold of zero. \n\nAs far as your question about rotation being a problem with MNIST, the suggestion of our paper is to apply a random rotation to the image as if it is a vector, not to rotate in image space. [Rotation in image space wouldn’t fix the problem because it preserves the correlations between neighboring pixels]. It is the same random rotation for all inputs [added a word to make this more clear]. In the case of MNIST, this is akin to the fact that most of the variance in the dataset happens in the middle of the image and there is almost no variance in the pixels on the edge. The rotation spreads the variance more evenly among the pixels. Of course one potential problem is that the convolutional structure is broken. \n\nOne other point as far as the last section is concerned, a number of papers have reported difficulties with the first layer, and we are the first (to my knowledge) to connect this issue to correlations in the data reducing the effective dimensionality of the input. Maybe it isn’t the most ground breaking point, but it seemed worth including to me. \n\nThanks again for your detailed comments. ", "Reviewer 2: thank you for your consideration of our paper. \n\n1. I agree that our observations are relevant to understanding the work that seeks to use a ternary representation instead of a binary one [or a higher order quantization]. I’m working on some additional experiments, but it requires a substantial amount of work so that will be included in the next revision if I can get it done in time. Thanks for your pointer to the network sketching paper - I think that our work has interesting connections to it. However, an analysis of this paper is outside the scope of this work. \n2. 
The goal of this paper is to explain why the Courbariaux paper worked as well as it did. I agree that it would be an interesting research direction to improve their work. \n\nAs far as your comment that the paper just does some experiments and presents observations, as the other reviewers have noted, the dotted lines in Fig 2b and 2c are theoretical predictions based on assuming a rotationally invariant distribution for the weights, [the proofs are in the SI], and the colored curves/points are the experimental results. There is a close correspondence between the theory and the experiments. \n\nMore broadly, I agree that it would be great if this analysis led to a technique for improving performance of binary neural networks. However, I believe that the results already paint an insightful picture for understanding binary neural networks that would be useful to share with the community. Regardless, thank you for your suggestions for further experiments, hopefully I can improve the paper to your liking. ", "Reviewer 1: thank you very much for your comments. \n\n* Typos fixed. \n* Ordering of the papers: the Han Deep Compression paper cites the Han Learning weights paper. [So that order is correct]. However, a citation to the DSD paper, which is more recent than those two earlier papers, is missing. I added a citation this to the paper. \n* Fig 2(b) shows the angle distributions separated by layer [and each layer has different dimensional vectors]. Each of these distributions is peaked with some standard deviation. The theory predicts that these distributions have standard deviation that scales as 1/sqrt(d). The plot in 2(c) shows the standard deviation as a function of dimension of the curves in 2(b) with a dotted line that corresponds to 1/sqrt(d) [on a log-log plot]. The dots fall roughly along this dotted line (especially for higher dimensions). For the sake of clarity, I added the zoomed in version of the plot in 2(b) in the appendix. If you have any suggestions for how to make this more clear, I am happy to fix it.\n* The proof for the pdf of g(rho) is a bit involved in the paper that I cited. I came up with a new proof that is quite a bit simpler and included it. " ]
[ 7, 4, 7, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 3, 4, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_B1IDRdeCW", "iclr_2018_B1IDRdeCW", "iclr_2018_B1IDRdeCW", "HJGnzbfVG", "rkeGHyKMG", "SyzXDgfff", "B1OG_Y_GM", "iclr_2018_B1IDRdeCW", "SkhtYrwef", "Ske7rLdeM", "ryDPFMKef" ]
iclr_2018_B1ae1lZRb
Apprentice: Using Knowledge Distillation Techniques To Improve Low-Precision Network Accuracy
Deep learning networks have achieved state-of-the-art accuracies on computer vision workloads like image classification and object detection. The performant systems, however, typically involve big models with numerous parameters. Once trained, a challenging aspect for such top performing models is deployment on resource constrained inference systems -- the models (often deep networks or wide networks or both) are compute and memory intensive. Low precision numerics and model compression using knowledge distillation are popular techniques to lower both the compute requirements and memory footprint of these deployed models. In this paper, we study the combination of these two techniques and show that the performance of low precision networks can be significantly improved by using knowledge distillation techniques. We call our approach Apprentice and show state-of-the-art accuracies using ternary precision and 4-bit precision for many variants of ResNet architecture on ImageNet dataset. We study three schemes in which one can apply knowledge distillation techniques to various stages of the train-and-deploy pipeline.
accepted-poster-papers
Meta score: 7 The paper combines low precision computation with different approaches to teacher-student knowledge distillation. The experimentation is good, with good experimental analysis. Very clearly written. The main contribution is in the different forms of teacher-student training combined with low precision. Pros: - good practical contribution - good experiments - good analysis - well written Cons: - limited originality
train
[ "B1QhiA4eG", "rkPqK_tef", "Byv_pHRlM", "B1e0XDKXf", "H1qebLVff", "Hkg4HumZz", "HJ-vEuXWz", "r1n-7OXbG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author" ]
[ "The authors investigate knowledge distillation as a way to learn low precision networks. They propose three training schemes to train a low precision student network from a teacher network. They conduct experiments on ImageNet-1k with variants of ResNets and multiple low precision regimes and compare performance with previous works\n\nPros:\n(+) The paper is well written, the schemes are well explained\n(+) Ablations are thorough and comparisons are fair\nCons:\n(-) The gap with full precision models is still large \n(-) Transferability of the learned low precision models to other tasks is not discussed\n\nThe authors tackle a very important problem, the one of learning low precision models without comprosiming performance. For scheme-A, the authors show the performance of the student network under many low precision regimes and different depths of teacher networks. One observation not discussed by the authors is that the performance of the student network under each low precision regime doesn't improve with deeper teacher networks (see Table 1, 2 & 3). As a matter of fact, under some scenarios performance even decreases. \n\nThe authors do not discuss the gains of their best low-precision regime in terms of computation and memory.\n\nFinally, the true applications for models with a low memory footprint are not necessarily related to image classification models (e.g. ImageNet-1k). How good are the low-precision models trained by the authors at transferring to other tasks? Is it possible to transfer student-teacher training practices to other tasks?", "The paper aims at improving the accuracy of a low precision network based on knowledge distillation from a full-precision network. Instead of distillation from a pre-trained network, the paper proposes to train both teacher and student network jointly. The paper shows an interesting result that the distilled low precision network actually performs better than high precision network.\n\nI found the paper interesting but the contribution seems quite limited.\n\nPros:\n1. The paper is well written and easy to read.\n2. The paper reported some interesting result such as that the distilled low precision network actually performs better than high precision network, and that training jointly outperforms the traditional distillation method (fixing the teacher network) marginally.\n\nCons:\n1. The name Apprentice seems a bit confusing with apprenticeship learning.\n2. The experiments might be further improved by providing a systematic study about the effect of precisions in this work (e.g., producing more samples of precisions on activations and weights).\n3. It is unclear how the proposed method outperforms other methods based on fine-tuning. It is also quite possible that after fine-tuning the compressed model usually performs quite similarly to the original model.", "Summary:\nThe paper presents three different methods of training a low precision student network from a teacher network using knowledge distillation.\nScheme A consists of training a high precision teacher jointly with a low precision student. Scheme B is the traditional knowledge distillation method and Scheme C uses knowledge distillation for fine-tuning a low precision student which was pretrained in high precision mode.\n\nReview:\nThe paper is well written. 
The experiments are clear and the three different schemes provide good analytical insights.\nUsing scheme B and C student model with low precision could achieve accuracy close to teacher while compressing the model.\n\nComments:\nTensorflow citation is missing.\nConclusion is short and a few directions for future research would have been useful.", "Based on the feedback we are uploading an edited version of the paper. \nWe added a new \"Appendix section\" and most of the edits are included in this section at the end of the paper.\n\nWe attempted to add more sensitivity studies on precision vs. depth and an experimental evaluation suggested by reviewer#2.\nIn the interest of time (and the machine resources we have), for now, these studies are done on CIFAR-10 dataset. Experimenting with CIFAR-10 allowed us to do many more ablation studies. All these studies highlight the merits of Apprentice scheme (compression and low-precision).\n\nFor the next version, we will include similar studies on ImageNet-1K dataset (we have these experiments currently running but these runs will not finish by the rebuttal deadline). We will also include a better discussion of the benefits of low-precision in terms of savings in compute and memory resources as well as the impact of these savings in resources on inference speed (reviewer#3 suggestion). We (qualitatively) discussed some of these items in section-2 but will definitely expand the discussion in the next version.\n\nThank you again for the feedback. ", "Thank you for your response which has partially addressed my concerns. I think the paper quality could be improved further after the revision. So I have upgraded my review for the paper.", "Thank you for the thorough reviews. \n\nWe defend the \"Cons\" reviews below:\n\nGap from full precision: Agreed. However, compared to prior works the improvement is significant. For example, now with 8-bits activations and 4-bits weight, ResNet-34 without any change in network architecture is only 0.5% off from baseline full precision. This is currently the best Top-1 figure at this precision knob -- the best figure with prior techniques gave 3.3% degradation (so 2.8% improvement with our scheme). We believe that by making the models slightly larger (by 10% or so) we can close the gap between low-precision and full-precision networks -- this is our future work.\n\nTransferability: We believe, this aspect should not change with our scheme for low-precision settings -- we simply better the accuracy of a given network at low-precision (compared to prior proposals). However much the network was useful in transfer learning scenarios with low-precision tensors before, the network right now with our scheme would be similarly useful (if at all better when compared to prior works since we achieve a better accuracy with low-precision).\n\nYour question: Is it possible to transfer student-teacher training practices to other tasks?\nAlthough we did not focus on this aspect in this paper, we found the following 3 works (not an exhaustive list) that use the student-teacher training procedure for other deep-learning domains:\n\n1. Transferring Knowledge from a RNN to a DNN, William Chan et al., ArXiv pre-print, 2015.\n2. Recurrent Neural Network Training With Dark Knowledge Transfer, Zhiyuan Tang et al., ArXiv pre-print, 2016.\n3. 
Simultaneous Deep Transfer Across Domains and Tasks, Eric Tzeng et al., ICCV 2015.\n\n\nThe observation that accuracy does not improve when using bigger teacher network(s): We allude to this in Discussion part of Section 5.2 (page-8). We mention that the accuracy improvement saturates at some point. We will elaborate on this aspect in the final version of the paper.\n\nWe will also discuss the merits of low precision on savings in compute and memory. We briefly discuss these aspects in Section 2 where we mention about simplification in hardware support required for inference. We will elaborate on these aspects and provide quantification in compute and memory footprint savings vs. accuracy in our final version of the paper.", "Thank you for the reviews. They are useful. \n\nWe defend our contribution aspect below:\n\nContributions: Distillation along with lowering precision has not been studied before. We show benefits of this combined approach - both distillation and low precision target model compression aspect - but when combined the benefits are significant. We also show how one can use the combined distillation and lowering precision approach to training as well as fine-tuning.\n\nOur approach achieves state-of-the-art in accuracy over prior proposals and using our approach we significantly close the gap between full-precision and low-precision model accuracy. We demonstrate the benefits on ImageNet with large networks (ResNet). For example, with ResNet-50 on ImageNet, prior work showed 4.7% accuracy degradation with 8-bits activation and 4-bits weight. We lower this gap to less than 1.5%.\n\nWe believe ours to be the first work that targets both model compression (using knowledge distillation) and low-precision.\n\n\nResponse to the Cons aspects:\n1. We probably did not do a good job describing why we call our approach Apprentice. We will fix this and disambiguate from apprenticeship-based learning schemes.\n\n2. The reason we focus on sub 8-bit precision is that model inference with 8-bits is becoming mainstream and we seek to target next-gen hardware architectures. Also, from a hardware point-of-view 8-bits, 4-bits and 2-bits simplify design (e.g. alignment across cache line boundaries and memory accesses vs. 3-bits or 5-bits precision for example). \n\n3. We had tried the scheme you mention in (3) but the results were not (as) good compared to the schemes we mention in the paper, hence we omitted this scheme from our paper. \n\nWe experimented with ResNet-18 with (a) first, compressing using distillation scheme (used ResNet-34 as the teacher network) and then (b) lowered the precision to ternary mode (fine-tuning for 35 epochs with low learning rate). This experiment was done for ImageNet-1K dataset. This experiment is a variation of scheme-C in our paper where we start with full-precision networks and jointly fine-tune (use distillation scheme with warm start-up).\nActivation precision was 8-bits for this experiment. The ResNet-18 network converged to 33.13% Top-1 error rate. Comparing this with \"jointly\" compressing and lowering precision while training from scratch, we get 32.0% Top-1 error rate (Table-1, 4th row and 2nd column). So, our Apprentice scheme for this network and configuration is 1.13% better.\n\nYour point is well taken and we will include results where first we use knowledge distillation scheme to generate a smaller ResNet model and then lower the precision and fine-tune this small model. 
Currently, we have results of this scheme with ResNet-18 and few precision knobs and will collect results with this scheme for ResNet-34 and ResNet-50 for the final paper version.\nAs mentioned above, the conclusions of our paper would not change and the new results will show the benefits of joint training with distillation (Apprentice scheme). Many works proposing low-precision knobs advocate for training from scratch or training with warm-startup (from weights at full-precision numerics) -- our work is in line with these observations.\n\n", "Thank you for the reviews and comments.\n\nMissing citation: this is an oversight. We will fix this.\n\nFuture directions for research:\n1. We are currently pursuing the extension of the ideas in this paper to RNNs. Our preliminary studies on a language model for PTB dataset showed promise and based on this we are evaluating a larger data set and model like Deep Speech-2.\n2. Some works proposing low-precision networks advocate for making the layers wider (or the model larger) to recover accuracy at low-precision. These works propose making the layers wider by 2x or 3x. While these works show the benefits of low-precision, making the model larger increases the number of raw computations. Future work could investigate low-precision and less layer widening factor (say 1.10x or 1.25x or ...). This would help inference latency while maintaining accuracy at-par with baseline full-precision.\n3. Another interesting line of investigation for future work is looking into sparsifying networks at low-precision while maintaining baseline level accuracy and using knowledge distillation scheme during this process. As mentioned in Sec 5.5 in our paper, sparsifying a model more than a certain percentage leads to accuracy loss. Investigating hyper-sparse network models without accuracy loss using distillation based schemes is an interesting avenue of further research.\n" ]
[ 7, 7, 8, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1, -1 ]
[ "iclr_2018_B1ae1lZRb", "iclr_2018_B1ae1lZRb", "iclr_2018_B1ae1lZRb", "iclr_2018_B1ae1lZRb", "HJ-vEuXWz", "B1QhiA4eG", "rkPqK_tef", "Byv_pHRlM" ]
iclr_2018_H1Dy---0Z
Distributed Prioritized Experience Replay
We propose a distributed architecture for deep reinforcement learning at scale, that enables agents to learn effectively from orders of magnitude more data than previously possible. The algorithm decouples acting from learning: the actors interact with their own instances of the environment by selecting actions according to a shared neural network, and accumulate the resulting experience in a shared experience replay memory; the learner replays samples of experience and updates the neural network. The architecture relies on prioritized experience replay to focus only on the most significant data generated by the actors. Our architecture substantially improves the state of the art on the Arcade Learning Environment, achieving better final performance in a fraction of the wall-clock training time.
accepted-poster-papers
meta score: 8 The paper presents a distributed architecture using prioritized experience replay for deep reinforcement learning. It is well-written and the experimentation is extremely strong. The main issue is the originality - technically, it extends previous work in a limited way; the main contribution is practical, and this is validated by the experiments. The experimental support is such that the paper has meaningful conclusions and will surely be of interest to people working in the field. Thus I would say it is comfortably over the acceptance threshold. Pros: - good motivation and literature review - strong experimentation - well-written and clearly presented - details in the appendix are very helpful Cons: - possibly limited originality in terms of modelling advances
train
[ "HJhRlOzkM", "Hkx8IaKgM", "ry8UxQ6gM", "rkgPS1VEf", "HkpmgU67M", "BkcN9edQM", "BJrW5xu7f", "S1uvLgdXG", "ByN9zlu7f" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "This paper examines a distributed Deep RL system in which experiences, rather than gradients, are shared between the parallel workers and the centralized learner. The experiences are accumulated into a central replay memory and prioritized replay is used to update the policy based on the diverse experience accumulated by all the of the workers. Using this system, the authors are able to harness much more compute to learn very high quality policies in little time. The results very convincingly show that Ape-X far outperforms competing algorithms such as recently published Rainbow. \n\nIt’s hard to take issue with a paper that has such overwhelmingly convincing experimental results. However, there are a couple additional experiments that would be quite nice:\n•\tIn order to understand the best way for training a distributed RL agent, it would be nice to see a side-by-side comparison of systems for distributed gradient sharing (e.g. Gorila) versus experience sharing (e.g. Ape-X). \n•\tIt would be interesting to get a sense of how Ape-X performs as a function of the number of frames it has seen, rather than just wall-clock time. For example, in Table 1, is Ape-X at 200M frames doing better than Rainbow at 200M frames?\n\nPros:\n•\tWell written and clear.\n•\tVery impressive results.\n•\tIt’s remarkable that Ape-X preforms as well as it does given the simplicity of the algorithm.\n\nCons:\n•\tHard to replicate experiments without the deep computational pockets of DeepMind.\n", "A parallel aproach to DQN training is proposed, based on the idea of having multiple actors collecting data in parallel, while a single learner trains the model from experiences sampled from a central replay memory. Experiments on Atari game playing and two MuJoCo continuous control tasks show significant improvements in terms of training time and final performance compared to previous baselines.\n\nThe core idea is pretty straightforward but the paper does a very good job at demonstrating that it works very well, when implemented efficiently over a large cluster (which is not trivial). I also appreciate the various experiments to analyze the impact of several settings (instead of just reporting a new SOTA). Overall I believe this is definitely a solid contribution that will benefit both practitioners and researchers... as long as they got the computational resources to do so!\n\nThere are essentially two more things I would have really liked to see in this paper (maybe for future work?):\n- Using all Rainbow components\n- Using multiple learners (with actors cycling between them for instance)\nSharing your custom Tensorflow implementation of prioritized experience replay would also be a great bonus!\n\nMinor points:\n- Figure 1 does not seem to be referenced in the text \n- « In principle, Q-learning variants are off-policy methods » => not with multi-step unless you do some kind of correction! I think it is important to mention it even if it works well in practice (just saying « furthermore we are using a multi-step return » is too vague)\n- When comparing the Gt targets for DQN vs DPG it strikes me that DPG uses the delayed weights phi- to select the action, while DQN uses current weights theta. I am curious to know if there is a good motivation for this and what impact this can have on the training dynamics.\n- In caption of Fig. 
5 25K should be 250K\n- In appendix A why duplicate memory data instead of just using a smaller memory size?\n- In appendix D it looks like experiences removed from memory are chosen by sampling instead of just removing the older ones as in DQN. Why use a different scheme?\n- Why store rewards and gamma’s at each time step in memory instead of just the total discounted reward?\n- It would have been better to re-use the same colors as in Fig. 2 for plots in the appendix\n- Would Fig. 10 be more interesting with the full plot and a log scale on the x axis?", "This paper proposes a distributed architecture for deep reinforcement learning at scale, specifically, focusing on adding parallelization in actor algorithm in Prioritized Experience Replay framework. It has a very nice introduction and literature review of Prioritized experience replay and also suggested to parallelize the actor algorithm by simply adding more actors to execute in parallel, so that the experience replay can obtain more data for the learner to sample and learn. Not surprisingly, as this framework is able to learn from way more data (e.g. in Atari), it outperforms the baselines, and Figure 4 clearly shows the more actors we have the better performance we will have. \n\nWhile the strength of this paper is clearly the good writing as well as rigorous experimentation, the main concern I have with this paper is novelty. It is in my opinion a somewhat trivial extension of the previous work of Prioritized experience replay in literature; hence the challenge of the work is not quite clear. Hence, I feel adding some practical learnings of setting up such infrastructure might add more flavor to this paper, for example. ", "Thank you for the detailed response and paper revision", "The updated revision includes the changes mentioned in our responses to the reviews. Thanks again to all reviewers for their valuable comments. In addition to the minor fixes and clarifications suggested, we have expanded the implementation section with some of our practical findings, which we hope adds further value to this work for anybody interested in building similar systems.\n\nThere is only one additional change in the new revision that was not previously discussed: a small fix to our description of the per-actor epsilons used in our Atari experiments.\n", "Q6: In appendix D it looks like experiences removed from memory are chosen by sampling instead of just removing the older ones as in DQN. Why use a different scheme?\n\nWe believe that this prioritized removal scheme may improve upon the usual FIFO removal approach, since it allows high priority data to remain in memory for longer. We have not yet re-run the Atari experiments with this newer modification, due to the significant resource requirements - we apologize for the discrepancy and we will add some explanation to make this more explicit.\n\nQ7: Why store rewards and gamma’s at each time step in memory instead of just the total discounted reward?\n\nTo clarify, we are storing the sum of discounted rewards accumulated across each multi-step transition, and the product of gammas across each multi-step transition. While this is not the only way to do it, these are cheap to compute online on each actor worker, and are sufficient to be able to compute up-to-date target values easily on the learner. We will make this more explicit in the implementation section in the appendix.\n\nQ8: Would Fig. 
10 be more interesting with the full plot and a log scale on the x axis?\n\nWe tried this but decided it was too difficult to read that way, unfortunately... Since the data is from the same experiment as Figure 9 and the rate of data generation is approximately constant, the information that would be available in a full plot can largely be inferred from Figure 9, though.\n\nQ9: on the other minor points (Fig 1 reference, Fig 5 caption, and Fig 2 plot colors) \n\nThanks! We’ll fix these oversights.", "Thank you for the thorough review! This is a good summary of the paper.\n\nQ1: on using all Rainbow components and on using multiple learners.\n\nThese are both interesting directions which we agree may help to boost performance even further. For this paper, we felt that adding extra components would distract from the finding that it is possible to improve results significantly by scaling up, even with a relatively simple algorithm.\n\nQ2: on sharing the custom Tensorflow implementation of prioritized experience replay.\n\nWe would love to share an implementation, as we have found prioritization to be a consistently helpful component and would like to see it more widely used, but when we looked into this we realized that the current version depends on a library that would require a significant amount of engineering work to open source - so unfortunately we can’t commit to it at this time. However, we will bear this in mind for future versions.\n\nQ3: on multi-step Q-learning not being off-policy.\n\nWe’ll try to clarify this.\n\nQ4: on which weights to use for action selection in DQN vs DPG target computations.\n\nInteresting observation - setting aside the multi-step modification, our Ape-X DQN targets follow the approach described in [1] directly, whilst the Ape-X DPG targets are the same as those described in [2]. For the sake of simplicity in this paper, we were motivated not to deviate from the previously described update rules, in order to focus primarily on the improvements that could be obtained by our modifications to the method of generating and selecting the training data. \n\nHowever, to answer your question on a more technical level, in [2], the motivation given for the use of the target network weights to select the action when computing target values is that “the target values are constrained to change slowly, greatly improving the stability of learning” - the authors of [2] further note that they “found that having both a target µ’ and Q’ was required to have stable targets y_i in order to consistently train the critic without divergence. This may slow learning, since the target network delays the propagation of value estimations. However, in practice we found this was greatly outweighed by the stability of learning”.\n\nIn [1], the update is modified in order to reduce overestimation resulting from the maximization step in Q-learning; they note that “the selection of the action, in the argmax, is still due to the online weights θ_t. This means that, as in Q-learning, we are still estimating the value of the greedy policy according to the current values, as defined by θ_t. However, we use the second set of weights θ’_t to fairly evaluate the value of this policy”.\n\nWe have not yet re-evaluated these choices to determine whether the conclusions still hold in our new system. 
However, note that in DDPG (and thus also in Ape-X DDPG) there is no maximization step in the critic update, since we are using temporal-difference learning to update the critic instead of Q-learning - so the decoupling of action selection from evaluation used in Double Q Learning does not apply directly anyway. \n\nWe do not claim that these combinations of learning rules and target networks are necessarily the optimal ones, but we hope that this helps to explain the rationale behind the choices used in this paper.\n\n[1] https://arxiv.org/abs/1509.06461\n[2] https://arxiv.org/abs/1509.02971\n\nQ5: in appendix A why duplicate memory data instead of just using a smaller memory size?\n\nConceptually, it would be indeed be sufficient to use a smaller memory to investigate this effect; in fact our results in Figure 5 begin to do this - but we wanted to corroborate the finding by also measuring it in a different way. For implementation reasons, the two approaches are not guaranteed to be equivalent: for example, duplicating the data that each actor adds increases the computational load on the replay server, whereas using a smaller memory size does not. During development we noticed that in very extreme cases, many actors adding large volumes of data to the replay memory could overwhelm it, causing a slowdown in sampling which would affect the performance of the learner and thus the overall results.\n \nIn our experiments in Appendix A where we sought to determine whether recency of data was the reason for our observed scalability results, we wanted to make certain that the load on the replay server in the duplicated-data experiments would be the same as in the experiments with the corresponding numbers of real actors, to ensure a fair comparison. In practice, we did not find that we were running into any such contention issues in these experiments, and the results from Figure 5 do agree with those in Appendix A. However, we felt that it was still helpful to include both of the results in order to cover this aspect thoroughly. We will add a note explaining this.\n\n\n", "Thank you very much for the review. This is a good summary of the paper.\n\nQ1: on side-by-side comparison of systems for distributed gradient sharing (e.g. Gorila) versus experience sharing (e.g. Ape-X). \n\nA thorough exploration and comparison of these approaches would be valuable, but we believe that fairly and rigorously investigating this large space of possible designs is likely to be a complex topic unto itself, and it would not be possible to do it justice in this paper. Performance comparisons of such systems will likely depend on practical factors such as network latency (due to stale gradients or straggling workers) as well as model size and the size of the observation data (since this will affect the throughput across the distributed system). Ultimately we believe that distributed gradient sharing and distributed experience sharing will prove complementary, but that the nuances of how to optimally combine them will therefore depend on not only the domain but also the nature and distribution of the available computational resources.\n\nQ2: on how Ape-X performs as a function of the number of frames it has seen, rather than just wall-clock time.\n\nIn case you missed it, Figure 10 in the Appendix includes plots of performance against number of frames seen for the first billion frames, with comparisons against Rainbow and DQN. 
Note, however, that in all of these algorithms, the amount of experience replay per environment step may be varied, and this factor can have a significant effect on such results.\n", "Thank you for your comments and for this helpful suggestion. Our work is indeed closely related to the previous work on Prioritized Experience Replay. In achieving our reported results in practice, there was considerable challenge in two aspects: firstly, in the engineering work necessary to run the algorithm at a large scale, and secondly, in the discovery through empirical experimentation of a) the necessary algorithmic extensions to the prior work upon which we built, and b) the best way in which to combine them in practice. The point is well taken that the difficulty of this may not have been evident from our description, since in the paper we opted to focus more on our final architecture and results, believing this to be of greater interest to most readers. Indeed, we covered the implementation only briefly in the Appendix, and, as you note, we did not discuss our practical learnings in much depth. We are happy to hear that this is also of interest and we will gladly expand upon this section to provide further information and advice." ]
[ 9, 7, 6, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 3, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_H1Dy---0Z", "iclr_2018_H1Dy---0Z", "iclr_2018_H1Dy---0Z", "BkcN9edQM", "iclr_2018_H1Dy---0Z", "BJrW5xu7f", "Hkx8IaKgM", "HJhRlOzkM", "ry8UxQ6gM" ]
iclr_2018_B1Gi6LeRZ
Learning from Between-class Examples for Deep Sound Recognition
Deep learning methods have achieved high performance in sound recognition tasks. Deciding how to feed the training data is important for further performance improvement. We propose a novel learning method for deep sound recognition: Between-Class learning (BC learning). Our strategy is to learn a discriminative feature space by recognizing the between-class sounds as between-class sounds. We generate between-class sounds by mixing two sounds belonging to different classes with a random ratio. We then input the mixed sound to the model and train the model to output the mixing ratio. The advantages of BC learning are not limited only to the increase in variation of the training data; BC learning leads to an enlargement of Fisher’s criterion in the feature space and a regularization of the positional relationship among the feature distributions of the classes. The experimental results show that BC learning improves the performance on various sound recognition networks, datasets, and data augmentation schemes, in which BC learning proves to be always beneficial. Furthermore, we construct a new deep sound recognition network (EnvNet-v2) and train it with BC learning. As a result, we achieved a performance that surpasses the human level.
accepted-poster-papers
meta score: 8 This is a good paper which augments the data by mixing sound classes, and then learns the mixing ratio. Experiments are performed on a number of sound classification datasets. Pros - novel approach, clearly explained - very good set of experimentation with excellent results - good approach to mixing using perceptual criteria Cons - discussion doesn't really generalise beyond sound recognition
train
[ "ryKFRjXBf", "BJ3tjUzSG", "ByW22ClHf", "BJ79_7RNM", "S103m19Nf", "HJmtVULeG", "HJ80q6KlG", "r1vicbqeG", "S1O3zOp7f", "BJlPLX4Mz", "SJp7SQEGz", "rJIDfQNfM" ]
[ "author", "author", "public", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "We have revised the paper, considering the comments from AnonReviewer3.\nMajor changes:\n- The last paragraph of Section 1: modified the description about the novelty of our paper.\n- Section 3.3.2: added the description about the dimension of the feature space.", "Thanks for your questions.\n1. Each fold has 400 samples. We did not have test samples, and testing was done only on the validation fold. The model was trained with 4 folds (1600 samples) and tested with 1 fold (400 samples). We used the original fold settings defined by the proposer of ESC-50 (not random division). \n2. Epochs in Figure 5 represent single iteration over the 1600 training samples.\n3. We performed 5-fold cross-validation 5 times. Thus, the errors in Table 1 and Figure 5 represent the average of (5x5=) 25 errors.\n4. We did 10-crop testing every epoch.", "Hello,\nI am trying to reproduce your results, as part of the reproducibility challenge.\nThe database that I am using is ESC-50, with DSRNet.\nI have several questions regarding the training/results of the baseline network (no BC learning):\n1. Did you split the data to 5 cross validation sections, each one with 400 samples (2000/5)? Did you have test samples, besides the validation folds? or the testing was done only on the validation folds?\n2. Did the epochs in Figure 5 represent single iteration over the data base, or 5 iterations, because of the cross validation?\n3. How did you calculate the Error rate in table 1 and figure 5? Is it the error of the current fold? average between the errors of the 5 folds?\n4. When you write that during testing, you cropped 10 T-s sections, Did you do that on the validation fold every epoch, or on a different testing data?\nThanks\n", "Thanks for your positive comments and advice. We will modify the two points you have suggested (will be uploaded in a few days).", "After reading all the reviews and the authors' responses I have a more clear view of the paper now. I changed my mind, I see now the interest of having this paper accepted.\n\nHaving said that, I strongly encourage the authors to modify the text so that two key aspects of the method are clearly explained.\nFirst, as you said in your comments: The novelty or key point of our method is not mixing multiple sounds, but rather learning method of training the model to output the mixing ratio.\nSecond, the limitation (which now is not a problem, but could be for some other applications/architectures): the dimension of the feature space d is generally designed to be larger than the number of classes c.\n", "Overall: Authors defined a new learning task that requires a DNN to predict mixing ratio between sounds from two different classes. Previous approaches to training data mixing are (1) from random classes, or (2) from the same class. The presented approach mixes sounds from specific pairs of classes to increase discriminative power of the final learned network. Results look like significant improvements over standard learning setups.\n\nDetailed Evaluation: The approach presented is simple, clearly presented, and looks effective on benchmarks. In terms of originality, it is different from warping training example for the same task and it is a good extension of previously suggested example mixing procedures with a targeted benefit for improved discriminative power. 
The authors have also provided extensive analysis from the point of views (1) network architecture, (2) mixing method, (3) number of labels / classes in mix, (4) mixing layers -- really well done due-diligence across different model and task parameters.\n\nMinor Asks:\n(1) Clarification on how the error rates are defined. Especially since the standard learning task could be 0-1 loss and this new BC learning task could be based on distribution divergence (if we're not using argmax as class label).\n(2) #class_pairs targets as analysis - The number of epochs needed is naturally going to be higher since the BC-DNN has to train to predict mixing ratios between pairs of classes. Since pairs of classes could be huge if the total number of classes is large, it'll be nice to see how this scales. I.e. are we talking about a space of 10 total classes or 10000 total classes? How does num required epochs get impacted as we increase this class space?\n(3) Clarify how G_1/20 and G_2/20 is important / derived - I assume it's unit conversion from decibels.\n(4) Please explain why it is important to use the smoothed average of 10 softmax predictions in evaluation... what happens if you just randomly pick one of the 10 crops for prediction?", "This manuscript proposes a method to improve the performance of a generic learning method by generating \"in between class\" (BC) training samples. The manuscript motivates the necessity of such technique and presents the basic intuition. The authors show how the so-called BC learning helps training different deep architectures for the sound recognition task.\n\nMy first remark regards the presentation of the technique. The authors argue that it is not a data augmentation technique, but rather a learning method. I strongly disagree with this statement, not only because the technique deals exactly with augmenting data, but also because it can be used in combination to any learning method (including non-deep learning methodologies). Naturally, the literature review deals with data augmentation technique, which supports my point of view.\n\nIn this regard, I would have expected comparison with other state-of-the-art data augmentation techniques. The usefulness of the BC technique is proven to a certain extent (see paragraph below) but there is not comparison with state-of-the-art. In other words, the authors do not compare the proposed method with other methods doing data augmentation. This is crucial to understand the advantages of the BC technique.\n\nThere is a more fundamental question for which I was not able to find an explicit answer in the manuscript. Intuitively, the diagram shown in Figure 4 works well for 3 classes in dimension 2. If we add another class, no matter how do we define the borders, there will be one pair of classes for which the transition from one to another will pass through the region of a third class. The situation worsens with more classes. However, this can be solved by adding one dimension, 4 classes and 3 dimensions seems something feasible. One can easily understand that if there is one more class than the number of dimensions, the assumption should be feasible, but beyond it starts to get problematic. 
This discussion does not appear at all in the manuscript and it would be an important limitation of the method, specially when dealing with large-scale data sets.\n\nOverall I believe the paper is not mature enough for publication.\n\nSome minor comments:\n- 2.1: We introduce --> We discussion\n- Pieczak 2015a did not propose the extraction of MFCC.\n- the x_i and t_i of section 3.2.2 should not be denoted with the same letters as in 3.2.1.\n- The correspondence with a semantic feature space is too pretentious, specially since no experiment in this direction is shown.\n- I understand that there is no mixing in the test phase, perhaps it would be useful to recall it.", "The propose data augmentation and BC learning is relevant, much robust than frequency jitter or simple data augmentation. \n\nIn equation 2, please check the measure of the mixture. Why not simply use a dB criteria ?\n\nThe comments about applying a CNN to local features or novel approach to increase sound recognition could be completed with some ICLR 2017 work towards injected priors using Chirplet Transform.\n\nThe authors might discuss more how to extend their model to image recognition, or at least of other modalities as suggested.\n\nSection 3.2.2 shall be placed later on, and clarified.\n\nDiscussion on mixing more than two sounds leads could be completed by associative properties, we think... ?\n", "We have uploaded a revised version of the paper. \n\nMajor changes:\n- Section 2.1: modified the description of Piczak's work.\n- Section 3.1: noted that there is no mixing in testing phase.\n- Section 3.2.1: clarified the deviation and meaning of equation 2.\n- Section 3.2.2: modified the indices of x and t.\n- Section 4.1.4: added experiments and discussion on # of training epochs vs. # of classes.\n- Total: 10 pages -> 9 pages (for main part)\n", "Thanks for your positive review. Our method is novel in that we train the model to output the mixing ratio between two different classes.\n\nAnswers for minor asks:\n(1) We do not define error rate of BC learning in training phase. In testing phase, the error rate definition of BC learning is the same as that of standard learning because we do not mix any sounds in testing phase.\n\n(2) Thanks for helpful advice. Although we could not try more than 50 classes, we have investigated the relationship between performance and the number of training epochs not only on ESC-50 (50 classes, Fig. 6) but also ESC-10 (10 classes). As a result, the sufficient number of training epochs for BC learning on ESC-10 was 900, which is smaller than that for BC learning on ESC-50 (1,200 epochs), whereas that for standard learning was 600 epochs on both ESC-50 and ESC-10. We assume that the number of training epochs needed would become large when there are many classes, as you have suggested. We will add this discussion to the final version.\n\n(3) Yes, G_1 and G_2 are derived from unit conversion from decibels to amplitudes. We will clarify it. Please see also the reply to AnonReviewer2.\n\n(4) We have tried random 1-crop testing and center 1-crop testing on EnvNet on ESC-50 (standard learning). The error rates of random 1-crop testing and center 1-crop testing were 41.3% and 39.2%, respectively, whereas that of 10-crop testing was 29.2% as in the paper. Averaging the predictions of multiple windows leads to a stable performance. 
We assume this is because we cannot know where the target sound exists in a testing sound, and the target sound sometimes has a long duration.", "Thanks for your helpful review.\n\n- Regarding the presentation of BC learning:\nIt is true that BC learning is a data augmentation method as you have suggested, from a view point of using augmented data. However, our method is novel in that we change the objective of training by training the model to output the mixing ratio, which is fundamentally a different idea from previous data augmentation methods. The novelty or key point of our method is not mixing multiple sounds, but rather learning method of training the model to output the mixing ratio. That is why we represent our method as \"learning method.\" We intuitively describe why such a learning method is effective in Section 3.3 and demonstrate the effectiveness of BC learning through wide-ranging experiments.\n\n- Regarding comparison with other data augmentation methods:\nFirst, we compared BC learning with other data augmentation methods that mix multiple sounds in ablation analysis (see Table 2), and showed that our method of mixing just two classes with equation 2 and training the model to output the mixing ratio performs the best.\n\nSecond, our BC learning can be combined with any data augmentation methods that do not mix multiple sounds by mixing two augmented data. In Section 4.1.3, we demonstrated that BC learning is even \"compatible\" with a strong data augmentation, which we believe is more important than being \"stronger\" than that. This data augmentation method uses scale and amplitude augmentation similar to Salamon & Bello (2017) in addition to padding and cropping, and thus, it is close to the state-of-the-art level. As shown in Table 1, the error rates of DSRNet when using only BC learning (18.2%, 10.6%, and 23.4% on ESC-50, ESC-10, and UrbanSound8K, respectively) were lower than those when using the strong data augmentation (21.2%, 10.9%, and 24.9%). Furthermore, as a result of combination of BC learning and the strong data augmentation, we achieved a further higher performance (15.1%, 8.6%, and 21.7%). In this way, we demonstrated the strongness and compatibility of BC learning with other data augmentation techniques through various experiments.\n\nHere, we assume that the effect of BC learning is even strengthened when using a stronger data augmentation scheme. Because the potential within-class variance becomes large when using a strong data augmentation, the overlap between the feature distribution of each class and that of mixed sounds tends to become large and it becomes more difficult for model to output the mixing ratio (see also Fig. 2). Therefore, the effect of enlargement of Fisher’s criterion would become stronger.\n\n- Regarding the limitation of BC learning:\nThanks for your advice. What you have pointed out is correct. However, the dimension of the feature space d is generally designed to be larger than the number of classes c (e.g., EnvNet/DSRNet: 4096; SoundNet: 256; M18: 512; and Logmel-CNN: 5000). If d < c-1, the features cannot sufficiently represent categorical information, and the model would not be able to achieve a good performance. We have tried to train an EnvNet whose dimension of fully connected layer was made less than 49 on ESC-50 with standard learning, but the loss did not begin to decrease. It is not a matter of BC learning. 
Furthermore, even if there is a network whose d is smaller than c-1, BC learning would enlarge Fisher's criterion and regularize the positional relationship as much as possible. Therefore, we do not think it is an important limitation of BC learning.\n\n\nThanks for other helpful comments. We will reflect them to the final version. Note than we showed the correspondence with a semantic feature space by visualizing the features of mixed sounds in Fig. 3.", "Thanks for your positive review. \n\n- Regarding equation 2:\nWe use 10^(G_1/20) and 10^(G_2/20) instead of simple G_1 and G_2 to convert decibels to amplitudes. We hypothesize that the ratio of auditory perception for the network is the same as the ratio of amplitude, and define p so that the auditory perception of the mixed sound becomes r: (1-r). This is because the main component functions of CNNs, such as conv/fc, relu, max pooling, and average pooling, satisfy homogeneity (i.e., f(ax) = af(x)) if we ignore the bias. We will clarify the derivation and meaning of equation 2.\n\n- Regarding how to extend BC learning to other modalities:\nWe assume that BC learning can also be applied to image classification. Image data can be treated as 2-D waveforms along x- and y- axis that contain various areas of frequency information in quite a similar manner to sound data. In addition, recent studies on speech/sound recognition have demonstrated that each filter of CNNs learns to respond to a particular frequency area (e.g., Sainath et al., 2015b). Considering them, we assume that CNNs have aspect of recognizing images by treating them as waveforms in a similar manner to how they recognize sounds, and what works on sounds must also work on images. A simple mixing method (r x_1 + (1-r) x_2) would work well, but we assume that a mixing method that treats the images as waveforms (similar to equation 2) leads to a further performance improvement.\n\n- Regarding mixing more than two classes:\nMixing more than two classes would have a similar effect to mixing just two classes. However, the number of class combinations dramatically increases, and it would become difficult to train. Mixing just two classes can directory impose a constraint on the feature distribution (as we describe in Section 3.3). Therefore, we assume that mixing just two classes is the most efficient. Experimental results also show that mixing two classes performs better than mixing three classes (see Table 2).\n\nThanks for other helpful comments. We will reflect them to the final version." ]
[ -1, -1, -1, -1, -1, 9, 4, 8, -1, -1, -1, -1 ]
[ -1, -1, -1, -1, -1, 4, 4, 4, -1, -1, -1, -1 ]
[ "iclr_2018_B1Gi6LeRZ", "ByW22ClHf", "iclr_2018_B1Gi6LeRZ", "S103m19Nf", "SJp7SQEGz", "iclr_2018_B1Gi6LeRZ", "iclr_2018_B1Gi6LeRZ", "iclr_2018_B1Gi6LeRZ", "iclr_2018_B1Gi6LeRZ", "HJmtVULeG", "HJ80q6KlG", "r1vicbqeG" ]
iclr_2018_ryiAv2xAZ
Training Confidence-calibrated Classifiers for Detecting Out-of-Distribution Samples
The problem of detecting whether a test sample is from in-distribution (i.e., training distribution by a classifier) or out-of-distribution sufficiently different from it arises in many real-world machine learning applications. However, the state-of-art deep neural networks are known to be highly overconfident in their predictions, i.e., do not distinguish in- and out-of-distributions. Recently, to handle this issue, several threshold-based detectors have been proposed given pre-trained neural classifiers. However, the performance of prior works highly depends on how to train the classifiers since they only focus on improving inference procedures. In this paper, we develop a novel training method for classifiers so that such inference algorithms can work better. In particular, we suggest two additional terms added to the original loss (e.g., cross entropy). The first one forces samples from out-of-distribution less confident by the classifier and the second one is for (implicitly) generating most effective training samples for the first one. In essence, our method jointly trains both classification and generative neural networks for out-of-distribution. We demonstrate its effectiveness using deep convolutional neural networks on various popular image datasets.
accepted-poster-papers
Meta score: 6 The paper approaches the problem of identifying out-of-distribution data by modifying the objective function to include a generative term. Experiments on a number of image datasets. Pros: - clearly expressed idea, well-supported by experimentation - good experimental results - well-written Cons: - slightly limited novelty - could be improved by linking to work on semi-supervised learning approaches using GANs The authors note that ICLR submission 267 (https://openreview.net/forum?id=H1VGkIxRZ) covers similar ground to theirs.
train
[ "B1ja8-9lf", "B1klq-5lG", "By_HQdCeG", "Sk-WbjBMz", "HypDejBMM", "SJyZlsSMG", "ry5v1orGG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "I have read authors' reply. In response to authors' comprehensive reply and feedback. I upgrade my score to 6.\n\n-----------------------------\n\nThis paper presents a novel approach to calibrate classifiers for out of distribution samples. In additional to the original cross entropy loss, the “confidence loss” was proposed to guarantee the out of distribution points have low confidence in the classifier. As out of distribution samples are hard to obtain, authors also propose to use GAN generating “boundary” samples as out of distribution samples. \n\nThe problem setting is new and objective (1) is interesting and reasonable. However, I am not very convinced that objective (3) will generate boundary samples. Suppose that theta is set appropriately so that p_theta (y|x) gives a uniform distribution over labels for out of distribution samples. Because of the construction of U(y), which uniformly assign labels to generated out of distribution samples, the conditional probability p_g (y|x) should always be uniform so p_g (y|x) divided by p_theta (y|x) is almost always 1. The KL divergence in (a) of (3) should always be approximately 0 no matter what samples are generated. \n\nI also have a few other concerns: \n1. There seems to be a related work: \n[1] Perello-Nieto et al., Background Check: A general technique to build more reliable and versatile classifiers, ICDM 2016, \nWhere authors constructed a classifier, which output K+1 labels and the K+1-th label is the “background noise” label for this classification problem. Is the method in [1] applicable to this paper’s setting? Moreover, [1] did not seem to generate any out of distribution samples. \n\n2. I am not so sure that how the actual out of distribution detection was done (did I miss something here?). Authors repeatedly mentioned “maximum prediction values”, but it was not defined throughout the paper. \nAlgorithm 1. is called “minimization for detection and generating out of distribution (samples)”, but this is only gradient descent, right? I do not see a detection procedure. Given the title also contains “detecting”, I feel authors should write explicitly how the detection is done in the main body. \n", "The manuscript proposes a generative approach to detect which samples are within vs. out of the sample space of the training distribution. This distribution is used to adjust the classifier so it makes confident predictions within sample, and less confident predictions out of sample, where presumably it is prone to mistakes. Evaluation on several datasets suggests that accounting for the within-sample distribution in this way can often actually improve evaluation performance, and can help the model detect outliers.\n\nThe manuscript is reasonably well written overall, though some of the writing could be improved e.g. a clearer description of the cost function in section 2. However, equation 4 and algorithm 1 were very helpful in clarifying the cost function. The manuscript also does a good job giving pointers to related prior work. The problem of interest is timely and important, and the provided solution seems reasonable and is well evaluated.\n\nLooking at the cost function and the intuition, the difference in figure 1 seems to be primarily due to the relative number of samples used during optimization -- and not to anything inherent about the distribution as is claimed. In particular, if a proportional number of samples is generated for the 50x50 case, I would expect the plots to be similar. 
I suggest the authors modify the claim of figure 1 accordingly.\n\nAlong those lines, it would be interesting if instead of the uniform distribution, a model that explicitly models within vs. out of sample might perform better? Though this is partially canceled out by the other terms in the optimization.\n\nFinally, the authors claim that the PT is approximately equal to entropy. The cited reference (Zhao et. al. 2017) does not justify the claim. I suggest the authors remove this claim or correctly justify it.\n\nQuestions:\n - Could the authors comment on cases where such a strong within-sample assumption may adversely affect performance?\n - Could the authors comment on how the modifications affect prediction score calibration?\n - Could the authors comment on whether they think the proposed approach may be more resilient to adversarial attacks?\n\nMinor issues:\n - Figure 1 is unclear using dots. Perhaps the authors can try plotting a smoothed decision boundary to clarify the idea?", "This paper proposes a new method of detecting in vs. out of distribution samples. Most existing approaches for this deal with detecting out of distributions at *test time* by augmenting input data and or temperature scaling the softmax and applying a simple classification rule based on the output. This paper proposes a different approach (with could be combined with these methods) based on a new training procedure. \n\nThe authors propose to train a generator network in combination with the classifier and an adversarial discriminator. The generator is trained to produce images that (1) fools a standard GAN discriminator and (2) has high entropy (as enforced with the pull-away term from the EBGAN). Classifier is trained to not only maximize classification accuracy on the real training data but also to output a uniform distribution for the generated samples. \n\nThe model is evaluated on CIFAR-10 and SVNH, where several out of distribution datasets are used in each case. Performance gains are clear with respect to the baseline methods.\n\nThis paper is clearly written, proposes a simple model and seems to outperform current methods. One thing missing is a discussion of how this approach is related to semi-supervised learning approaches using GANS where a generative model produces extra data points for the classifier/discriminator. \n\n I have some clarifying questions below:\n- Figure 4 is unclear: does \"Confidence loss with original GAN\" refer to the method where the classifier is pretrained and then \"Joint confidence loss\" is with joint training? What does \"Confidence loss (KL on SVHN/CIFAR-10)\" refer to?\n\n- Why does the join training improve the ability of the model to generalize to out-of-distribution datasets not seen during training?\n\n- Why is the pull away term necessary and how does the model perform without it? Most GAN models are able to stably train without such explicit terms such as the pull away or batch discrimination. Is the proposed model unstable without the pull-away term? \n\n- How does this compare with a method whereby instead of pushing the fake sample's softmax distribution to be uniform, the model is simply a trained to classify them as an additional \"out of distribution\" class? This exact approach has been used to do semi supervised learning with GANS [1][2]. More generally, could the authors comment on how this approach is related to these semi-supervised approaches? 
\n\n- Did you try combining the classifier and discriminator into one model as in [1][2]?\n\n[1] Semi-Supervised Learning with Generative Adversarial Networks (https://arxiv.org/abs/1606.01583)\n[2] Good Semi-supervised Learning that Requires a Bad GAN (https://arxiv.org/abs/1705.09783)", "We very much appreciate valuable comments, efforts, and time of the reviewers. We first address a common concern of the reviewers and other issues for each individual one separately.\n\nQ. Comparison between our model and K+1 classes model (within vs. out of distributions model)\n\nA. As all reviewers mentioned, one might want to add the new class for out-of-distribution in the softmax distribution. We actually considered this idea, but did not try since (a) modeling explicitly out-of-distribution can incur overfitting, (b) this prevents us from using the prior inference/detection methods [1,2] (since they do not assume such a new out-of-distribution label) and (c) forcing the softmax distribution be close to uniform is expected to provide additional regularization effects [3]. In other words, we choose a way to design network architectures so that the original classification performance does not hurt and all prior inference algorithms can be still utilized (and further improved under our training method). As another option, one can try complex density estimators such as PixelCNN [4] to model within vs. out of distributions explicitly, but they are quite difficult to train. Our method is much simpler and easier to use. \n\nWe remind that our primary goal is training confident classifiers of standard architectures rather than developing a new architecture for detecting out-of-distribution samples. Please note that this design choice leads to another unexpected advantage: our method can even improve the calibration performance for multi-class classification for in-distribution samples, meaning that a classifier trained by our method can indicate when they are likely to be correct or incorrect for test samples. More specifically, the expected calibration error (ECE) [2] of a classifier trained by our method is lower than that of a classifier trained by the standard cross entropy loss. We also reported the corresponding experimental results in the revision (see Appendix C.2). \n\nFor interested readers, we also report experimental results (see Appendix E) to this revision, demonstrating that adding a new class for out-of-distribution is worse than forcing the existing class distribution uniform.\n\n[1] Shiyu Liang, Yixuan Li, and R Srikant. Principled detection of out-of-distribution examples in neural networks. arXiv preprint arXiv:1706.02690, 2017. (https://arxiv.org/abs/1706.02690)\n[2] Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger. On calibration of modern neural networks. In ICML, 2017. (https://arxiv.org/abs/1706.04599) \n[3] Pereyra, G., Tucker, G., Chorowski, J., Kaiser, Ł. and Hinton, G. Regularizing neural networks by penalizing confident output distributions. arXiv preprint arXiv:1701.06548, 2017. (https://arxiv.org/abs/1701.06548)\n[4] Oord, A.V.D., Kalchbrenner, N., Vinyals, O., Espeholt, L., Graves, A. and Kavukcuoglu, K. Conditional Image Generation with PixelCNN Decoders. In NIPS, 2016. (https://arxiv.org/abs/1606.05328) \n\nThanks,\nAuthors", "We very much appreciate your valuable comments, efforts and times on our paper. We provide our responses for all questions below. 
Revised parts in the new draft are colored by blue.\n\nQ1: \"Figure 4 is unclear.\"\n\nA1: First, \"Confidence loss with original GAN\" corresponds to a variant of confidence loss (1) which trains a classifier by optimizing the KL divergence term using samples from a pre-trained original/standard GAN, i.e., GAN generates in-distribution samples. Next, \"Joint confidence loss\" is the proposed loss (4) optimized by Algorithm 1. Here, we remark that only \"Joint confidence loss\" optimizes the KL divergence terms using implicit samples from the proposed GAN, i.e., GAN generates \"boundary\" samples in the low-density area of in-distribution. Finally, \"Confidence loss (KL on SVHN/CIFAR-10)\" corresponds to the confidence loss (1) using explicit out-of-distribution samples (SVHN or CIFAR-10). For example, \"Confidence loss (KL on SVHN)\" refers to the method where the KL divergence term in the confidence loss (1) is optimized using SVHN training data. In the revision, we clarified the notations such that the KL divergence term is optimized on samples indicated in the parentheses, i.e., \"Confidence loss with original GAN\" and \"Confidence loss (KL on SVHN/CIFAR-10)\" were revised to \"Confidence loss (samples from original GAN)\" and \"Confidence loss (SVHN/CIFAR-10)\", respectively. We updated Figure 2 and Figure 4 accordingly.\n\nQ2: \"Why does the joint training improve the ability of the model to generalize to out-of-distribution datasets not seen during training?\"\n\nA2: It is explained in Section 2.3. In Section 2.1, we suggest to use out-of-distribution samples for training a confident classifier. Conversely, in Section 2.2., we suggest to use a confident classifier for training a GAN generating out-of-distribution samples. Namely, two models can be used for improving each other. Hence, this naturally suggests a joint training scheme in Section 2.3 for confident classifier and the proposed GAN, where both improve as the training proceeds. We emphasize the effect of joint training again in the revision. Please see our revision of Section 2.3 for details.\n\nQ3: \"Why is the pull away term necessary and how does the model perform without it?\"\n\nA3: We really appreciate your valuable comments.\n\nThe pull away term (PT) is not related to \"stability.\" Our intuition was that the entropy of out-of-distribution is expected to be much higher compared to that of in-distribution since the out-of-distribution is typically on a much larger space than the in-distribution. Consequently, we expected that optimizing the PT term is useful for generating better out-of-distribution samples. \nWe also note that the PT was recently used [2] for a similar purpose as ours.\n\nHowever, since we suggest to generate out-of-distribution samples nearby in-distribution (for efficient sampling purpose), its entropy might be not that high and the effect of PT is not clear. After our submission, we actually verified that PT sometimes helps (but not always), and its gains are relatively marginal in overall. Since PT increases the training complexity, we decided to remove the PT in the revision and have updated all experimental results without using PT. Still, for interested readers, we also report the effects of PT in the Appendix D. We updated Section 2.2 and 2.3, Figure 3, 4 and 5, and Appendix D, accordingly.\n\nQ4: \"How is this approach related to the semi-supervised approaches in [1][2]? 
Did you try combining the classifier and discriminator into one model as in [1][2]?\"\n\nA4: As briefly mentioned in Section 4, we expect that our proposed GAN might be useful for semi-supervised settings. Also, we actually thought about combining the classifier and discriminator into one model, i.e., adding K+1 class. However, we choose a more \"conservative\" way to design network architectures so that the original classification performance does not degrade. Extension to semi-supervised learning should be an interesting future direction to explore.\n\n[1] Odena, A. Semi-supervised learning with generative adversarial networks. In NIPS, 2016. (https://arxiv.org/abs/1606.01583)\n[2] Dai, Z., Yang, Z., Yang, F., Cohen, W.W. and Salakhutdinov, R. Good Semi-supervised Learning that Requires a Bad GAN. In NIPS, 2017. (https://arxiv.org/abs/1705.09783) \n\nThanks,\nAuthors", "We very much appreciate your valuable comments, efforts and times on our paper. We provide our responses for all questions below. Revised parts in the new draft are colored by blue.\n\nQ1: \"About the difference in figure 1.\"\n\nA1: First, we emphasize that we use the same number (i.e., 100) of training out-of-distribution samples for Figure 1(a)/(b) and 1(c)/(d). As you pointed out, if one increases the number of training out-of-distribution samples for the 50x50 case, Figure 1(b) is expected to be similar to Figure 1(d). In other words, one needs more samples in order to train confidence classifier if samples are generated from the entire space, i.e., 50x50. However, as we mentioned, this might be impossible and not efficient since the number of out-of-distribution training samples might be almost infinite to cover its entire, high-dimensional data space. Therefore, instead, we suggest to sample out-of-distribution close to in-distribution, which could be more effective (given the fixed sampling complexity). The difference in Figure 1 confirms such intuition. We clarified this more in the revision (Section 2.1).\n\nQ2: \"Justification of PT.\"\n\nA2: We agree with you that Zhao et al. did not justify the claim. But, the PT corresponds to the squared cosine similarity of generated samples, and intuitively one can expect the effect of increasing the entropy by minimizing it. In the recent work [1], the authors also used PT to maximize the entropy. However, as we mentioned in A3 for Reviewer_1, after our submission, we actually verified that PT helps, but its gains are relatively marginal in overall. Since PT increases the training complexity, we decided to remove the PT in the revision and have updated all experimental results without using PT. Finally, for interested readers, we also report the effects of PT in the Appendix D. We really appreciate your valuable comments. We updated Section 2.2 and 2.3. Figure 3, 4 and 5. Appendix D. accordingly.\n\nQ3: \"About cases where such a strong within-sample assumption may adversely affect performance.\"\n\nA3: As shown in Table 1 and Table 2 (in Appendix C), splitting in- and out-of-distributions and optimizing the confidence loss (1) does not adversely affect the classification accuracy due to the high expressive power of deep neural networks in all our experiments. We haven't found a case where our proposed method (based on this assumption) leads to adverse performance. 
However, more theoretical investigation on whether this assumption guarantees a good performance or whether there is a counterexample would be an interesting future work.\n\nQ4: \"How do the modifications affect prediction score calibration?\"\n\nA4: Thank you for your great suggestion. After our submission, we actually verified that our method can improve the prediction score calibration. For example, the expected calibration error (ECE) [2] of a classifier trained by our method is lower than that of a classifier trained by the standard cross entropy loss. For interested readers, we reported the corresponding experimental results in the revision (see Appendix C.2).\n\nQ5: \"Whether the proposed approach may be more resilient to adversarial attacks.\"\n\nA5: This is a very interesting question. We believe our method has some potential for being more resilient to adversarial attacks. This is because adversarial examples are special types of out-of-distribution samples. We believe that this should be an interesting future direction to explore.\n\n[1] Shiyu Liang, Yixuan Li, and R Srikant. Principled detection of out-of-distribution examples in neural networks. arXiv preprint arXiv:1706.02690, 2017. (https://arxiv.org/abs/1706.02690)\n[2] Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger. On calibration of modern neural networks. In ICML, 2017. (https://arxiv.org/abs/1706.04599) \n\nThanks,\nAuthors.", "We very much appreciate your valuable comments, efforts and times on our paper. We provide our responses for all questions below. Revised parts in the new draft are colored by blue.\n\nQ1: \"I am not very convinced that objective (3) will generate boundary samples.\"\n\nA1: As you pointed out, the KL divergence term (a) of (3) is approximately 0 no matter how out-of-distribution samples are generated. However, if the samples are far away from \"boundary\" (here, we assume the high-density area of in-distribution is a closed set), the GAN loss (b) of (3) should be high, i.e., the GAN loss forces having samples being not too far from the in-distribution space. This is the primary reason why (3) will generate out-of-distribution samples from the low-density boundary of the in-distribution space. We provided more explanations in the revision (please see updated Section 2.2 for details).\n\nQ2: \"There seems to be a related work:\n[1] Perello-Nieto et al., Background Check: A general technique to build more reliable and versatile classifiers, ICDM 2016, Where authors constructed a classifier, which output K+1 labels and the K+1-th label is the \"background noise\" label for this classification problem. Is the method in [1] applicable to this paper's setting? Moreover, [1] did not seem to generate any out of distribution samples.\"\n\nA2: As you mentioned, the Background Check (BC) proposed by [1] can be applied to our setting, i.e., one can consider background distribution in [1] as out-of-distribution. The authors propose two methods called discriminative approach (BCD) and familiarity approach (BCF). First, BCD requires out-of-distribution samples, but they mentioned generating artificial background data is hard and did not try in their experiments. This is what we resolve in this paper. On the other hand, BCF does not require out-of-distribution samples, and instead it uses a density estimator of P_{in}(x) such as one-class support vector machine (OCSVM) for modeling in-distributions. 
Such an additional model not only increases the detection complexity, but it is also not clear that it would perform well on the high-dimensional datasets used in our paper. For example, one can try complex density estimators such as PixelCNN [2], but they are quite difficult to train and have to be chosen depending on data characteristics (this would hurt the generality of our work). Our method is much simpler and easier to use. In addition, please note that the authors of [1] did not report any neural network experiments at all.\n\nQ3: \"Authors repeatedly mentioned \"maximum prediction values\", but it was not defined throughout the paper.\"\n\nA3: The \"maximum prediction value\" corresponds to the maximum value of the predictive distribution, i.e., \\max_y P(y|x). We formally defined this in the revision (Section 2.1).\n\nQ4: \"I am not so sure that how the actual out of distribution detection was done (did I miss something here?). Algorithm 1. is called \"minimization for detection and generating out of distribution (samples)\", but this is only gradient descent, right? I do not see a detection procedure. Given the title also contains \"detecting\", I feel authors should write explicitly how the detection is done in the main body.\"\n\nA4: Our goal is to develop a new training method (Algorithm 1 is the training algorithm) which works with a simple detection method. For the actual detection procedure, one can apply any known inference algorithm [3,4,5] to a trained model. Here, we remark that the performance of these detectors highly depends on how the classifier is trained; e.g., as shown in Table 1 and Figure 4, the detection performance of prior inference algorithms can be dramatically improved under a confident classifier trained by our method. We explained the detection procedure more precisely in the revision, as mentioned at the beginning of Section 2 and more formally defined in Appendix A. Thank you for your suggestion.\n\n[1] Perello-Nieto et al., Background Check: A general technique to build more reliable and versatile classifiers, ICDM 2016\n[2] Oord, A.V.D., Kalchbrenner, N., Vinyals, O., Espeholt, L., Graves, A. and Kavukcuoglu, K. Conditional Image Generation with PixelCNN Decoders. In NIPS, 2016. \n[3] Shiyu Liang, Yixuan Li, and R Srikant. Principled detection of out-of-distribution examples in neural networks. arXiv preprint arXiv:1706.02690, 2017. \n[4] Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger. On calibration of modern neural networks. In ICML, 2017. \n[5] Dan Hendrycks and Kevin Gimpel. A baseline for detecting misclassified and out-of-distribution examples in neural networks. In ICLR, 2017. \n\nThanks,\nAuthors." ]
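The responses above describe a confidence loss that combines standard cross-entropy on in-distribution samples with a KL-divergence-to-uniform term on out-of-distribution samples produced by a GAN near the in-distribution boundary, and note that the KL term (a) alone is approximately zero no matter how those samples are generated. As a reading aid, here is a minimal sketch of such a loss, assuming PyTorch; the function name, arguments, and the weighting factor beta are illustrative assumptions, the GAN term that keeps the generated samples near the boundary is omitted, and this is not the authors' released code.

```python
import math

import torch
import torch.nn.functional as F

def confidence_loss(logits_in, labels_in, logits_out, beta=1.0):
    """Cross-entropy on in-distribution samples plus KL(U || p(y|x)) on
    out-of-distribution samples, pushing their predictions toward uniform."""
    num_classes = logits_in.size(1)
    ce = F.cross_entropy(logits_in, labels_in)        # supervised term on in-distribution data
    log_p_out = F.log_softmax(logits_out, dim=1)      # log p(y|x_out)
    # per-sample KL(U || p) = -mean_y log p(y|x) - log K
    kl_to_uniform = (-log_p_out.mean(dim=1) - math.log(num_classes)).mean()
    return ce + beta * kl_to_uniform
```

As the discussion of objective (3) points out, minimizing this KL term alone does not constrain where the out-of-distribution inputs lie; it is the omitted GAN term that pushes them toward the low-density boundary of the in-distribution.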
[ 6, 7, 6, -1, -1, -1, -1 ]
[ 4, 3, 3, -1, -1, -1, -1 ]
[ "iclr_2018_ryiAv2xAZ", "iclr_2018_ryiAv2xAZ", "iclr_2018_ryiAv2xAZ", "iclr_2018_ryiAv2xAZ", "By_HQdCeG", "B1klq-5lG", "B1ja8-9lf" ]
iclr_2018_SkFAWax0-
VoiceLoop: Voice Fitting and Synthesis via a Phonological Loop
We present a new neural text to speech (TTS) method that is able to transform text to speech in voices that are sampled in the wild. Unlike other systems, our solution is able to deal with unconstrained voice samples and without requiring aligned phonemes or linguistic features. The network architecture is simpler than those in the existing literature and is based on a novel shifting buffer working memory. The same buffer is used for estimating the attention, computing the output audio, and for updating the buffer itself. The input sentence is encoded using a context-free lookup table that contains one entry per character or phoneme. The speakers are similarly represented by a short vector that can also be fitted to new identities, even with only a few samples. Variability in the generated speech is achieved by priming the buffer prior to generating the audio. Experimental results on several datasets demonstrate convincing capabilities, making TTS accessible to a wider range of applications. In order to promote reproducibility, we release our source code and models.
accepted-poster-papers
Meta score: 7. This paper presents a novel architecture for neural network based TTS using a memory buffer architecture. The authors have made good efforts to evaluate this system against other state-of-the-art neural TTS systems, although this is hampered by the need for re-implementation and the evident lack of optimal hyperparameters for e.g. Tacotron. TTS is hard to evaluate against existing approaches, since it requires subjective user evaluation. But overall, despite its limitations, this is a good and interesting paper which I would like to see accepted. Pros: - novel architecture - good experimentation on multiple databases - good response to reviewer comments - good results. Cons: - some problems with the experimental comparison (baselines compared against) - writing could be clearer, and sometimes it feels like the authors are slightly overclaiming. I take the point that this might be more suitable for a speech conference, but it seems to me that the paper offers enough to the ICLR community for it to be worth accepting.
train
[ "HkP46XXlf", "rJW-33tlG", "Hy_P77pxM", "rJzre96bf", "SkY6kqpWM", "HkBPy9pZf", "SyGuaKpZG", "Hy9R2F6-M", "r1IFcs7xG", "SyrJ6XQgM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public", "official_reviewer" ]
[ "This is an interesting paper investigating a novel neural TTS strategy that can generate speech signals by sampling voices in the wild. The main idea here is to use a working memory with a shifting buffer. I also listened to the samples posted on github and the quality of the generated voices seems to be OK considering that the voices are actually sampled in the wild. Compared to other state-of-the-art systems like wavenet, deep voice and tacotron, the proposed approach here is claimed to be simpler and relatively easy to deploy in practice. Globally this is a good piece of work with solid performance. However, I have some (minor) concerns.\n\n1. Although the authors claim that there is no RNNs involved in the architectural design of the system, it seems to me that the working memory with a shifting buffer which takes the previous output as one of its inputs is a network with recurrence. \n\n2. Since the working memory is the key in the architectural design of VoiceLoop, it would be helpful to show its behavior under various configurations and their impact to the performance. For instance, how will the length of the running buffer affect the final quality of the voice? \n\n3. A new speaker's voice is generated by only providing the speaker's embedding vector to the system. This will require a large number of speakers in the training data in the first place to get the system learn the spread of speaker embeddings in the latent (embedding) space. What will happen if a new speaker's acoustic characteristics are obvious far away from the training speakers? For instance, a girl voice vs. adult male training speakers. In this case, the embedding of the girl's voice will show up in the sparse region of the embedding space of training speakers. How does it affect the performance of the system? It would be interesting to know. ", "This paper present the application of the memory buffer concept to speech synthesis, and additionally learns a \"speaker vector\" that makes the system adaptive and work reasonably well on \"in-the-wild\" speech data. This is a relevant problem, and a novel solution, but synthesis is a wicked problem to evaluate, so I am not sure if ICLR is the best venue for this paper. I see two competing goals:\n\n- If the focus is on showing that the presented approach outperforms other approaches under given conditions, a different task would be better (for example recognition, or some sort of trajectory reconstruction)\n- If the focus is on showing that the system outperforms other synthesis systems, then a speech oriented venue might be best (and it is unfortunate that optimized hyper-parameters for the other systems are not available for a fair comparsion)\n- If fair comparisons with the other appraoches cannot be made, my sense is that the multi-speaker (post-training fitting) option is really the most interesting and novel contribution here, which could be discussed in mroe detail\n\nStill, the approach is creative and interesting and deserves to be presented. I have a few questions/ suggestions:\n\nIntroduction\n\n- The link to Baddeley's \"phonological loop\" concept seems weak at best. 
There is nothing phonological about the features that this model stores and retrieves, and no evidence that the model behaves in a way consistent with \"phonological\" (or articulatory) assumptions or models - maybe best to avoid distracting the reader with this concept and strengthen the speaker adaptation aspect?\n- The memory model is not an RNN, but it is a recurrently called structure (as the name \"phonological loop\" also implies) - so I would also not highlight this point much\n- Why would the four properties of the proposed method (mid of p. 2, end of introduction: memory buffer, shared memory, shallow fully connected networks, and simple reader mechanism) lead to better robustness and improve performance on noisy and limited training data? Maybe the proposed approach works better for any speech synthesis task? Why specifically for \"in-the-wild\" data? The results in Table 2 show that the proposed system outperforms other systems on Blizzard 2013, but not Blizzard 2011 - does this support the previous argument?\n- Why not also evaluate MCD scores? This should be a quick and automatic way to diagnose what the system is doing? Or is this not meaningful with the noisy training data?\n\nPrevious work\n\n- Please introduce abbreviations the first time they are used (\"CBHG\" for example)\n- There is other work on using \"in-the-wild\" speech as well: Pallavi Baljekar and Alan W Black. Utterance Selection Techniques for TTS Systems using Found Speech, SSW 2016, Sunnyvale, USA, Sept 2016\n\nThe architecture\n- Please explain the \"GMM\" (Gaussian Mixture Model?) attention mechanism in a bit more detail; how does back-propagation work in this case?\n- Why was this approach chosen? Does it promise to be robust or good for low data situations specifically?\n- The fonts in Figure 2 are very small, please make them bigger, and the Figure may not print well in b/w. Why does the mean of the absolute weights go up for high buffer positions? Is there some \"leaking\" from even longer contexts?\n- I don't understand \"However, human speech is not deterministic and one cannot expect [...] truth\". You are saying that the model cannot be expected to reproduce the input exactly? Or does this apply only to the temporal distribution of the sequence (but not the spectral characteristics)? The previous sentence implies that it does. And how does teacher-forcing help in this case?\n- What type of speed is \"x5\"? Five times slower or faster than real-time?\n\nExperiments\n- Table 2: maybe mention how these results were computed, i.e. which systems use optimized hyper parameters, and which don't? How do these results support the interpretation of the results in the introduction re in-the-wild data and found data?\n- I am not sure how to read Figure 4. Maybe it would be easier to plot the different phone sequences against each other and show how the timings are off, i.e. plot the time of the center of panel one vs the time of the center of panel 2 for the corresponding phone, and show how this is different from a straight line. Or maybe plot phones as rectangles that get deformed from square shape as durations get learned?\n- Figure 5: maybe provide spectrograms and add pitch contours to better show the effect of the different intonations? \n- Figure 4 uses a lot of space, could be reduced, if needed\n\nDiscussion\n- I think the first claim is a bit too broad - nowhere is it shown that the method is inherently more robust to clapping and laughs, and variable prosody. 
The authors will know the relevant data-sets better than I do, maybe they can simply extend the discussion to show that this is what happens. \n- Efficiency: I think Wavenet has also gotten much faster and runs in less than real-time now - can you expand that discussion a bit, or maybe give estimates in terms of FLOPS required, rather than anecdotal evidence for systems that may or may not be comparable?\n\nConclusion\n- Now the advantage of the proposed model is with the number of parameters, rather than the computation required. Can you clarify? Are your models smaller than competing models?\n", "This paper studies the problem of text-to-speech synthesis (TTS) \"in the wild\" and proposes to use a shifting buffer memory. \n\nSpecifically, an input text is transformed to a phoneme encoding and then a context vector is created with an attention mechanism. With the context, speaker ID, previous output, and buffer, the new buffer representation is created with a shallow fully connected neural network and inserted into the buffer memory. Then the output is created from the buffer and speaker ID with another fully connected neural network. A novel speaker can be adapted just by fitting it with SGD while fixing all other components.\n\nIn experiments, the authors try single-speaker TTS and multi-speaker TTS along with speaker identification (ID), and show that the proposed approach outperforms baselines, namely Tacotron and Char2wav. Finally, they use the challenging YouTube data to train the model and show promising results.\n\nI like the idea in the paper but it has some limitations as described below:\n\nPros:\n1. It uses a relatively simple architecture and fewer parameters by using shallow fully-connected neural networks. \n2. Using a shifting buffer memory looks interesting and novel.\n3. The proposed approach outperforms baselines in several tasks, and the ability to fit to a novel speaker is nice. But there are some issues as well (see Cons.)\n\nCons:\n1. Writing is okay but could be improved. Some notations were not clearly described in the text even though they appeared in the table. \n2. Baselines. The paper says Deep Voice 2 (Arik et al., 2017a) is the only prior work for multi-speaker TTS. However, it was not compared to. Also, for multi-speaker TTS, in (Arik et al., 2017a), Tacotron (Wang et al., 2017) was used as a baseline, but in this paper only Char2wav was employed as a baseline. Also, for the YouTube dataset, it would be great if some baselines were compared against, as in (Arik et al., 2017a).\n\n\nDetailed comment:\n1. To demonstrate the efficiency of the proposed model, it would be great to have the numbers of parameters for the proposed model and baseline models.\n2. I was not so clear about how to fit a new speaker, and adding more detail would be good.\n3. Why do you think your model is better than the VCTK test split, and even VCTK85 is better than VCTK101?", "Thank you for the suggestion regarding Fig. 5. The figure has been updated. \nFollowing the reviewer's suggestion for figure 5, the caption of figure 5 now reads: “Same input, different intonations. A single in the wild speaker saying the sentence ``priming is done like that '', where each time $S_0$ is initialized differently. (a) Without priming. (b) Priming with the word ``I\". (c) Priming with the word ``had''. (d) Priming with the word ``must''. (e) Priming with the word ``bye''. The figure shows the raw waveform, spectrogram, and F0 estimation (including voicedness) in the first, second and third rows respectively. 
From the spectrogram plots we can observe different durations for some phonemes. The F0 estimation of (c) and (d) shows that the speaker talks in a higher tone, while in (b) and (e) we can observe a lower tone of the speaker. This demonstrates how priming changes the intonations of the model outputs.”\n\nFollowing the reviewer's suggestion, we have added details to the first paragraph of the discussion. Its last sentence now reads as follows:\n“As our experiments show, our method is mostly robust to these, since it is able to model the voices despite these difficulties and without replicating the background noises in the synthesized output. The baseline model of Char2Wav was not able to properly model the voices of the YouTube dataset and presented clapping sounds in its output.”\n\nIn light of the importance of the wavenet model to the industry, considerable effort has been invested in speeding up its formidably slow inference time. Most of these engineering efforts focus on eliminating redundancy and on efficient software/hardware utilization. A month after our submission, a new model, “parallel wavenet”, emerged, which has a run time comparable to our (unoptimized) approach. Unlike our model, it is not based on attention (which requires sequential computation) but on linguistic features.\n \nRegarding the number of parameters, please see our response to AnonReviewer3. The number of parameters is similar to Multispeaker Tacotron, but our architecture is much simpler. It is considerably lower than that of DV2. As shown in the response to AnonReviewer3, we can compress the number of parameters by half and maintain a reasonable performance. The conclusions were slightly altered in order to reflect this.", "Thank you for pointing us to the missing reference. We have added new text to the previous work section:\n“HMM-based methods require careful collection of the samples, or as recently attempted by~Baljekar et al., filtering of noisy samples for in-the-wild application.“\n \nFollowing the reviewer's suggestion, we have computed Mel cepstral distortion (MCD) scores. This is an automatic, albeit very limited, method of testing compatibility between two audio sequences. Since the sequences are not aligned, we employ MCD DTW, which uses dynamic time warping (DTW) to align the sequences. The results below correspond to Tab. 3 and 5 in the current paper (these were numbered 2 and 3 before). As can be seen, our method outperforms the baseline methods in this score as well, except for one single-speaker experiment, where Tacotron achieves a lower distortion. As can be seen in the MOS data for the very same experiment, Tacotron is not really performing well in this experiment. These results are now added to the paper as Tab. 4 and 6.\n\n\n \tLJ\t Blizzard 2011 Blizzard 2013\nTacotron\t12.82+-1.41\t14.60+-7.02\t---\nChar2Wav\t19.41+-5.15\t13.97+-4.93\t18.72+-6.41\nVoiceLoop\t14.42+-1.39\t8.86+-1.22\t8.67+-1.26\n\n \tVCTK22\tVCTK65\tVCTK85\tVCTK101\nChar2Wav\t15.71+-1.82\t15.1+-1.45\t15.23+-1.49\t15.06+-1.32\nVoiceLoop\t13.74+-0.98\t14.1+-0.94\t14.16+-0.87\t14.22+-0.88\n\nFollowing the reviewer's request, we have added more text to the part of the paper that describes the attention mechanism, which is based on Gaussian Mixture Models. The added text reads:\n“The loss function of the entire model depends on the attention vector through this context vector. 
The GMM is differentiable with respect to the mean, std and weight, and these are updated, during training, through backpropagation.”\n\nAs mentioned, the attention mechanism was selected since it is monotonic and since it was successfully employed in a previous speech synthesis work. We have since experimented with slightly modified versions, which, for example, select the mixture component with the maximal probability instead of a weighted average. This seems to work somewhat better.\n\nFollowing the reviewer's suggestion, Figure 2 has been remade. The phenomenon the reviewer noted, that the weights tend to increase at the end, suggests that there might be valuable information beyond the memory horizon. However, as noted in our response to AnonReviewer1, longer buffers did not result in a noticeable improvement.\n\nRegarding the sentence on human speech: we simply meant that even the same speaker cannot replicate her voice to completely remove the MSE loss, since there is variability every time a sentence is spoken. Teacher forcing solves this since it eliminates most of the drift and enforces a specific way of uttering the sentence. A clarification has been added to the paper as follows:\n“For example, even the same speaker cannot replicate her voice to completely remove the MSE loss since there is variability when repeating the same sentence. Teacher forcing solves this since it eliminates most of the drift and enforces a specific way of uttering the sentence.“\n\nBy 5x we mean 5 times faster than its CPU implementation, which is near real-time. This is now made clear in the paper. \n\nFor the query on which systems use optimized hyper parameters: the Tacotron reimplementations were optimized by the community to work best on each of the datasets given. For Char2Wav, we made the best effort to find the best hyper parameters for each dataset, including the in-the-wild datasets. \n\nFollowing the review, we have added the following text to the paper:\n“The training of the Char2Wav model, in each experiment, was optimized by measuring the loss on the validation set, over the following hyperparameters: initial learning rate of [1e-2,1e-3, 1e-4], source noise standard deviation [1,2,4], batch-size [16,32,64] and the length of each training sample [10e2, 10e4]. “\n\nRegarding the clarity of Fig. 4, from the description we understand that Fig. 3 is questioned. The suggested visualizations are indeed suitable. However, the figure, as provided, has the advantage of depicting the actual (raw) probabilities. Before submitting the paper, we tried various other ways to visualize; most resulted in “less-scientific” or cluttered plots. \nFollowing the review, we added the 4th Mel-cepstrum coefficient for the three speakers. Each is compared against the ground-truth of the first speaker, further illustrating the differences between different speakers.
There is sizable interest within the ICLR community in speech synthesis, which has become in the last few years a fast-paced field with a constant stream of new results. Char2Wav, SampleRNN, and “Fast Generation for Convolutional Autoregressive Models” (ICLR'17) are three recent examples, as well as many submissions to ICLR'18.\n\nWe are sorry that AnonReviewer2 was not convinced by the link to the phonological loop. We can of course remove this part without taking anything away from the paper's clarity, technical novelty and experimental success. The link is said explicitly to be an inspiration only. \n\nHowever, the link to the phonological loop fascinates us, and we observe in our model components that exist in Baddeley's model. \n1. By phonological, we don't mean information related necessarily to phonemes, as the review seems to imply. Rather, we mean a joint (mixed) representation, in memory, of sound based information and language based information, which is a unique characteristic of our model.\n2. The articulatory information in our model does not correspond to a physical model of the human vocal tract. Our model synthesizes a WAV using vocoder features, and the network that generates vocoder features is our analog of an articulatory system. \n3. Other important aspects of Baddeley's phonological loop include a short term memory, which we have, and a rehearsal mechanism. The analog to a rehearsal mechanism is the recursive way in which our buffer is updated. Namely, the new element in the buffer u is computed based on the entire buffer. We would like to stress that without this part, our model is completely ineffective, as noted in Sec. 3, description of step III.\n\nThe text of the discussion has been updated and it now reads as follows. If this is objectionable, we can remove the entire comparison to Baddeley's work.\n“The link we form to the model of Baddeley is by way of analogy and, to be clear, does not imply that we implement this model as is. Specifically, by phonological features, we mean a joint (mixed) representation, in memory, of sound based information and language based information, which is a unique characteristic of our model in comparison to previous work. The short term memory in Baddeley's model is analogous to our buffer, and the analog to the rehearsal mechanism is the recursive way in which our buffer is updated. Namely, the new element in the buffer (u) is calculated based on the entire buffer. As noted in Sec. 3, without this dependency on the buffer, our model becomes completely ineffective.“\n\nFollowing the reviewers' request, we have added more details to Sec. 3.2.\n\nRegarding our model belonging to the family of RNNs, we have addressed this in our reply to AnonReviewer1. We meant to highlight the differences from conventional and other existing RNN models, and this is now clarified.\n\nThe link between the model properties and the ability to fit speakers with less and lower-quality data is a hypothesis only (and presented as such). “In the wild” is an appealing application that is enabled by this capability, not the only one. The hypothesis above is based on the commonsense assumption that simplicity leads to robustness. Since our architecture employs a simple reader, a shared memory and shallow networks, it is simpler than other architectures. 
We present this link as a hypothesis and do not test it directly since it is extremely hard to build a hybrid system (that has some of the properties) which works.\n\nOn Blizzard 2011, our results are better than Tacotron (reimplementation) but not significantly better than Char2Wav, while on Blizzard 2013 they are significantly better than both. This can be attributed to the clean nature of Blizzard 2011, for which Char2Wav is robust enough, and it therefore supports the robustness-to-noise claims. The text of the paper is updated to address this and now reads:\n“It is interesting to note that on Blizzard 2011, our results are better than Tacotron (reimplementation) but not significantly better than Char2Wav, while on Blizzard 2013 they are significantly better than both. This can be attributed to the clean nature of Blizzard 2011, for which Char2Wav is robust enough, and demonstrates our method's robustness to noise.“
This results in a total number of parameters that is only about half of the original number (5.7M), and in a small loss of performance. The MOS for this smaller model (Loop-bottleneck) are below.\n\nMulti Speaker\t vctk22 model\nChar2wav\t 2.78+-1.00\nLoop\t 3.54+-0.96\nGT\t 4.63+-0.63\nLoop - bottleneck\t 3.17+-0.98\n\n\n2. Following the reviewers' request, we have added more details to Sec. 3.2.\n\n\n3. The Top-1 identification scores are computed by a multi-classification speaker network. Better classification rates with generated samples are expected, since the generated distribution lies closer to the training samples distribution than the test distribution. Also, since VCTK85 has only 85 classes, it is expected to perform marginally better than VCTK101, which has 101 classes.\n", "Thank you for your constructive comments.\n\n1. The proposed memory model is indeed a network with recurrence and, therefore, it is a Recurrent Neural Network. We meant to highlight the differences from conventional and other existing RNN models. We will clarify this point.\n\n2. In our experiments, buffer sizes between 15 and 30 seem to produce similar results. Less than 15 is detrimental. See also Fig. 2, which depicts the relative contribution of each column of the buffer and shows that all columns impact the computations done for the attention, the output, and the buffer update.\n\n3. We have not tested the specific experiment that is described but have done a similar experiment in which we train on voices of North Americans and fit on other accents. The results confirm that when fitting out-of-distribution voices, the quality degrades. However, the fitting is still successful in capturing the pitch and other aspects of the voice. \n\nThe authors.", "This is an interesting paper investigating a novel neural TTS strategy that can generate speech signals by sampling voices in the wild. The main idea here is to use a working memory with a shifting buffer. I also listened to the samples posted on GitHub and the quality of the generated voices seems to be OK considering that the voices are actually sampled in the wild. Compared to other state-of-the-art systems like WaveNet, Deep Voice and Tacotron, the proposed approach here is claimed to be simpler and relatively easy to deploy in practice. Globally this is a good piece of work with solid performance. However, I have some (minor) concerns.\n\n1. Although the authors claim that there are no RNNs involved in the architectural design of the system, it seems to me that the working memory with a shifting buffer, which takes the previous output as one of its inputs, is a network with recurrence. \n\n2. Since the working memory is the key in the architectural design of VoiceLoop, it would be helpful to show its behavior under various configurations and their impact on the performance. For instance, how will the length of the running buffer affect the final quality of the voice? \n\n3. A new speaker's voice is generated by only providing the speaker's embedding vector to the system. This will require a large number of speakers in the training data in the first place to get the system to learn the spread of speaker embeddings in the latent (embedding) space. What will happen if a new speaker's acoustic characteristics are obviously far away from the training speakers? For instance, a girl's voice vs. adult male training speakers. In this case, the embedding of the girl's voice will show up in the sparse region of the embedding space of training speakers. 
How does it affect the performance of the system? It would be interesting to know. " ]
[ 8, 5, 6, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_SkFAWax0-", "iclr_2018_SkFAWax0-", "iclr_2018_SkFAWax0-", "SkY6kqpWM", "HkBPy9pZf", "rJW-33tlG", "HkP46XXlf", "Hy_P77pxM", "SyrJ6XQgM", "iclr_2018_SkFAWax0-" ]