Dataset schema (one row per paper):
- paper_id: string (19 to 21 chars)
- paper_title: string (8 to 170 chars)
- paper_abstract: string (8 to 5.01k chars)
- paper_acceptance: string (18 classes)
- meta_review: string (29 to 10k chars)
- label: string (3 classes)
- review_ids: sequence
- review_writers: sequence
- review_contents: sequence
- review_ratings: sequence
- review_confidences: sequence
- review_reply_tos: sequence
iclr_2018_ryQu7f-RZ
On the Convergence of Adam and Beyond
Several recently proposed stochastic optimization methods that have been successfully used in training deep networks, such as RMSProp, Adam, Adadelta, and Nadam, are based on gradient updates scaled by square roots of exponential moving averages of squared past gradients. In many applications, e.g. learning with large output spaces, it has been empirically observed that these algorithms fail to converge to an optimal solution (or a critical point in nonconvex settings). We show that one cause for such failures is the exponential moving average used in the algorithms. We provide an explicit example of a simple convex optimization setting where Adam does not converge to the optimal solution, and describe the precise problems with the previous analysis of the Adam algorithm. Our analysis suggests that the convergence issues can be fixed by endowing such algorithms with "long-term memory" of past gradients, and we propose new variants of the Adam algorithm which not only fix the convergence issues but often also lead to improved empirical performance.
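The "long-term memory" fix amounts to scaling updates by the running maximum of the second-moment estimate instead of the estimate itself, so the effective per-coordinate learning rates never increase (the reviews below describe the same mechanism). A minimal NumPy sketch of one update step; the variable names and default hyperparameters are ours, not the paper's notation:

```python
import numpy as np

def amsgrad_step(x, g, m, v, vhat, t, alpha=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One AMSGrad step: Adam plus the vhat ratchet that keeps the
    effective learning rate matrices V_t^{1/2}/alpha_t non-increasing."""
    m = beta1 * m + (1 - beta1) * g        # first-moment EMA (momentum)
    v = beta2 * v + (1 - beta2) * g * g    # second-moment EMA
    vhat = np.maximum(vhat, v)             # "long-term memory" of past gradients
    x = x - alpha / np.sqrt(t) * m / (np.sqrt(vhat) + eps)
    return x, m, v, vhat
```

Replacing `vhat` with `v` in the update line recovers plain Adam, which is exactly the step the convergence proof cannot justify.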
accepted-oral-papers
This paper analyzes a problem with the convergence of Adam, and presents a solution. It identifies an error in the convergence proof of Adam (which also applies to related methods such as RMSProp) and gives a simple example where it fails to converge. The paper then repairs the algorithm in a way that guarantees convergence without introducing much computational or memory overhead. There ought to be a lot of interest in this paper: Adam is a widely used algorithm, but sometimes underperforms SGD on certain problems, and this could be part of the explanation. The fix is both principled and practical. Overall, this is a strong paper, and I recommend acceptance.
test
[ "HkhdRaVlG", "H15qgiFgf", "Hyl2iJgGG", "BJQcTsbzf", "HJXG6sWzG", "H16UnjZMM", "ryA-no-zz", "HJTujoWGG", "ByhZijZfG", "SkjC2Ni-z", "SJXpTMFbf", "rkBQ_QuWf", "Sy5rDQu-z", "SJRh-9lef", "Bye7sLhkM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "public", "public", "public", "public", "author", "public" ]
[ "The paper presents three contributions: 1) it shows that the proof of convergence Adam is wrong; 2) it presents adversarial and stochastic examples on which Adam converges to the worst possible solution (i.e. there is no hope to just fix Adam's proof); 3) it proposes a variant of Adam called AMSGrad that fixes the problems in the original proof and seems to have good empirical properties.\n\nThe contribution of this paper is very relevant to ICLR and, as far as I know, novel.\nThe result is clearly very important for the deep learning community.\nI also checked most of the proofs and they look correct to me: The arguments are quite standard, even if the proofs are very long.\n\nOne note on the generality of the results: the papers states that some of the results could apply to RMSProp too. However, it has been proved that RMSProp with a certain settings of its parameters is nothing else than AdaGrad (see Section 4 in Mukkamala and Hein, ICML'17). Hence, at least for a certain setting of its parameters, RMSProp will converge. Of course, the proof in the ICML paper could be wrong, I did not check that...\n\nA general note on the learning rate: The fact that most of these algorithms are used with a fixed learning rate while the analysis assume a decaying learning rate should hint to the fact that we are not using the right analysis. Indeed, all these variants of AdaGrad did not really improve the AdaGrad's regret bound. In this view, none of these algorithms contributed in any meaningful way to our understanding of the optimization of deep networks *nor* they advanced in any way the state-of-the-art for optimizing convex Lipschitz functions.\nOn the other hand, analysis of SGD-like algorithms with constant step sizes are known. See, for example, Zhang, ICML'04 where linear convergence is proved in a neighbourhood of the optimal solution for strongly convex problems.\nSo, even if I understand this is not the main objective of this paper, it would be nice to see a discussion on this point and the limitations of regret analysis to analyse SGD algorithms.\n\nOverall, I strongly suggest to accept this paper.\n\n\nSuggestions/minor things:\n- To facilitate the reader, I would state from the beginning what are the common settings of beta_1 and beta_2 in Adam. This makes easier to see that, for example, the condition of Theorem 2 is verified.\n- \\hat{v}_{0} is undefined in Algorithm 2.\n- The graphs in figure 2 would gain in readability if the setting of each one of them would be added as their titles.\n- McMahan and Streeter (2010) is missing the title. (Also, kudos for citing both the independent works on AdaGrad)\n- page 11, last equation, 2C-4=2C-4. Same on page 13.\n- Lemma 4 contains x_1,x_2,z_1, and z_2: are x_1 and z_1 the same? also x_2 and z_2?", "This work identifies a mistake in the existing proof of convergence of\nAdam, which is among the most popular optimization methods in deep\nlearning. Moreover, it gives a simple 1-dimensional counterexample with\nlinear losses on which Adam does not converge. The same issue also\naffects RMSprop, which may be viewed as a special case of Adam without\nmomentum. The problem with Adam is that the \"learning rate\" matrices\nV_t^{1/2}/alpha_t are not monotonically decreasing. A new method, called\nAMSGrad is therefore proposed, which modifies Adam by forcing these\nmatrices to be decreasing. It is then shown that AMSGrad does satisfy\nessentially the same convergence bound as the one previously claimed for\nAdam. 
Experiments and simulations are provided that support the\ntheoretical analysis.\n\nApart from some issues with the technical presentation (see below), the\npaper is well-written.\n\nGiven the popularity of Adam, I consider this paper to make a very\ninteresting observation. I further believe all issues with the technical\npresentation can be readily addressed.\n\n\n\nIssues with Technical Presentation:\n\n- All theorems should explicitly state the conditions they require\n instead of referring to \"all the conditions in (Kingma & Ba, 2015)\".\n- Theorem 2 is a repetition of Theorem 1 (except for additional\n conditions).\n- The proof of Theorem 3 assumes there are no projections, so this\n should be stated as part of its conditions. (The claim in footnote 2\n that they can be handled seems highly plausible, but you should be up\n front about the limitations of your results.)\n- The regret bound Theorem 4 establishes convergence of the optimization\n method, so it plays the role of a sanity check. However, it is\n strictly worse than the regret bound O(sqrt{T}) for online gradient\n descent [Zinkevich,2003], so it cannot explain why the proposed\n AMSgrad method might be adaptive. (The method may indeed be adaptive\n in some sense; I am just saying the *bound* does not express that.\n This is also not a criticism of the current paper; the same remark\n also applies to the previously claimed regret bound for Adam.)\n- The discussion following Corollary 1 suggests that sum_i\n hat{v}_{T,i}^{1/2} might be much smaller than d G_infty. This is true,\n but we should always expect it to be at least a constant, because\n hat{v}_{t,i} is monotonically increasing by definition of the\n algorithm, so the bound does not get better than O(sqrt(T)).\n It is also suggested that sum_i ||g_{1:T,i}|| = sqrt{sum_{t=1}^T\n g_{t,i}^2} might be much smaller than dG_infty, but this is very\n unlikely, because this term will typically grow like O(sqrt{T}),\n unless the data are extremely sparse, so we should at least expect\n some dependence on T.\n- In the proof of Theorem 1, the initial point is taken to be x_1 = 1,\n which is perfectly fine, but it is not \"without loss of generality\",\n as claimed. This should be stated in the statement of the Theorem.\n- The proof of Theorem 6 in appendix B only covers epsilon=1. If it is\n \"easy to show\" that the same construction also works for other\n epsilon, as claimed, then please provide the proof for general\n epsilon.\n\n\nOther remarks:\n\n- Theoretically, nonconvergence of Adam seems a severe problem. Can you\n speculate on why this issue has not prevented its widespread adoption?\n Which factors might mitigate the issue in practice?\n- Please define g_t \\circ g_t and g_{1:T,i}\n- I would recommend sticking with standard linear algebra notation for\n the sqrt and the inverse of a matrix and simply using A^{-1} and\n A^{1/2} instead of 1/A and sqrt{A}.\n- In theorems 1,2,3, I would recommend stating the dimension (d=1) of\n your counterexamples, which makes them very nice!\n\nMinor issues:\n\n- Check accent on Nicol\\`o Cesa-Bianchi in bibliography.\n- Near the end of the proof of Theorem 6: I believe you mean Adam\n suffers a \"regret\" instead of a \"loss\" of at least 2C-4.\n Also 2C-4=2C-4 is trivial in the second but last display.\n", "This paper examines the very popular and useful ADAM optimization algorithm, and locates a mistake in its proof of convergence (for convex problems). 
Not only that, the authors also show a specific toy convex problem on which ADAM fails to converge. Once the problem was identified to be the decrease in v_t (and increase in learning rate), they modified the algorithm to solve that problem. They then show the modified algorithm does indeed converge and show some experimental results comparing it to ADAM.\n\nThe paper is well written, interesting and very important given the popularity of ADAM. \n\nRemarks:\n- The fact that your algorithm cannot increase the learning rate seems like a possible problem in practice. A large gradient at the first steps due to bad initialization can slow the rest of training. The experimental part is limited, as you state \"preliminary\", which is unfortunate for a work with a possibly important practical implication. Considering how easy it is to run experiments with standard networks using open-source software, this can easily improve the paper. That being said, I understand that the focus of this work is theoretical, and it well deserves to be accepted based on the theoretical work.\n\n- On page 14 the fourth inequality is not clear to me.\n\n- On page 6 you talk about an alternative algorithm using smoothed gradients which you do not mention anywhere else, and this isn't that clear (there is more than one way to smooth). A simple pseudo-code in the appendix would be welcome.\n\nMinor remarks:\n- After the proof of theorem 1 you jump to the proof of theorem 6 (which isn't in the paper) and then continue with theorem 2. It is a bit confusing.\n- Page 16 at the bottom v_t= ... sum beta^{t-1-i}g_i should be g_i^2\n- Page 19 second line, you switch between j&t and it is confusing. Better notation would help.\n- The cifarnet uses an LRN layer, which isn't used anymore.", "We thank the reviewer for the very helpful and constructive feedback. \n\nAbout Mukkamala and Hein 2017 [MH17]: Thanks for pointing out this paper. As the anonymous reviewer rightly points out, [MH17] does not look at the standard version of RMSProp but rather a modification, and thus there is no contradiction with our paper. We will make this point clear in the final version of the paper.\n\nRegarding the note about the learning rate: While it is true that none of these new rates improve upon Adagrad rates, in fact, in the worst case one cannot improve the regret of standard online gradient descent in the general convex setting. Adagrad improves this in the special case of sparse gradients (see, for instance, Section 1.3 of Duchi et al. 2011). However, these algorithms, which are designed for specific convex settings, appear to perform reasonably well in nonconvex settings too (especially in deep networks). Exponential moving average (EMA) variants seem to further improve the performance in the (dense) nonconvex setting. Understanding the cause for good performance in nonconvex settings is an interesting open problem. Our aim was to take an initial step to develop more principled EMA approaches. We will add a description in the final version of the paper.\n\nLemma 4: Thanks for pointing it out and sorry for the confusion. Indeed, x1 = z1 and x2 = z2. We have corrected this typo.\n\nWe have also revised the paper to address the minor typos mentioned in the review.", "We deeply appreciate the reviewer's thorough and constructive feedback. 
\n\n- Theorem 2 & 3 are much more involved and hence the aim of Theorem 1 was to provide a simplified counter-example for a restrictive setting, thereby providing the key ideas of the paper.\n- We will emphasize your point about projections in the final version of the paper.\n- We agree that the role of Theorem 4 right now is to provide a sanity check. Indeed, it is not possible to improve upon the of online gradient descent in the worst case convex settings. Algorithms such as Adagrad exploit structure in the problem such as sparsity to provide improved regret bounds. Theorem 4 provides some adaptivity to sparsity of gradients (but note that these are upper bounds and it is not clear if they are tight). Adaptive methods seem to perform well in few non-sparse and nonconvex settings too. It remains open to understand it in the nonconvex settings of our interest. \n- Indeed, there is a typo; we expect ||g{1:T,i}|| to grow like sqrt(T). The main benefit in adaptive methods comes in terms of sparsity (and dimension dependence). For example see Section 1.3 in Duchi et al. 2011). We have revised the paper to incorporate these changes.\n- We can indeed assume that x_1 = 1 (without loss of generality) because for any choice of initial point, we can always translate the function so that x_1 = 1 is the initial point in the new coordinate system. We will add a discussion about this in the final version of the paper.\n- The last part of Theorem 6 explains the reduction with respect to general epsilon. We will further highlight this in the final version of the paper.\n\nOther remarks:\n\nRegarding widespread adoption of Adam: It is possible that in certain applications the issues we raised in this work are not that severe (although they can still lead to degradation in generalization performance). On the contrary, there exist a large number of real-world applications, for instance training models with large output spaces, which suffer from the issues we have highlighted and non-convergence has been observed to occur more frequently. Often, this non-convergence is attributed to nonconvexity but our paper shows one of the causes that applies even to convex settings. \nAs stated in the paper, using a problem specific large beta2 seems to help in some applications. Researchers have developed many tricks (such as gradient clipping) which might also play a role in mitigating these issues. We propose two different approaches to fix this issue and it will be interesting to investigate these approaches in various applications.\n\nWe have addressed all other minor concerns directly in the revision of the paper.\n", "Thanks David, for your interest in this paper and helpful comments (and pointers). We have addressed your concerns regarding typos in the latest revision of the paper.\n", "Thanks for your interest in our paper and for your feedback. We believe that beta1 is not an issue for convergence of Adam (although our theoretical analysis assumes a decreasing beta1). For example, in stochastic convex optimization, momentum based methods have been shown to converge even for constant beta1. That said, it is indeed interesting to develop better understanding of the effect of momentum in convergence of these algorithms (especially in the nonconvex setting).\n\nAs the paper shows, for any constant beta2, there exists a counter-example for non-convergence of Adam (both in online as well as stochastic setting, Theorem 2 & Theorem 3). 
Using a large beta2 can partially mitigate this issue in practice, but it is not clear how high beta2 should be; this is indeed an interesting research question. Our paper proposes a couple of approaches (AMSGrad & AdamNC) for addressing these issues. AMSGrad allows us to use a fixed beta2 by changing the structure of the algorithm (and also allows us to use a much more slowly decaying learning rate than Adagrad). AdamNC looks at an approach where beta2 changes with t, ultimately converging to 1, hopefully allowing us to retain the benefits of Adam while at the same time circumventing its non-convergence.\n\nThe aim of the synthetic experiments was to demonstrate the effect of non-convergence. We can modify them to demonstrate a similar problem for any constant beta2.\n", "(1) Thanks for the interest in our paper and for looking into the analysis carefully. We believe there is a misunderstanding regarding the proof. The third inequality follows from the lower bound v_{t+i-1} \ge (1-\beta)\beta^{i-1}_2 C^2. The fourth inequality actually follows from the upper bound on v_{t+i-1} (which implicitly uses \beta^{i'-1}_2 C^2 \le 1). We revised the paper to provide the detailed derivation, including specifying precise constants that were previously omitted.\n\n(2) Actually, an easy observation from our analysis is that we can bound the regret of AMSGrad by O(G_infty sqrt(T)) as well. This can be easily seen from the proof of Lemma 2, where in the analysis the term \sum_{t=1}^T |g_{t,i}|/\sqrt{t} can be bounded by O(G_infty sqrt(T)) instead of O(\sqrt{\log(T)} ||g_{1:T}||_2). Thus, the regret of AMSGrad is upper bounded by the minimum of O(G_infty sqrt(T)) and the bound presented in Theorem 4, and thus the worst-case dependence on T is \sqrt{T} rather than \sqrt{T \log(T)}. We will make this point in the final version of the paper.", "We thank the reviewer for the helpful and supportive feedback. The focus of the paper is to provide a principled understanding of exponential moving average (EMA) adaptive optimization methods, which are now used as building blocks of many modern deep learning applications. The counter-example for non-convergence we show is very natural and is observed to arise in extremely sparse real-world problems (e.g., pertaining to problems with large output spaces). We provided two general directions to address the convergence issues in these algorithms (by either changing the structure of the algorithm or by gradually increasing beta2 as the algorithm proceeds). We have provided preliminary experiments on a few commonly used networks & datasets, but we do agree that a thorough empirical study will be very useful and is part of our future plan. \n\n- Fourth inequality on Page 14: We revised the paper to explain it further.\n- We will be happy to elaborate on our comment about smoothed gradients in the final version of the paper.\n- We also addressed other minor suggestions.\n", "Dear authors,\n\nIt's a very good paper, but I have some questions as follows:\n\n(1) In the last paragraph on Page 14, it says the fourth inequality is from $\beta^{i'-1}_2 C^2 \le 1$, but I couldn't get from the third inequality to the fourth inequality on Page 14. It seems that you applied the lower bound of $v_{t+i-1}$ (i.e. $v_{t+i-1} \ge (1-\beta)\beta^{i-1}_2 C^2$, which is not desired) instead of its upper bound (which is truly required)? 
\n\n(2) In Corollary 1, from my understanding, the L2 norm of $g_{1:T,1}$ should be upper bounded by $\sqrt{T} G_\infty$, so the regret should be $O(\sqrt{T \log T})$ instead of $O(\sqrt{T})$ as stated in the remark of Corollary 1. \n\nCorrect me if I'm wrong. Thanks!", "Thanks for the inspiring paper. The observations are interesting and important!\n\nIt is easy to see that an exponential moving average might not be able to capture the long-term memory of the gradients. \n\nThe paper mainly focuses on beta2, which is involved in the averaging of the second moment. It makes me wonder whether beta1, used for averaging the first-moment gradient, also suffers from a similar problem. \n\nIt seems a direct solution would be using a large beta1 and beta2. (Always keeping the maximum of the entire history does not seem to be the best solution, and an average over a recent history might be a better alternative.) \n\nI did not carefully check the details of the paper. But generally, one would have a similar concern, I think. Could you explain the benefits of the proposed algorithm? \n\nThe synthetic experiments seem to use an insufficiently large beta2 relative to the large gradient gap, which makes them unable to capture the necessary long-term dependency. ", "The RMSProp used in Section 4 in Mukkamala and Hein, ICML'17 is not the standard RMSProp but a modification in which the parameter used for computing the geometrical averages of the squared gradient entries changes with time. So there is no contradiction with this paper, which shows counterexamples for the standard algorithm in which that parameter is constant. \n", "Congratulations on this paper, I really enjoyed it. It is a well-written paper that contains an exhaustive set of counterexamples. I had also noticed that the proof of Adam was wrong and included it in my Master Thesis (https://damaru2.github.io/convergence_analysis_hypergradient_descent/dissertation_hypergradients.pdf Section 2.4), and I enjoyed reading through the paper and finding that indeed it was not just that the proof was wrong but that the method does not converge in general, not even in the stochastic case.\n\nI noticed some typos / minor things that seem to need fixing:\n\n+ In the penultimate line of page 16 there is this equality v_{t-1} = .... g_i. This g_i should be squared.\n\n+ In the following line, there is another square missing in a C, it should be (1-\beta_{t-1}_2)(C^2 p + (1-p)), and there is a pair of parentheses missing in the next term, it should be (1-\beta_2^{t-1})((1+\delta)C-\delta)\n\n+ The fact that in Theorems 2 and 3 \beta_2 is allowed to be 1 is confusing, since the method is not well defined if \beta_2 is 1 (and you don't use an \epsilon in the denominator. If you use an \epsilon then with \beta_1 = 0 the method is equivalent to SGD, so it converges for a choice of alpha). In particular, in the proof of theorem 3 \sqrt{1-\beta_2} appears in some denominators and so does \sqrt{\beta_2}, but there is no comment about what happens when these quantities are 0. There should be a quick comment on this, or the \beta_2 \leq 1 should be removed from the theorems.\n\nBest wishes\n", "We thank you for your interest in our paper and for pointing out this missing detail. We use a decreasing step size of alpha/sqrt(t) (as suggested by our theoretical analysis) for the stochastic optimization experiment. The use of a decreasing step size leads to a more stable convergence to the optimal solution (especially in scenarios where the variance is reasonably high). 
We did not use epsilon in this particular experiment since the gradients are reasonably large (in other words, using a small epsilon like 1e-8 should produce more or less identical results). We will add these details in the next revision of our paper.", "Hello,\n\nI tried implementing AMSGrad (here: https://colab.research.google.com/notebook#fileId=1xXFAuHM2Ae-OmF5M8Cn9ypGCa_HHBgfG) for the experiment on the stochastic optimization setting and found that x_t approaches -1 faster than in the paper, but convergence seems less stable, so I was wondering about the specific values of other hyperparameters, like the learning rate and epsilon, which weren't mentioned. In my case I chose a learning rate of 1e-3 and an epsilon of 1e-8, which seems to be the standard value in most frameworks." ]
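The stochastic non-convergence setting discussed in this thread (and in the public comment above) is easy to reproduce. Below is a minimal sketch in the spirit of Theorem 3, using the decreasing step size alpha/sqrt(t) the authors mention; the constants C, p, beta2, the base step size, and the run length are illustrative choices of ours, not the paper's exact values:

```python
import numpy as np

def run(use_amsgrad, T=2_000_000, seed=0):
    """1-D stochastic setting: the gradient is +C with small probability p,
    else -1. Since p*C > 1 - p, the expected loss on [-1, 1] is minimized
    at x = -1; with these constants Adam nonetheless drifts toward +1."""
    rng = np.random.default_rng(seed)
    C, p = 101.0, 0.01                   # illustrative constants
    beta1, beta2, eps = 0.0, 0.99, 1e-8  # momentum off for clarity
    x = m = v = vhat = 0.0
    for t in range(1, T + 1):
        g = C if rng.random() < p else -1.0
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        vhat = max(vhat, v) if use_amsgrad else v
        x -= (0.5 / np.sqrt(t)) * m / (np.sqrt(vhat) + eps)
        x = min(1.0, max(-1.0, x))       # project back onto [-1, 1]
    return x

print("Adam    final x:", run(False))   # typically ends near +1, the worst point
print("AMSGrad final x:", run(True))    # typically ends near the optimum at -1
```

The mechanism is the one the reviews describe: the rare large gradient inflates v_t briefly and is then forgotten by the EMA, so the many small counter-gradients take oversized steps; AMSGrad's running max keeps the large gradient "remembered".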
[ 9, 8, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 5, 4, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_ryQu7f-RZ", "iclr_2018_ryQu7f-RZ", "iclr_2018_ryQu7f-RZ", "HkhdRaVlG", "H15qgiFgf", "Sy5rDQu-z", "SJXpTMFbf", "SkjC2Ni-z", "Hyl2iJgGG", "iclr_2018_ryQu7f-RZ", "iclr_2018_ryQu7f-RZ", "HkhdRaVlG", "iclr_2018_ryQu7f-RZ", "Bye7sLhkM", "iclr_2018_ryQu7f-RZ" ]
iclr_2018_BJ8vJebC-
Synthetic and Natural Noise Both Break Neural Machine Translation
Character-based neural machine translation (NMT) models alleviate out-of-vocabulary issues, learn morphology, and move us closer to completely end-to-end translation systems. Unfortunately, they are also very brittle and easily falter when presented with noisy data. In this paper, we confront NMT models with synthetic and natural sources of noise. We find that state-of-the-art models fail to translate even moderately noisy texts that humans have no trouble comprehending. We explore two approaches to increase model robustness: structure-invariant word representations and robust training on noisy texts. We find that a model based on a character convolutional neural network is able to simultaneously learn representations robust to multiple kinds of noise.
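The synthetic noise types studied here (and named Swap, Mid, Rand, and Key in the reviews below) are simple word-internal perturbations. A sketch of all four; the keyboard-neighbor table is a tiny illustrative subset, not the full layout an actual experiment would use:

```python
import random

QWERTY_NEIGHBORS = {  # illustrative subset; a real table covers the whole layout
    'a': 'qwsz', 'e': 'wsdr', 'o': 'iklp', 's': 'awedxz', 't': 'rfgy',
}

def swap(word, rng):
    """Swap one adjacent character pair, keeping first and last letters fixed."""
    if len(word) < 4:
        return word
    i = rng.randrange(1, len(word) - 2)
    return word[:i] + word[i + 1] + word[i] + word[i + 2:]

def middle_random(word, rng):
    """Shuffle all characters except the first and last ('Cmabrigde' noise)."""
    if len(word) < 4:
        return word
    mid = list(word[1:-1])
    rng.shuffle(mid)
    return word[0] + ''.join(mid) + word[-1]

def fully_random(word, rng):
    """Shuffle all characters of the word."""
    chars = list(word)
    rng.shuffle(chars)
    return ''.join(chars)

def keyboard_typo(word, rng):
    """Replace one character by a QWERTY neighbor, if its neighbors are known."""
    idx = [i for i, c in enumerate(word) if c in QWERTY_NEIGHBORS]
    if not idx:
        return word
    i = rng.choice(idx)
    return word[:i] + rng.choice(QWERTY_NEIGHBORS[word[i]]) + word[i + 1:]

rng = random.Random(0)
print(swap("noise", rng), middle_random("according", rng), keyboard_typo("test", rng))
```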
accepted-oral-papers
The pros and cons of this paper cited by the reviewers can be summarized below:
Pros:
* The paper is a first attempt to investigate an under-studied area in neural MT (and potentially other applications of sequence-to-sequence models as well)
* This area might have a large impact; existing models such as Google Translate fail badly on the inputs described here
* Experiments are very carefully designed and thorough
* Experiments on not only synthetic but also natural noise add significant reliability to the results
* Paper is well-written and easy to follow
Cons:
* There may be better architectures for this problem than the ones proposed here
* Even the natural noise is not entirely natural, e.g. artificially constrained to exist within words
* Paper is not a perfect fit to ICLR (although ICLR is attempting to cast a wide net, so this alone is not a critical criticism of the paper)
This paper had uniformly positive reviews and has potential for large real-world impact.
train
[ "SJoXiUUNM", "SkABkz5gM", "BkQzs54VG", "BkVD7bqlf", "SkeZfu2xG", "SyTfeD5bz", "B1dT1vqWf", "HJ1vJDcZz", "HyRwAIqWf", "rJIbAd7-z" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public" ]
[ "Thanks for your thoughtful response to my review.", "This paper investigates the impact of character-level noise on various flavours of neural machine translation. It tests 4 different NMT systems with varying degrees and types of character awareness, including a novel meanChar system that uses averaged unigram character embeddings as word representations on the source side. The authors test these systems under a variety of noise conditions, including synthetic scrambling and keyboard replacements, as well as natural (human-made) errors found in other corpora and transplanted to the training and/or testing bitext via replacement tables. They show that all NMT systems, whether BPE or character-based, degrade drastically in quality in the presence of both synthetic and natural noise, and that it is possible to train a system to be resistant to these types of noise by including them in the training data. Unfortunately, they are not able to show any types of synthetic noise helping address natural noise. However, they are able to show that a system trained on a mixture of error types is able to perform adequately on all types of noise.\n\nThis is a thorough exploration of a mostly under-studied problem. The paper is well-written and easy to follow. The authors do a good job of positioning their study with respect to related work on black-box adversarial techniques, but overall, by working on the topic of noisy input data at all, they are guaranteed novelty. The inclusion of so many character-based systems is very nice, but it is the inclusion of natural sources of noise that really makes the paper work. Their transplanting of errors from other corpora is a good solution to the problem, and one likely to be built upon by others. In terms of negatives, it feels like this work is just starting to scratch the surface of noise in NMT. The proposed meanChar architecture doesn’t look like a particularly good approach to producing noise-resistant translation systems, and the alternative solution of training on data where noise has been introduced through replacement tables isn’t extremely satisfying. Furthermore, the use of these replacement tables means that even when the noise is natural, it’s still kind of artificial. Finally, this paper doesn’t seem to be a perfect fit for ICLR, as it is mostly experimental with few technical contributions that are likely to be impactful; it feels like it might be more at home and have greater impact in a *ACL conference.\n\nRegarding the artificialness of their natural noise - obviously the only solution here is to find genuinely noisy parallel data, but even granting that such a resource does not yet exist, what is described here feels unnaturally artificial. First of all, errors learned from the noisy data sources are constrained to exist within a word. This tilts the comparison in favour of architectures that retain word boundaries (such as the charCNN system here), while those systems may struggle with other sources of errors such as missing spaces between words. Second, if I understand correctly, once an error is learned from the noisy data, it is applied uniformly and consistently throughout the training and/or test data. This seems worse than estimating the frequency of the error and applying them stochastically (or trying to learn when an error is likely to occur). 
I feel like these issues should at least be mentioned in the paper, so it is clear to the reader that there is work left to be done in evaluating the system on truly natural noise.\n\nAlso, it is somewhat jarring that only the charCNN approach is included in the experiments with noisy training data (Table 6). I realize that this is likely due to computational or time constraints, but it is worth providing some explanation in the text for why the experiments were conducted in this manner. On a related note, the line in the abstract stating that “... a character convolutional neural network is able to simultaneously learn representations robust to multiple kinds of noise” implies that the other (non-charCNN) architectures could not learn these representations, when in reality, they simply weren’t given the chance.\n\nSection 7.2 on the richness of natural noise is extremely interesting, but maybe less so to an ICLR audience. From my perspective, it would be interesting to see that section expanded, or used as the basis for future work on improved architectures or training strategies.\n\nI have only one small, specific suggestion: at the end of Section 3, consider deleting the last paragraph break, so there is one paragraph for each system (charCNN currently has two paragraphs).\n\n[edited for typos]", "The CFP clearly states that \"applications in vision, audio, speech, natural language processing, robotics, neuroscience, or any other field\" are relevant.", "This paper empirically investigates the performance of character-level NMT systems in the face of character-level noise, both synthesized and natural. The results are not surprising:\n\n* NMT is terrible with noise.\n\n* But it improves on each noise type when it is trained on that noise type.\n\nWhat I like about this paper is that:\n\n1) The experiments are very carefully designed and thorough.\n\n2) This problem might actually matter. Out of curiosity, I ran the example (Table 4) through Google Translate, and the result was gibberish. But as the paper shows, it’s easy to make NMT robust to this kind of noise, and Google (and other NMT providers) could do this tomorrow. So this paper could have real-world impact.\n\n3) Most importantly, it shows that NMT’s handling of natural noise does *not* improve when trained with synthetic noise; that is, the character of natural noise is very different. So solving the problem of natural noise is not so simple… it’s a *real* problem. Speculating, again: commercial MT providers have access to exactly the kind of natural spelling correction data that the researchers use in this paper, but at much larger scale. So these methods could be applied in the real world. (It would be excellent if an outcome of this paper was that commercial MT providers answered its call to provide more realistic noise by actually providing examples.)\n\nThere are no fancy new methods or state-of-the-art numbers in this paper. But it’s careful, curiosity-driven empirical research of the type that matters, and it should be in ICLR.", "This paper investigates the impact of noisy input on Machine Translation, and tests simple ways to make NMT models more robust.\n\nOverall the paper is a clearly written, well-described report of several experiments. It shows convincingly that standard NMT models completely break down on both natural \"noise\" and various types of input perturbations. It then tests how the addition of noise in the input helps robustify the charCNN model somewhat. 
The extent of the experiments is quite impressive: three different NMT models are tried, and one is used in extensive experiments with various noise combinations.\n\nThis study clearly addresses an important issue in NMT and will be of interest to many in the NLP community. The outcome is not entirely surprising (noise hurts, and training with the right kind of noise helps) but the impact may be. I wonder if you could put this in the context of \"training with input noise\", which has been studied in neural networks for a while (at least since the 1990s). I.e. it could be that each type of noise has a different regularizing effect, and clarifying what these regularizers are may help understand the impact of the various types of noise. Also, the bit of analysis in Sections 6.1 and 7.1 is promising, if maybe not so conclusive yet.\n\nA few constructive criticisms:\n\nThe way noise is included in training (sec. 6.2) could be clarified (unless I missed it), e.g. are you generating a fixed \"noisy\" training set and adding that to clean data? Or introducing noise \"on-line\" as part of the training? If fixed, what sizes were tried? More information on the experimental design would help.\n\nTable 6 is highly suspect: Some numbers seem to have been copy-pasted in the wrong cells, eg. the \"Rand\" line for German, or the Swap/Mid/Rand lines for Czech. It's highly unlikely that training on noisy Swap data would yield a boost of +18 BLEU points on Czech -- or you have clearly found a magical way to improve performance.\n\nAlthough the amount of experimentation is already substantial, it may be interesting to check whether all seq2seq models react similarly to training with noise: it could be that some architectures are easier/harder to robustify in this basic way.\n\n[Response read -- thanks]\nI agree with the authors that this paper is suitable for ICLR, although it will clearly be of interest to ACL/MT-minded folks.", "1. We believe that the topic of noise in NMT is of interest to the ICLR audience. Please see our response to reviewer 1 for a detailed explanation. \n\n2. We find that both solutions we offered are effective to a reasonable extent. meanChar works fairly well on scrambling types of noise, but fails on other noise, as expected. Adversarial training with noise works well as long as train/test noise types are matched, so it’s a useful practical technique that can be applied in NMT systems, as pointed out by reviewer 1. \n", "Thank you for the useful feedback. \n\n1. We agree that the topic has real-world impact for MT providers and will emphasize this in the conclusions. \n\n2. We would love to see MT providers use noisy data and we agree that the community would benefit from access to more noisy examples. \n", "Thank you for the useful feedback. We agree that noisy input in neural machine translation is an under-studied problem. \n\nResponses to specific comments:\n1. We agree that our work only starts to scratch the surface of noise in NMT and believe there’s much more to be done in this area. 
We do believe that it’s important to initiate a discussion of this issue in the ICLR community, for several reasons: (a) we study word and character representations for NMT, which is in line with the ICLR representation learning theme; (b) the ICLR audience is very interested in neural machine translation, and seminal work on NMT has been published in ICLR (e.g., Bahdanau et al.’s 2015 paper on attention in NMT); (c) the ICLR audience is very interested in noise and adversarial examples, as evidenced by the plethora of recent papers on the topic. As reviewer 1 says, even though there are no fancy new methods in the paper, we believe that this kind of research belongs in ICLR.\n\n2. We agree that meanChar may not be the ideal architecture for capturing noise, but it’s a simple, structure-invariant representation that works reasonably well. We have tried several other architectures, including a self-attention mechanism, but haven’t been able to improve beyond it. We welcome more suggestions and can include those negative results in new drafts of the paper.\n\n3. Training with noise has its limitations, but it’s an effective method that can be employed by NMT providers and researchers easily and impactfully, as pointed out by reviewer 1. \n\n4. In this work, we focus on word-level noise. Certainly, sentence-level noise is also important to learn, and we’d like to see more work on this. We’ll add this as another direction for future work. Note that while charCNN may have some advantage in dealing with word-level noise, it too suffers from increasing amounts of noise, similar to the other models we studied.\n\n5. Applying noise stochastically based on frequency in available corpora is an interesting suggestion that can be done for the natural noise, but it is not so clear how to apply it to synthetic noise. We did experiment with increasing amounts of noise (Figure 1), but we agree there’s more to be done. We’ll add this as another direction for future work. \n\n6. (To both reviewer 2 and 3) Regarding training other seq2seq models with noise: Our original intent was to test the robustness of pre-trained state-of-the-art models, but we also considered retraining them in this noisy paradigm. There are a number of design decisions involved here (e.g. should the BPE dictionary be built on the noisy texts, and how should thresholds be varied?). That being said, we can investigate training using published parameter values, but worry these may be wholly inappropriate settings for the new noisy data.\n\n7. We’ll modify the abstract so as not to give the wrong impression regarding what other architectures can learn. \n\n8. We included section 7.2 to demonstrate why synthetic noise is not very helpful in dealing with natural noise, as well as to motivate the development of better architectures. \n\n9. We’ll correct the other small issues pointed out. \n", "Thanks for the constructive feedback. \n1. Noise setup: when training with noise, we replace the original training set with a new, noisy training set. The noisy training set has exactly the same number of sentences and words as the training set, but noise is introduced according to the description in Section 4. Therefore, we have one fixed noisy training set per noise type. We’ll clarify the experimental design in the paper. \n\n2. We had not thought to explore the relationship between the noise we are introducing as a corruption of the input and the training under noise paradigm you referenced. We might be mistaken, but normally, the corruption (e.g. 
Bishop 95) is in the form of small additive Gaussian noise. It isn’t immediately clear to us whether a discrete perturbation of the input like ours is equivalent, but we would love suggestions on analyses we might do to investigate this insight further.\n\n3. Some cells in the mentioned rows in Table 6 were indeed copied from the French rows in error. We corrected the numbers and they are in line with the overall trends. Thank you for pointing this out. The corrected Czech numbers are in the 20s and the best-performing system is the Rand+Key+Real setting.\n\n4. (To both reviewer 2 and 3) Regarding training other seq2seq models with noise: Our original intent was to test the robustness of pre-trained state-of-the-art models, but we also considered retraining them in this noisy paradigm. There are a number of design decisions involved here (e.g. should the BPE dictionary be built on the noisy texts, and how should thresholds be varied?). That being said, we can investigate training using published parameter values, but worry these may be wholly inappropriate settings for the new noisy data.", "The paper points out the lack of robustness of character-based models and explores a few, very basic solutions, none of which are effective. While starting a discussion around this problem is valuable, the paper provides no actually working solutions, and the solutions explored are very basic from a machine learning point of view. This publication is better suited to a traditional NLP venue such as ACL/EMNLP." ]
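The meanChar representation debated in these reviews is simple to state: a source word is represented by the average of its character embeddings, so it is invariant to character order by construction (robust to scrambling, but also blind to it). A toy sketch, with random embeddings and an ad hoc character-to-index map standing in for the learned ones:

```python
import numpy as np

rng = np.random.default_rng(0)
EMB_DIM, VOCAB = 32, 128  # illustrative sizes
char_emb = rng.normal(size=(VOCAB, EMB_DIM)).astype(np.float32)

def mean_char(word):
    """meanChar: the word vector is the mean of its character embeddings."""
    ids = [ord(c) % VOCAB for c in word]  # toy char-to-id map
    return char_emb[ids].mean(axis=0)

# Anagrams map to exactly the same vector, hence structure invariance:
print(np.allclose(mean_char("noise"), mean_char("niose")))  # True
```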
[ -1, 7, -1, 7, 8, -1, -1, -1, -1, -1 ]
[ -1, 4, -1, 4, 4, -1, -1, -1, -1, -1 ]
[ "HJ1vJDcZz", "iclr_2018_BJ8vJebC-", "rJIbAd7-z", "iclr_2018_BJ8vJebC-", "iclr_2018_BJ8vJebC-", "rJIbAd7-z", "BkVD7bqlf", "SkABkz5gM", "SkeZfu2xG", "iclr_2018_BJ8vJebC-" ]
iclr_2018_Hk2aImxAb
Multi-Scale Dense Networks for Resource Efficient Image Classification
In this paper we investigate image classification with computational resource limits at test time. Two such settings are: 1. anytime classification, where the network’s prediction for a test example is progressively updated, facilitating the output of a prediction at any time; and 2. budgeted batch classification, where a fixed amount of computation is available to classify a set of examples and can be spent unevenly across “easier” and “harder” inputs. In contrast to most prior work, such as the popular Viola and Jones algorithm, our approach is based on convolutional neural networks. We train multiple classifiers with varying resource demands, which we adaptively apply during test time. To maximally re-use computation between the classifiers, we incorporate them as early exits into a single deep convolutional neural network and inter-connect them with dense connectivity. To facilitate high-quality classification early on, we use a two-dimensional multi-scale network architecture that maintains coarse- and fine-level features throughout the network. Experiments on three image-classification tasks demonstrate that our framework substantially improves the existing state-of-the-art in both settings.
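Both test-time settings reduce to reading off predictions from the intermediate early-exit classifiers of one network. A toy sketch of one simple exit policy (ours, for illustration): stop at the first classifier whose top softmax probability clears a threshold. The random logits, threshold, and sizes are illustrative stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def early_exit_predict(exit_logits, threshold=0.9):
    """Exit at the first classifier whose top softmax probability clears the
    threshold; fall back to the final (most expensive) classifier otherwise."""
    for k, logits in enumerate(exit_logits):
        probs = softmax(logits)
        if probs.max() >= threshold:
            return k, int(probs.argmax())
    return len(exit_logits) - 1, int(softmax(exit_logits[-1]).argmax())

fake_logits = rng.normal(size=(5, 10))  # 5 exits, 10 classes: stand-in for a network
exit_used, label = early_exit_predict(fake_logits)
print(f"exited at classifier {exit_used} with label {label}")
```

"Easy" inputs exit early and cheaply; "hard" ones pay for the full network, which is what lets a fixed batch budget be spent unevenly.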
accepted-oral-papers
As stated by reviewer 3 "This paper introduces a new model to perform image classification with limited computational resources at test time. The model is based on a multi-scale convolutional neural network similar to the neural fabric (Saxena and Verbeek 2016), but with dense connections (Huang et al., 2017) and with a classifier at each layer." As stated by reviewer 2 "My only major concern is the degree of technical novelty with respect to the original DenseNet paper of Huang et al. (2017). ". The authors assert novelty in the sense that they provide a solution to improve computational efficiency and focus on this aspect of the problem. Overall, the technical innovation is not huge, but I think this could be a very useful idea in practice.
train
[ "rJSuJm4lG", "SJ7lAAYgG", "rk6gRwcxz", "Hy_75oomz", "HkJRFjomf", "HJiXYjjQz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "This work proposes a variation of the DenseNet architecture that can cope with computational resource limits at test time. The paper is very well written, experiments are clearly presented and convincing and, most importantly, the research question is exciting (and often overlooked). \n\nMy only major concern is the degree of technical novelty with respect to the original DenseNet paper of Huang et al. (2017). The authors add a hierarchical, multi-scale structure and show that DenseNet can better cope with it than ResNet (e.g., Fig. 3). They investigate pros and cons in detail adding more valuable analysis in the appendix. However, this work is basically an extension of the DenseNet approach with a new problem statement and additional, in-depth analysis. \n\nSome more minor comments: \n\n-\tPlease enlarge Fig. 4. \n-\tI did not fully grasp the details in the first \"Solution\" paragraph on P5. Please extend and describe in more detail. \n\nIn conclusion, this is a very well written paper that designs the network architecture (of DenseNet) such that it is optimized to include CPU budgets at test time. I recommend acceptance to ICLR18.\n \n\n\n", "This paper presents a method for image classification given test-time computational budgeting constraints. Two problems are considered: \"any-time\" classification, in which there is a time constraint to evaluate a single example, and batched budgets, in which there is a fixed budget available to classify a large batch of images. A convolutional neural network structure with a diagonal propagation layout over depth and scale is used, so that each activation map is constructed using dense connections from both same and finer scale features. In this way, coarse-scale maps are constructed quickly, then continuously updated with feed-forward propagation from lower layers and finer scales, so they can be used for image classification at any intermediate stage. Evaluations are performed on ImageNet and CIFAR-100.\n\nI would have liked to see the MC baselines also evaluated on ImageNet --- I'm not sure why they aren't there as well? Also on p.6 I'm not entirely clear on how the \"network reduction\" is performed --- it looks like finer scales are progressively dropped in successive blocks, but I don't think they exactly correspond to those that would be needed to evaluate the full model (this is \"lazy evaluation\"). A picture would help here, showing where the depth-layers are divided between blocks.\n\nI was also initially a bit unclear on how the procedure described for batched budgeted evaluation achieves the desired result: It seems this relies on having a batch that is both large and varied, so that its evaluation time will converge towards the expectation. So this isn't really a hard constraint (just an expected result for batches that are large and varied enough). This is fine, but could perhaps be pointed out if that is indeed the case.\n\nOverall, this seems like a natural and effective approach, and achieves good results.\n", "This paper introduces a new model to perform image classification with limited computational resources at test time. The model is based on a multi-scale convolutional neural network similar to the neural fabric (Saxena and Verbeek 2016), but with dense connections (Huang et al., 2017) and with a classifier at each layer. The multiple classifiers allow for a finer selection of the amount of computation needed for a given input image. 
The multi-scale representation allows for better performance at early stages of the network. Finally, the dense connectivity helps reduce the negative effect that early classifiers have on the feature representation for the following layers.\nA thorough evaluation on ImageNet and Cifar100 shows that the network can perform better than previous models and ensembles of previous models with a reduced amount of computation.\n\nPros:\n- The presentation is clear and easy to follow.\n- The structure of the network is clearly justified in section 4.\n- The use of dense connectivity to avoid the loss of performance from using early-exit classifiers is very interesting.\n- The evaluation in terms of anytime prediction and budgeted batch classification can represent real-world scenarios.\n- Results are very promising, with 5x speed-ups and the same or better accuracy than previous models.\n- The extensive experimentation shows that the proposed network is better than previous approaches under different regimes.\n\nCons:\n- Results about the more efficient densenet* could be shown in the main paper\n\nAdditional Comments:\n- Why did you use the logistic loss in training instead of the more common cross-entropy loss? Does this have any connection with the final performance of the network?\n- In fig. 5 (left), for completeness, I would also like to see results for DenseNet^MT and ResNet^MT\n- In fig. 5 (left) I cannot find the 4% and 8% higher accuracy with 0.5x10^10 to 1.0x10^10 FLOPs, as mentioned in the section 5.1 anytime prediction results\n- How is the budget in terms of Mul-Adds actually estimated?\n\nI think that this paper presents a very powerful approach to reducing the computational cost of a CNN at test time and clearly explains some of the common trade-offs between speed and accuracy and how to improve them. The experimental evaluation is complete and accurate. \n\n
We placed this network originally in the appendix to keep the focus of the main manuscript on the MSDNet architecture, and it was introduced for the first time in this paper (although as a competitive baseline).\n\n# logistic loss\nWe actually used the cross-entropy loss in our experiments. We have fixed this sentence. Thanks for pointing this out.\n\n# DenseNet^MC and ResNet^MC on ImageNet (left panel of Fig.5)\nWe observed that DenseNet^MC and ResNet^MC are two of the weakest baselines on both the CIFAR-10 and CIFAR-100 datasets. Therefore, we thought their results on ImageNet probably won’t add much to the paper. We can add these results in a later version.\n\n# improvements in the anytime setting\nIt should be 4% and 8% higher accuracy when the budget ranges from 0.1x10^10 to 0.3x10^10 FLOPs. We have corrected it in the updated version.\n\n# actual budget\nFor many devices, e.g., an ARM processor, the actual inference time is basically a linear function of the number of Mul-Add operations. Thus in practice, given a specific device, we can estimate the budget in terms of Mul-Adds according to the real time budget." ]
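As the reviews note, the batch budget is a soft, in-expectation constraint over a large and varied batch. One plausible way to realize it (our sketch, not necessarily the paper's exact procedure) is to calibrate the exit threshold on validation data so that the expected per-example cost fits the budget:

```python
import numpy as np

def calibrate_threshold(val_conf, exit_cost, budget, grid=np.linspace(0.5, 0.99, 50)):
    """val_conf: (n_examples, n_exits) top-1 probabilities at each exit.
    exit_cost: cumulative cost (e.g. Mul-Adds) of reaching each exit.
    Returns the largest threshold whose expected cost fits the budget
    (falls back to the smallest threshold in the grid if none fits)."""
    best = grid[0]
    for thr in grid:
        first = np.argmax(val_conf >= thr, axis=1)          # first exit clearing thr
        first[~(val_conf >= thr).any(axis=1)] = val_conf.shape[1] - 1
        if exit_cost[first].mean() <= budget:
            best = thr  # expected cost grows with thr, so the last fit is the largest
    return best

rng = np.random.default_rng(0)
conf = np.sort(rng.uniform(0.3, 1.0, size=(1000, 5)), axis=1)  # toy: deeper = surer
cost = np.array([1.0, 2.0, 3.5, 5.0, 7.0])                     # illustrative costs
print(calibrate_threshold(conf, cost, budget=3.0))
```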
[ 8, 7, 10, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1 ]
[ "iclr_2018_Hk2aImxAb", "iclr_2018_Hk2aImxAb", "iclr_2018_Hk2aImxAb", "rJSuJm4lG", "SJ7lAAYgG", "rk6gRwcxz" ]
iclr_2018_HJGXzmspb
Training and Inference with Integers in Deep Neural Networks
Research on deep neural networks with discrete parameters and their deployment in embedded systems has been an active and promising topic. Although previous works have successfully reduced precision in inference, transferring both training and inference processes to low-bitwidth integers has not been demonstrated simultaneously. In this work, we develop a new method termed "WAGE" to discretize both training and inference, where weights (W), activations (A), gradients (G) and errors (E) among layers are shifted and linearly constrained to low-bitwidth integers. To perform pure discrete dataflow for fixed-point devices, we further replace batch normalization by a constant scaling layer and simplify other components that are arduous for integer implementation. Improved accuracies can be obtained on multiple datasets, which indicates that WAGE somehow acts as a type of regularization. Empirically, we demonstrate the potential to deploy training in hardware systems such as integer-based deep learning accelerators and neuromorphic chips with comparable accuracy and higher energy efficiency, which is crucial to future AI applications in variable scenarios with transfer and continual learning demands.
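The reviews below spell out the core operators: a k-bit quantizer Q(x, k) with clipping, and a power-of-two shift used to rescale layers in place of batch normalization (stochastic rounding is also used for gradients). A NumPy sketch following our reading of those descriptions, with sigma(k) = 2^(1-k) as the quantization step; the exact clipping convention is our assumption and should be checked against the paper:

```python
import numpy as np

def sigma(k):
    """Smallest positive step of a k-bit fixed-point grid inside (-1, 1)."""
    return 2.0 ** (1 - k)

def quantize(x, k):
    """Q(x, k): round onto the k-bit grid, then clip (convention assumed here)."""
    s = sigma(k)
    return np.clip(s * np.round(x / s), -1 + s, 1 - s)

def shift(x):
    """Nearest power of two (for x > 0): a cheap, multiplier-free rescaling."""
    return 2.0 ** np.round(np.log2(x))

w = np.random.default_rng(0).normal(scale=0.3, size=5)
print(quantize(w, 2))   # k=2 gives ternary-like values in {-0.5, 0, 0.5}
print(quantize(w, 8))   # the same operator at 8 bits
print(shift(0.37))      # -> 0.5
```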
accepted-oral-papers
High quality paper, appreciated by reviewers, likely to be of substantial interest to the community. It's worth an oral to facilitate a group discussion.
train
[ "SkzPEnBeG", "rJG2o3wxf", "SyrOMN9eM", "HJ7oecRZf", "r1t-e5CZf", "ryW51cAbG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "This paper proposes a method to train neural networks with low precision. However, it is not clear if this work obtains significant improvements over previous works. \n\nNote that:\n1)\tWorking with 16bit, one can train neural networks with little to no reduction in performance. For example, on ImageNet with AlexNet one gets 45.11% top-1 error if we don’t do anything else, and 42.34% (similar to the 32-bit result) if we additionally adjust the loss scale (e.g., see Boris Ginsburg, Sergei Nikolaev, and Paulius Micikevicius. “Training of deep networks with halfprecision float.” NVidia GPU Technology Conference, 2017). \n2)\tImageNet with AlexNet top-1 error (53.5%) in this paper seems rather high in comparison to previous works. Specifically, DoReFA and QNN, which used mostly lower precision (k_W=1, k_A=2 and k_E=6, k_G=32) one can get much lower performance (47% and 49%, respectively). So, the main innovation here, in comparison, is k_G=12.\n3)\tComparison using other datasets is made with different architectures then previous works, so it is hard to quantify what is the contribution of the proposed method. For example, on MNIST, the authors use a convolutional neural network, while BC and BNN used a fully connected neural network (the so called “permutation invariant mnist” problem).\n4)\tCifar performance is good, but may seem less remarkable, given that “Gated XNOR Networks: Deep Neural Networks with Ternary Weights and Activations under a Unified Discretization Framework” already showed that k_G=k_W=k_A=2, k_E=32 is sufficient to get 7.5% error on CIFAR. So the main novelty, in comparison, is that k_E=12.\n\nTaking all the above into account, it hard to be sure whether the proposed methods meaningfully improve existing methods. Moreover, I am not sure if decreasing the precision from 16bit to 12bit (as was done on ImageNet) is very useful for hardware applications, especially if there is such a degradation in accuracy. If, for example, the authors would have demonstrated all-8bit training on all datasets with little performance degradation, this would seem much more useful.\n\nMinor: there are some typos that should be corrected, e.g.: “Empirically, We demonstrates” in abstract.\n\n%%% Following the authors response %%%\nThe authors have improved their results and have addressed my concerns. I therefore raised my scores.\n\n", "The authors describe a method called WAGE, which quantize all operands and operators in a neural network, specifically, the weights (W), activations (A), gradients (G), and errors (E) . The idea is using quantizers with clipping (denoted in the paper with Q(x,k)) and some additional operators like shift (denoted with shift(x)) and stochastic rounding. The main motivation of the authors in this work is to reduce the number of bits for representation in a network for all the WAGE operations and operands which influences the power consumption and silicon area in hardware implementations.\n\nAfter introducing the idea and related work, the authors in Section 3 give details about how to perform the quantization. They introduce the additional operators needed for training in such network. Since quantization may loss some information, the authors need to quantize the signals in the network around the dynamic range in order not to \"kill\" the signal. The authors describe how to do that. Afterward, as in other techniques for quantization, they describe how to initialize the network values. 
Also, they argue that batch normalization in this network is replaced with the shift-quantize operations, and that what matters in this case is (1) the relative values (“orientations”) and not the absolute values, and (2) that small values in errors are negligible.\n\nAfterward, the authors conduct experiments on MNIST, SVHN, CIFAR10, and ILSVRC12 datasets, where they show promising results compared to the errors provided by previous works. The WAGE parameters (i.e., the quantized no. of bits used) are 2-8-8-8, respectively. To understand WAGE better, the authors compare the test error rate on CIFAR10 with a vanilla CNN and show there is only a small loss in using their network. The authors investigate mainly the bitwidth of errors and gradients.\n\nOverall, this paper is an accept since it shows good performance on standard problems and invents some nice tricks to implement NN in hardware, for *both* training and inference. For inference only, other works have more to offer, but this is a promising technique for learning. The things that are still missing in this work are some power reduction estimates as well as area reduction estimates. This will give the hardware community a clear vision of how such methods may be implemented both in data centers as well as on end portable devices. \n", "The authors propose WAGE, which discretizes weights, activations, gradients, and errors at both training and testing time. By quantization and shifting, SGD training without momentum, and removing the softmax at the output layer as well, the model managed to remove all cumbersome computations from every aspect of the model, thus eliminating the need for a floating point unit completely. Moreover, by keeping up to 8-bit accuracy, the model performs even better than previously proposed models. I am eager to see a hardware realization for this method because of its promising results. \n\nThe model makes a unified discretization scheme for 4 different kinds of components, and the accuracy for each kind becomes independently adjustable. This makes the method quite flexible and has the potential to extend to more complicated networks, such as attention or memory. \n\nOne caveat is that there seem to be some conflicts in the results shown in Table 1, especially ImageNet. Given the number of bits each of the WAGE components asked for, a 28.5% top-5 error rate seems even lower than XNOR. I suspect it is due to the fact that gradients and errors need higher accuracy for real-valued input, but if that is the case, accuracies on SVHN and CIFAR-10 should also reflect that. Or, maybe it is due to hyperparameter setting or insufficient training time?\n\nAlso, dropout does not seem to conflict with the discretization. If there are no other reasons, it would make sense to preserve the dropout in the network as well.\n\nIn general, the paper is well written and detailed; I would recommend a clear accept.\n", "We sincerely appreciate the reviewer for the comments, which indeed help us to improve the quality of this paper. \n\nIn our revised manuscript, we keep the last layer in full precision for the ImageNet task (both BNN and DoReFa keep the first and the last layer in full precision). Our results have been improved from 53.5/28.6 with the 2-8-C-C setting to 51.7/28.0 with the 2-8-8-8 setting. Results of other patterns are updated in Table 4. 
We have now revised the paper accordingly and would like to provide a point-by-point response on how these comments have been addressed:\n\n(1) Working with 16bit, one can train neural networks with little to no reduction in performance.\n\nWe introduce a thorough and flexible approach (in the words of AnonReviewer3) towards training DNNs with fixed-point (8bit) integers, so there are no floating-point operands or operations in either the inference or training phase. This is the key difference between our work and the previous works. As shown in Table 5 in the revised manuscript, a 5x reduction of energy and area costs can be achieved in this way, which we believe will greatly benefit the application of our method especially in mobile devices.\n\n(2) ImageNet with AlexNet top-1 error (53.5%) in this paper seems rather high in comparison to previous works.\n\nThe significant differences between WAGE and existing works (DoReFa, QNN, BNN) are the following:\n\n 1. WAGE does not need to store real-valued weights (DoReFa, QNN, and BNN do).\n 2. WAGE calculates both gradients and errors with 8-bit integers (QNN, BNN use float32).\n 3. Many of the techniques that are hard to implement on mobile devices, for example batch normalization and the Adam optimizer, are avoided by WAGE. \n\nThrough experiments, we find that, if we store real-valued weights and do not quantize back propagation, the performance on ImageNet is at the same level (although not the same specification) as that of DoReFa, QNN and BNN. Please refer to more detailed results in Table 4.\n\n(3) Comparison using other datasets is made with different architectures than previous works\n\nPlease refer to the comparison between TWN and WAGE in Table 1, where we show a better result with the same CNN architecture. \n\n(4) Cifar performance is good, but may seem less remarkable.\n\nIn fact, k_E is set to 8 in WAGE. Gated-XNOR uses a batch size of 1000 and trains for 1000 epochs in total, so the total training time and memory consumption are unsatisfactory. Besides, they use float32 to calculate gradients and errors, and the batch normalization layer is kept to guarantee convergence.\n\n(5) If, for example, the authors had demonstrated all-8bit training on all datasets\n\nIn our experiments, we find that it is necessary to set k_G > k_W; otherwise, the updates of weights will directly influence the forward propagation and cause instability. Most of the previous works store real-valued weights (32-bit k_G), so they meet this restriction automatically. In light of this comment, we focus on 2-8-8-8 training and the results for ImageNet are updated in Table 1 and Table 4. \n", "We thank the reviewer for the constructive suggestion:\n\n(1) The things that are still missing in this work are some power reduction estimates as well as area reduction estimates.\n\nWe have taken this suggestion, added Table 5 in the Discussion, and made a rough estimate. \n\nFor future work, we have recently taped out our neuromorphic processors using phase-change memory to store weights and designed in the ability to do some on-chip and on-site learning. The processor has 8-bit weights and 8-bit activations without any floating-point design. The real power consumption and area reduction of the processor have been simulated and estimated. It is very promising to implement interesting applications with continual learning demands on that chip as a portable end device.\n", "We thank the reviewer for the insightful comments. 
Please find our responses to individual questions below:\n\n(1) One caveat is that there seem to be some conflicts in the results shown in Table 1, especially ImageNet ...\n\nIn our revised manuscript, we keep the last layer in full precision for the ImageNet task (BNN and DoReFa keep both the first and the last layer in full precision); the accuracy for 2-8-8-8 is 51.7/28.0, compared to the original result of 53.5/28.6 with the 2-8-C-C setting. Results of other patterns are updated in Table 4.\n\nWe find that the Softmax layer in the AlexNet model and the 1000 categories jointly cause the conflicts. Since we make no exception for the first or the last layer, weights in the last layer will be limited to {-0.5,0,+0.5} and scaled by Equation (8), so the outputs of the last layer also follow a normal distribution N(0,1). The problem is that these values are small for a Softmax layer with 1000 categories. \n\nExample (label z=[0,0,0,…,1], one-hot with 1000 dims): \n\nx1=[0,0,0,…,1] (1000 dims)\ny1=Softmax(x1)=[9.9e-4, 9.9e-4, …, 2.7e-3]\ne1 = z – y1, still a long way to train\nx2=[0,0,0,…,8] (1000 dims, value 8 at the target entry)\ny2=Softmax(x2)=[2.5e-4, 2.5e-4, …, 0.75]\ne2 = z – y2, much closer to the label now\n\nIn this case, we observe that 80% of the weights in the last FC layer are greedily trained to +0.5 to magnify the outputs. Therefore, the last layer would be a bottleneck for both inference and backpropagation. That might be why previous works do not quantize the last layer. The experiments on CIFAR10 and SVHN did not use Softmax cross-entropy and had only 10 categories, which explains why no accuracy drop is observed there. \n\n\n(2) Also, dropout does not seem to conflict with the discretization...\n\nYes, it is an additional method to alleviate over-fitting. Because we are working on designing a new neuromorphic computing chip, dropout would complicate the pipeline of weight and MAC calculations. Anyone without that concern can easily add dropout to the WAGE graph.\n" ]
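The Softmax example in the author response above can be checked numerically. A minimal sketch in NumPy; the 1000-category one-hot setup and the target logits 1 and 8 come from the response, everything else is standard:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax.
    e = np.exp(x - x.max())
    return e / e.sum()

n = 1000                        # number of categories, as in the response
x1 = np.zeros(n); x1[-1] = 1.0  # logit magnitude limited by the quantized last layer
x2 = np.zeros(n); x2[-1] = 8.0  # logit after the weights saturate and magnify outputs

y1, y2 = softmax(x1), softmax(x2)
print(y1[-1], y1[0])  # ~2.7e-3 and ~9.9e-4: still far from the one-hot label
print(y2[-1], y2[0])  # ~0.75  and ~2.5e-4: much closer to the label
```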
[ 7, 7, 8, -1, -1, -1 ]
[ 4, 3, 4, -1, -1, -1 ]
[ "iclr_2018_HJGXzmspb", "iclr_2018_HJGXzmspb", "iclr_2018_HJGXzmspb", "SkzPEnBeG", "rJG2o3wxf", "SyrOMN9eM" ]
iclr_2018_HJGv1Z-AW
Emergence of Linguistic Communication from Referential Games with Symbolic and Pixel Input
The ability of algorithms to evolve or learn (compositional) communication protocols has traditionally been studied in the language evolution literature through the use of emergent communication tasks. Here we scale up this research by using contemporary deep learning methods and by training reinforcement-learning neural network agents on referential communication games. We extend previous work, in which agents were trained in symbolic environments, by developing agents which are able to learn from raw pixel data, a more challenging and realistic input representation. We find that the degree of structure found in the input data affects the nature of the emerged protocols, and thereby corroborate the hypothesis that structured compositional language is most likely to emerge when agents perceive the world as being structured.
accepted-oral-papers
Important problem (analyzing the properties of emergent languages in multi-agent reference games), a number of interesting analyses (both with symbolic and pixel inputs), reaching a finding that varying the environment and restrictions on language result in variations in the learned communication protocols (which in hindsight is not that surprising, but that's hindsight). While the pixel experiments are not done with real images, it's an interesting addition to the literature nonetheless.
train
[ "HJ3-u2Ogf", "H15X_V8yM", "BytyNwclz", "S1XPn0jXG", "r1QdpPjXf", "SJWDw1iXG", "ryjhESdQG", "S1GjVrOmz", "rJylbvSzG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author" ]
[ "--------------\nSummary:\n--------------\nThis paper presents a series of experiments on language emergence through referential games between two agents. They ground these experiments in both fully-specified symbolic worlds and through raw, entangled, visual observations of simple synthetic scenes. They provide rich analysis of the emergent languages the agents produce under different experimental conditions. This analysis (especially on raw pixel images) make up the primary contribution of this work.\n\n\n--------------\nEvaluation:\n--------------\nOverall I think the paper makes some interesting contributions with respect to the line of recent 'language emergence' papers. The authors provide novel analysis of the learned languages and perceptual system across a number of environmental settings, coming to the (perhaps uncontroversial) finding that varying the environment and restrictions on language result in variations in the learned communication protocols. \n\nIn the context of existing literature, the novelty of this work is somewhat limited -- consisting primarily of the extension of multi-agent reference games to raw-pixel inputs. While this is a non-trivial extension, other works have demonstrated language learning in similar referring-expression contexts (essentially modeling only the listener model [Hermann et.al 2017]). \n\nI have a number of requests for clarification in the weaknesses section which I think would improve my understanding of this work and result in a stronger submission if included by the authors. \n\n--------------\nStrengths:\n--------------\n- Clear writing and document structure. \n\n\n- Extensive experimental setting tweaks which ablate the information and regularity available to the agents. The discussion of the resulting languages is appropriate and provides some interesting insights.\n\n\n- A number of novel analyses are presented to evaluate the learned languages and perceptual systems. \n\n\n--------------\nWeaknesses:\n--------------\n- How stable are the reported trends / languages across multiple runs within the same experimental setting? The variance of REINFORCE policy gradients (especially without a baseline) plus the general stochasticity of SGD on randomly initialized networks leads me to believe that multiple training runs of these agents might result is significantly different codes / performance. I am interested in hearing the author's experiences in this regard and if multiple runs present similar quantitative and qualitative results. I admit that expecting identical codes is unrealistic, but the form of the codes (i.e. primarily encoding position) might be consistent even if the individual mappings are not).\n\n\n- I don't recall seeing descriptions of the inference-time procedure used to evaluate training / test accuracy. I will assume argmax decoding for both speaker and listener. Please clarify or let me know if I missed something.\n\n\n- There is ambiguity in how the \"protocol size\" metric is computed. In Table 1, it is defined as 'the effective number of unique message used'. This comes back to my question about decoding I suppose, but does this count the 'inference-time' messages or those produced during training? \nFurthermore, Table 2 redefines \"protocol size\" as the percentage of novel message. I assume this is an editing error given the values presented and take these columns as counts. 
It also seems \"protocol size\" is replaced with the term \"lexicon\" from 4.1 onward.\n\n- I'm surprised by how well the agents generalize in the raw pixel data experiments. In fact, it seems that across all games the test accuracy remains very close to the train accuracy. \n\nGiven the dataset is created by taking all combinations of color / shape and then sampling 100 location / floor color variations, it is unlikely that a shape / color combo has not been seen in training. Such that the only novel variations are likely location and floor color. However, taking Game A as an example, the probe classifiers are relatively poor at these attributes -- indicating the speaker's representation is not capturing these attributes well. Then how do the agents effectively differentiate so well between 20 images leveraging primarily color and shape?\n\nI think some additional analysis of this setting might shed some light on this issue. One thought is to compute upper-bounds based on ground truth attributes. Consider a model which knows shape perfectly, but cannot predict other attributes beyond chance. To compute the performance of such a model, you could take the candidate set, remove any instances not matching the ground truth shape, and then pick randomly from the remaining instances. Something similar could be repeated for all attributes independently as well as their combinations -- obviously culminating in 100% accuracy given all 4. It could be that by dataset construction, object location and shape are sufficient to achieve high accuracy because the odds of seeing the same shape at the same location (but different color) is very low. \n\nGiven these are operations on annotations and don't require time-consuming model training, I hope to see this analysis in the rebuttal to put the results into appropriate context.\n\n\n- What is random chance for the position and floor color probe classifiers? I don't think it is mentioned how many locations / floor colors are used in generation. \n\n\n- Relatively minor complaint: Both agents are trained via the REINFORCE policy gradient update rule; however, the listener agent makes a fairly standard classification decision and could be trained with a standard cross-entropy loss. That is to say, the listener policy need not make intermediate discrete policy decisions. This decision to withhold available supervision is not discussed in the paper (as far as I noticed), could the authors speak to this point?\n\n\n\n--------------\nCuriosities:\n--------------\n- I got the impression from the results (specifically the lack of discussion about message length) that in these experiments agents always issued full length messages even though they did not need to do so. If true, could the authors give some intuition as to why? If untrue, what sort of distribution of lengths do you observe?\n\n- There is no long term planning involved in this problem, so why use reinforcement learning over some sort of differentiable sampler? With some re-parameterization (i.e. 
Gumbel-Softmax), this model could be end-to-end differentiable.\n\n\n--------------\nMinor errors:\n--------------\n[2.2 paragraph 1] LSTM citation should not be in inline form.\n[3 paragraph 1] 'Note that these representations do care some' -> carry\n[3.3.1 last paragraph] 'still able comprehend' --> to\n\n\n-------\nEdit\n-------\nUpdating rating from 6 to 7.", "This paper presents a set of studies on emergent communication protocols in referential games that use either symbolic object representations or pixel-level representations of generated images as input. The work is extremely creative and packed with interesting experiments.\n\nI have three main comments.\n\n* CLARITY OF EXPOSITION\n\nThe paper was rather hard to read. I'll provide some suggestions for improvement in the minor-comments section below, but one thing that could help a lot is to establish terminology at the beginning, and be consistent with it throughout the paper: what is a word, a message, a protocol, a vocabulary, a lexicon? etc.\n\n* RELATION BETWEEN VOCABULARY SIZE AND PROTOCOL SIZE\n\nIn the compositional setup considered by the authors, agents can choose how many basic symbols to use and the length of the \"words\" they will form with these symbols. There is virtually no discussion of this interesting interplay in the paper. Also, there is no information about the length distribution of words (in basic symbols), and no discussion of whether the latter was meaningful in any way.\n\n* RELATION BETWEEN CONCEPT-PROPERTY AND RAW-PIXEL STUDIES\n\nThe two studies rely on different analyses, and it is difficult to compare them. I realize that it would be impossible to report perfectly comparable analyses, but the authors could at least apply the \"topographic\" analysis of compositionality in the raw-pixel study as well, either by correlating the CNN-based representational similarities of the Speaker with its message similarities, or computing similarity of the inputs in discretized, symbolic terms (or both?).\n\n* MINOR/DETAILED COMMENTS\n\nSection 1\n\nHow do you think emergent communication experiments can shed light on language acquisition?\n\nSection 2\n\nIn figure 1, the two agents point at nothing.\n\n\\mathbf{v} is a set, but it's denoted as a vector. Right below that, h^S is probably h^L?\n\nall candidates c \\in C: or rather their representations \\mathbf{v}?\n\nGive intuition for the reward function.\n\nSection 3\n\nWe use the dataset of Visual Attributes...: drop \"dataset\"\n\nI think the pre-linguistic objects are not represented by 1-hot, but binary vectors.\n\ndo care some inherent structure: carry\n\nNote that symbols in V have no pre-defined semantics...: This is repeated multiple times.\n\nSection 3\n\nI couldn't find simulation details: how many training elements, and how is training accuracy computed? Also, \"training data\", \"training accuracy\" are probably misleading terms, as I suppose you measured performance on new combinations of objects.\n\nI find \"Protocol Size\" to be a rather counterintuitive term: maybe call Vocabulary Size \"Alphabet Size\", and Protocol Size \"Lexicon Size\"?\n\nState in Table 1 caption that the topographic measure will be explained in a later section. Also, the -1 is confusing: you can briefly mention when you introduce the measure that since you correlate a distance with a similarity you expect an inverse relation? 
Also, you mention in the caption that all Spearman rhos are significant, but where are they presented again?\n\nSection 3.2\n\nDoes the paragraph starting with \"Note that the distractor\" refer to a figure or table that is not there? If not, it should be there, since it's not clear what data support your claims there. Also, you should explain what the degenerate strategy the agents find is.\n\nNext paragraph:\n\n- I find the usage of \"obtaining\" to refer to the relation between messages and objects strange.\n\n- in which space are the reported pairwise similarities computed?\n\n- make clear that in the non-uniform case confusability is less influenced by similarity since the agents must learn to distinguish between similar objects that naturally co-occur (sheep and goats)\n\n- what is the expected effect on the naturalness of the emerged language?\n\nSection 3.3\n\nadhere to, the ability to: \"such as\" missing?\n\nIs the unigram chimera distribution inferred from the statistics over the distribution of properties across all concepts or what? (please clarify.)\n\nIn Tables 2 and 3, why is vocabulary size missing?\n\nIn Table 2, say that the protocol size columns report novel message percentage **for the \"test\" conditions**\n\nFigure 2: spelling of Levensthein\n\nSection 3.3.2\n\nwhile for languages (c,d)... something missing.\n\nwith a randomly initialized...: no a\n\nMore importantly, I don't understand this \"random\" setup: if the architecture was fixed and randomly initialized, how could something be learned about the structure of the data?\n\nSection 4\n\nRefer to the images the agents must communicate about as \"scenes\", since objects are just a component of them.\n\nWhat are the absolute sizes of train and test splits?\n\nSection 4.1\n\nwe do not address this issue: the issue\n\nSection 4.2\n\nat least in the game C&D: games\n\nWhy does Appendix A contain information that logically follows that in Appendix B?\n", "This paper presents an analysis of the communication systems that arose when neural network based agents played simple referential games. The setup is that a speaker and a listener engage in a game where both can see a set of possible referents (either represented symbolically in terms of features, or represented as simple images) and the speaker produces a message consisting of a sequence of numbers while the listener has to make the choice of which referent the speaker intends. This is a setup that has been used in a large amount of previous work, and the authors summarize some of this work. The main novelty in this paper is the choice of models to be used by speaker and listener, which are based on LSTMs and convolutional neural networks. The results show that the agents generate effective communication systems, and some analysis is given of the extent to which these communication systems develop compositional properties – a question that is currently being explored in the literature on language creation.\n\nThis is an interesting question, and it is nice to see work applying modern neural network models to this question and exploring the properties of the solutions that are found. However, there are also a number of issues with the work.\n\n1. One of the key questions is the extent to which the constructed communication systems demonstrate compositionality. The authors note that there is not a good quantitative measure of this. However, this has been the topic of much research in the literature on language evolution. 
This work has resulted in some measures that could be applied here; see for example Carr et al. (2016): http://www.research.ed.ac.uk/portal/files/25091325/Carr_et_al_2016_Cognitive_Science.pdf\n\n2. In general the results could be more quantitative. In section 3.3.2 it would be nice to see statistical tests used to evaluate the claims. Minimally I think it is necessary to calculate a null distribution for the statistics that are reported.\n\n3. As noted above the main novelty of this work is the use of contemporary network models. One of the advantages of this is that it makes it possible to work with more complex data stimuli, such as images. However, unfortunately the image example that is used is still very artificial, being based on a small set of synthetically generated images.\n\nOverall, I see this as an interesting piece of work that may be of interest to researchers exploring questions around language creation and language evolution, but I think the results require more careful analysis and the novelty is relatively limited, at least in the way that the results are presented here.", "We would like to thank all the reviewers for their thoughtful and detailed feedback. We particularly thank them for recognizing that this is an interesting piece of work.\n\nWe have now revised our manuscript to address the concerns raised by the reviewers, hopefully producing a stronger and clearer submission. The most significant changes are:\n\n\t(as asked by AnonReviewer3)\n* We have added statistical tests (permutation test) to support claims regarding the results of the topographic similarity\n* We have added 2 sentences in the abstract and conclusion to make clear our contributions on extending work in the language evolution literature to contemporary DL materials.\n\n\t(as asked by AnonReviewer2)\n* We have added in Appendix C a new experiment on communicative success on the raw pixel data with models operating on gold attribute classifiers\n* We have added a comment about the instability of REINFORCE affecting the nature of protocols in some of the experimental setups of Section 4\n\n\t(as asked by AnonReviewer1)\n* We made all the requested clarifications (thanks again for the detailed review)\n* Added Figure 2 to visually illustrate the claims in Section 3.2\n* Added topographic similarity measurements for Section 4 (Table 3) which strengthen the findings of the qualitative analysis of game A producing structurally consistent messages.\n", "Thanks for the clarifications, and looking forward to the revised paper.", "We would like to thank the reviewer for their review. We found their comments extremely helpful and we are in the process of updating the manuscript accordingly. We will upload the revised paper tomorrow. In the meantime, we respond here to the major comments.\n\n\n<review>\n* CLARITY OF EXPOSITION\n</review>\nWe will introduce the terminology together with the description of the game.\n\n<review>\n* RELATION BETWEEN VOCABULARY SIZE AND PROTOCOL SIZE\n</review>\nWithout any explicit penalty on the length of the messages (Section 2), agents are not motivated to produce shorter messages (despite the fact that, as the reviewer points out, agents can decide to do so) since this constrains the space of messages (and thus the possibility of the speaker and listener agreeing on a successful naming convention), opting thus to always make use of the maximum possible length. 
When we introduced a penalty on the length of the message (Section 3), agents produced shorter messages for the ambiguous messages since this strategy maximizes the total expected reward.\n\n\n<review>\n* RELATION BETWEEN CONCEPT-PROPERTY AND RAW-PIXEL STUDIES\n</review>\nThanks for the suggestion. Correlating CNN-based representations with message similarities would not yield any new insight since these representations are the input to the message generation process. However, we ran the analysis on the symbolic representations of the images (location cluster, color, shape, floor color cluster) and the messages and found that the topographic similarities of the games are ordered as follows (in parentheses we report the topographic $\\rho$): game A (0.13) > game C (0.07) > game D (0.06) > game B (0.006).\nThis ordering is in line with our qualitative analysis of the protocols presented in Section 4.1.\n\n<review>\nFigures/Tables for \"Note that the distractor\" paragraph and degenerate strategy.\n</review>\nWe will include in the manuscript the training curves that this paragraph refers to.\nThe degenerate strategy is that of picking a target at random from the topically relevant set of distractors, thus reducing the effective size of the distractor set.\n\n<review>\n\"random\" setup...\n</review>\nDespite the fact that the weights of the networks are random, since the message generation is a parametric process, similar inputs will tend to generate similar outputs, thus producing messages that retain (at least to some small degree) the structure of the input data, despite the fact that there is no learning at all.\n", "\n<review>\nrandom chance of probe classifiers.\n</review>\nWhen generating the dataset, we sample locations and floor colors from a continuous scale. For the probe classifiers, we quantize location by clustering each coordinate in 5 clusters (and thus accuracy is reported by averaging the performance of the x and y probe classifiers with chance being at 20% for each coordinate) and floor colors in 3 clusters (with chance being at 33%). We will include the chance levels in Table 4.\n\n<review>\nWhy not use cross-entropy loss for listener?\n</review>\nWe decided to train both agents via REINFORCE for symmetry. Given the nature of the listener’s choice, we don’t anticipate full supervision to have an effect other than speeding up learning.\n\n\n<review>\nWhat about message length?\n</review>\nWithout any explicit penalty on the length of the messages (Section 2), agents are not motivated to produce shorter messages (despite the fact that, as the reviewer points out, agents can decide to do so) since this constrains the space of messages (and thus the possibility of the speaker and listener agreeing on a successful naming convention). When we introduced a penalty on the length of the message (Section 3), agents produced shorter messages for the ambiguous messages (since this strategy maximizes the total expected reward).\n\n<review>\nWhy use reinforcement learning over some sort of differentiable sampler?\n</review>\nWhile a differentiable communication channel would make learning faster, it goes against the basic and fundamental principles of human communication (and also against how this phenomenon is studied in language evolution). 
Simply put, having a differentiable channel would mean in practice that speakers can back-propagate through listeners’ brains (which unfortunately is not the case in real life :)). We wanted to stay as close as possible to this communication paradigm, thus using a discrete communication channel.", "We thank the reviewer for their thorough review. We respond to the comments raised while we are in the process of making the necessary changes in the manuscript.\n\n<review>\nHow stable are results?\n</review>\nOverall, results with REINFORCE in these non-stationary multi-agent environments (where speakers and listeners are learning at the same time) show instability, and -- as expected -- some of the experimental runs did not converge. However, we believe that the stability of the nature of the protocol (rather than its existence) is mostly influenced by the configuration of the game itself, i.e., how constrained the message space is. As an example, games C & D impose constraints on the nature of the protocol since encoding location in the messages is not an acceptable solution -- on runs where we had convergence, the protocols would always communicate about color. The same holds for game A (position is a very good strategy since, combined with the environmental pressure of many distractors, it uniquely identifies objects). However, game B is more unconstrained in nature and the converged protocols were more varied. We will include a discussion of these observations in the updated manuscript.\n\n<review>\nInference time procedure\n</review>\nThe reviewer is correct. At training time we sample; at test time we argmax. We will clarify this.\n\n<review>\nProtocol size vs lexicon\n</review>\nThank you for pointing this out. We will clarify the terminology.\nProtocol size (or lexicon -- we will remove this term and use protocol size only) is the number of invented messages (sequences of symbols).\nIn Table 1, we report the protocol size obtained with argmax on training data.\nIn Table 2, we report the number of novel messages, i.e., messages that were not generated for the training data, on 100 novel objects.\n\n<review>\nGeneralization on raw pixel data -- training and test accuracy are close\n</review>\nThis observation is correct. By randomly creating train and test splits, chances are that the test data contain objects of a seen color and shape combination but unseen location. Neural networks (and any other parametric model) do better in this type of “smooth” generalization caused by a continuous property like location.\n\n<review>\nHowever, taking Game A as an example, the probe classifiers are relatively poor at these attributes -- indicating the speaker's representation is not capturing these attributes well. \nThen how do the agents effectively differentiate so well between 20 images leveraging primarily color and shape?\n</review>\nIn Game A, agents differentiate 20 objects leveraging primarily object position rather than color and shape.\nIn Game A, the listener needs to differentiate between 20 objects, and so, communicating about color and shape is not a good strategy as there are chances that there will be some other red cube, for example, on the distractor list. The probe classifiers are performing relatively poorly on these attributes (especially on the object color) whereas they perform very well on position (which is in fact a good strategy), which, as we find by our analysis, is what the protocol describes. 
We note that location is a continuous variable (which we discretize only for performing the probe analysis in Section 4.2) and so it is very unlikely that two objects have the same location, thus uniquely identifying objects among distractors. This is not the case for games C & D since the listener sees a variation of the speaker’s target.\nMoreover, we note that object location is encoded across all games.\n\n<review>\nUpper-bound analysis based on ground truth attributes.\n</review>\nWe agree with the reviewer that an upper-bound analysis relying on gold information of objects will facilitate the exposition of results. Note that since location is a continuous variable, ground truth of location is not relevant.\n\t\tcolor\tshape\tcolor & shape\nA\t\t0.37\t0.24\t0.80\nB & C\t0.93\t0.90\t0.98\nD\t\t0.89\t0.89\t0.98\n\nWe could perform the same analysis by discretizing the location in the same way we performed the probe analysis in Section 4.2; however, the upper-bound results depend on the number of discrete locations we derive.\n\t\tlocation\tcolor & location\tshape & location\nA\t\t0.69\t0.95\t0.92\nB\t\t0.97\t0.99\t0.99\n(for C and D, results for location are not applicable)\n", "We thank the reviewer for their comments.\nFor replying, we copy-paste the relevant part and comment on it.\n\n<review> 1. One of the key questions ... Carr et al. (2016): http://www.research.ed.ac.uk/portal/files/25091325/Carr_et_al_2016_Cognitive_Science.pdf\" \n</review>\n\nWe agree with the reviewer that there are good existing measures. Our point was only that there is no mathematical definition and hence no definitive measure. In fact, we do include such a measure found in the literature on language evolution. Our topographic similarity measure (which is introduced by Brighton & Kirby (2006)) is in line with the measure introduced in Section 2.2.3 of Carr et al. In Carr et al., the authors correlate Levenshtein message distances and triangle dissimilarities (as obtained from humans). In our study, we correlate Levenshtein message distances and object dissimilarities as obtained by measuring cosine distance of the object feature norms (which are produced by humans). We will make sure to make this connection to previous literature explicit in our description of the measure.\n\n<review>\n2. In general the results could be more quantitative....statistics that are reported.\n</review>\n\nWe agree with the reviewer that statistical tests are important, and we politely point out that our claims in 3.3.2 are in fact based on the reported numbers in the Table 1 “topographic ρ” column. However, we will evaluate the statistical significance of the “topographic ρ” measure by calculating the null distribution via a repeated shuffling of the Levenshtein distances (or an additional test if the reviewer has an alternative suggestion).\n\n<review>\n3. As noted above the main novelty of this work is the use of contemporary network models\n</review>\n\nWe believe the novelty of this work is to take the well-defined and interesting questions that the language evolution literature has posed and try to scale them up to contemporary deep learning models and materials, i.e., realistic stimuli in terms of objects and their properties (see Section 3), raw pixel stimuli (see Section 4) and neural network architectures (see Section 2). 
This kind of interdisciplinary work can not only inform current models on their strengths and weaknesses (as we note in Section 4, we find that neural networks starting from raw pixels cannot easily process stimuli in a compositional way out of the box), but also open up new possibilities for language evolution research in terms of more realistic model simulations. We believe that this might not have been clear from the manuscript and will update the abstract and conclusion to reflect the premises of the work.\n\n<review>\nOne of the advantages of this is that it makes it possible to work with more complex data stimuli, such as images. However, unfortunately the image example that is used is still very artificial, being based on a small set of synthetically generated images.\n</review>\n\nMore complex image stimuli and realistic simulations are where we are heading. However, we (as a community) first need to understand how these models behave with raw pixels before scaling them up to complex stimuli. The nature of this work was to lay the groundwork on this question and investigate the properties of protocols in controlled (yet realistic in terms of nature) environments where we can tease apart clearly the behaviour of the model given the small number of variations of the pixel stimuli (object color/shape/position and floor color). Performing the type of careful analysis we did for complex scenes is substantially harder due to the very large number of factors we would have to control (diverse objects of multiple colors, shapes, sizes, diverse backgrounds etc.), so it puts into question to what degree we could have achieved a similar degree of introspection by immediately using more complex datasets in the current study.\n\n<review>\nOverall, I see this as an interesting piece of work that may be of interest to researchers exploring questions around language creation and language evolution, but I think the results require more careful analysis and the novelty is relatively limited, at least in the way that the results are presented here.\n</review>\n\nWe will upload an updated version of our paper by the end of this week containing \n1) the statistical test of the null distribution, \n2) clarifications regarding the topographic measure, and \n3) a clarification of the main contributions of this work, better relating it to the existing literature in language evolution.\n\nMoreover, we would be really happy to conduct further analyses and clarify the exposition of results. If the reviewer has specific suggestions on this, we would like to hear them in order to improve the quality of the manuscript and strengthen our submission. \n" ]
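The topographic similarity measure discussed throughout this record (introduced by Brighton & Kirby, 2006) and the permutation test the authors promise are both compact to implement. A minimal sketch: the measure itself (Spearman correlation between pairwise Levenshtein distances of messages and cosine dissimilarities of object vectors, plus a shuffle-based null distribution) follows the thread, while the layout of `messages` (symbol sequences) and `objects` (feature vectors) is a hypothetical choice:

```python
import itertools
import numpy as np
from scipy.stats import spearmanr

def levenshtein(a, b):
    # Edit distance between two symbol sequences (rolling-array DP).
    d = np.arange(len(b) + 1, dtype=float)
    for i, ca in enumerate(a, 1):
        prev, d[0] = d[0], float(i)
        for j, cb in enumerate(b, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (ca != cb))
    return d[-1]

def topographic_rho(messages, objects, n_perm=1000, seed=0):
    pairs = list(itertools.combinations(range(len(messages)), 2))
    msg_d = [levenshtein(messages[i], messages[j]) for i, j in pairs]
    obj_d = [1 - np.dot(objects[i], objects[j])
             / (np.linalg.norm(objects[i]) * np.linalg.norm(objects[j]))
             for i, j in pairs]
    rho = spearmanr(msg_d, obj_d)[0]
    # Null distribution: break the message-object pairing by shuffling.
    rng, null = np.random.default_rng(seed), []
    for _ in range(n_perm):
        p = rng.permutation(len(messages))
        pm_d = [levenshtein(messages[p[i]], messages[p[j]]) for i, j in pairs]
        null.append(spearmanr(pm_d, obj_d)[0])
    p_value = float(np.mean(np.abs(null) >= abs(rho)))
    return rho, p_value
```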
[ 7, 9, 5, -1, -1, -1, -1, -1, -1 ]
[ 4, 5, 4, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_HJGv1Z-AW", "iclr_2018_HJGv1Z-AW", "iclr_2018_HJGv1Z-AW", "iclr_2018_HJGv1Z-AW", "SJWDw1iXG", "H15X_V8yM", "HJ3-u2Ogf", "HJ3-u2Ogf", "BytyNwclz" ]
iclr_2018_Hkbd5xZRb
Spherical CNNs
Convolutional Neural Networks (CNNs) have become the method of choice for learning problems involving 2D planar images. However, a number of problems of recent interest have created a demand for models that can analyze spherical images. Examples include omnidirectional vision for drones, robots, and autonomous cars, molecular regression problems, and global weather and climate modelling. A naive application of convolutional networks to a planar projection of the spherical signal is destined to fail, because the space-varying distortions introduced by such a projection will make translational weight sharing ineffective. In this paper we introduce the building blocks for constructing spherical CNNs. We propose a definition for the spherical cross-correlation that is both expressive and rotation-equivariant. The spherical correlation satisfies a generalized Fourier theorem, which allows us to compute it efficiently using a generalized (non-commutative) Fast Fourier Transform (FFT) algorithm. We demonstrate the computational efficiency, numerical accuracy, and effectiveness of spherical CNNs applied to 3D model recognition and atomization energy regression.
accepted-oral-papers
This work introduces a trainable signal representation for spherical signals (functions defined in the sphere) which are rotationally equivariant by design, by extending CNNs to the corresponding group SO(3). The method is implemented efficiently using fast Fourier transforms on the sphere and illustrated with compelling tasks such as 3d shape recognition and molecular energy prediction. Reviewers agreed this is a solid, well-written paper, which demonstrates the usefulness of group invariance/equivariance beyond the standard Euclidean translation group in real-world scenarios. It will be a great addition to the conference.
train
[ "r1VD9T_SM", "r1rikDLVG", "SJ3LYkFez", "B1gQIy9gM", "Bkv4qd3bG", "r1CVE6O7f", "Sy9FmTuQM", "ryi-Q6_Xf", "HkZy7TdXM", "S1rz4yvGf" ]
[ "public", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public" ]
[ "How to describe the relationships between these two papers?", "Thank you for the feedback; I maintain my opinion.", "Summary:\n\nThe paper proposes a framework for constructing spherical convolutional networks (ConvNets) based on a novel synthesis of several existing concepts. The goal is to detect patterns in spherical signals irrespective of how they are rotated on the sphere. The key is to make the convolutional architecture rotation equivariant.\n\nPros:\n\n+ novel/original proposal justified both theoretically and empirically\n+ well written, easy to follow\n+ limited evaluation on a classification and regression task is suggestive of the proposed approach's potential\n+ efficient implementation\n\nCons:\n\n- related work, in particular the first paragraph, should compare and contrast with the closest extant work rather than merely list them\n- evaluation is limited; granted this is the nature of the target domain\n\nPresentation:\n\nWhile the paper is generally written well, the paper appears to conflate the definition of the convolutional and correlation operators? This point should be clarified in a revised manuscript. \n\nIn Section 5 (Experiments), there are several references to S^2CNN. This naming of the proposed approach should be made clear earlier in the manuscript. As an aside, this appears a little confusing since convolution is performed first on S^2 and then SO(3). \n\nEvaluation:\n\nWhat are the timings of the forward/backward pass and space considerations for the Spherical ConvNets presented in the evaluation section? Please provide specific numbers for the various tasks presented.\n\nHow many layers (parameters) are used in the baselines in Table 2? If indeed there are much less parameters used in the proposed approach, this would strengthen the argument for the approach. On the other hand, was there an attempt to add additional layers to the proposed approach for the shape recognition experiment in Sec. 5.3 to improve performance?\n\nMinor Points:\n\n- some references are missing their source, e.g., Maslen 1998 and Kostolec, Rockmore, 2007, and Ravanbakhsh, et al. 2016.\n\n- some sources for the references are presented inconsistency, e.g., Cohen and Welling, 2017 and Dieleman, et al. 2017\n\n- some references include the first name of the authors, others use the initial \n\n- in references to et al. or not, appears inconsistent\n\n- Eqns 4, 5, 6, and 8 require punctuation\n\n- Section 4 line 2, period missing before \"Since the FFT\"\n\n- \"coulomb matrix\" --> \"Coulomb matrix\"\n\n- Figure 5, caption: \"The red dot correcpond to\" --> \"The red dot corresponds to\"\n\nFinal remarks:\n\nBased on the novelty of the approach, and the sufficient evaluation, I recommend the paper be accepted.\n\n", "The focus of the paper is how to extend convolutional neural networks to have built-in spherical invariance. Such a requirement naturally emerges when working with omnidirectional vision (autonomous cars, drones, ...).\n\nTo get invariance on the sphere (S^2), the idea is to consider the group of rotations on S^2 [SO(3)] and spherical convolution [Eq. (4)]. To be able to compute this convolution efficiently, a generalized Fourier theorem is useful. In order to achieve this goal, the authors adapt tools from non-Abelian [SO(3)] harmonic analysis. The validity of the idea is illustrated on 3D shape recognition and atomization energy prediction. 
\n\nThe paper is nicely organized and clearly written; it fits the focus of ICLR and can be applicable to many other domains as well.\n", "First off, this paper was a delight to read. The authors develop an (actually) novel scheme for representing spherical data from the ground up, and test it on three wildly different empirical tasks: Spherical MNIST, 3D-object recognition, and atomization energies from molecular geometries. They achieve near state-of-the-art performance against other special-purpose networks that aren't nearly as general as their new framework. The paper was also exceptionally clear and well written.\n\nThe only con (which is more a suggestion than anything): it would be nice if the authors compared the training time/# of parameters of their model versus the closest competitors for the latter two empirical examples. This can sometimes be an apples-to-oranges comparison, but it's nice to fully contextualize the comparative advantage of this new scheme over others. That is, does it perform as well and train just as fast? Does it need fewer parameters? etc.\n\nI strongly endorse acceptance.", "Thank you for the kind words; we're glad you like our work! \n\nOur models for SHREC17 and QM7 both use only about 1.4M parameters. On a machine with 1 Titan X GPU, training the SHREC17 model takes about 50 hours, while the QM7 model takes only about 3 hours. Memory usage is 8GB for SHREC (batch size 16) and 7GB for QM7 (batch size 20).\n\nWe have studied the SHREC17 paper [1], but unfortunately it does not state the number of parameters or training time for the various methods. It does seem likely that each of the competition participants did their own cross validation, and arrived at an appropriate model complexity for their method. It is thus unlikely that the strong performance of our model relative to others can be explained by its size (especially since 1.4M parameters is not considered very large anymore).\n\nFor QM7, it looks like Montavon et al. used about 760k parameters (we have deduced this from the description of their network architecture). Since the model is a simple multi-layer perceptron applied to a hand-designed feature representation, we expect that it is substantially faster to train than our model (though indeed comparing a spherical CNN to an engineered features+MLP approach is a bit of an apples-to-oranges comparison). Raj et al. use a non-parametric method, so there is no parameter count or training time to compare to.\n\n[1] M. Savva et al. SHREC’17 Track Large-Scale 3D Shape Retrieval from ShapeNet Core55, Eurographics Workshop on 3D Object Retrieval (2017).", "Thank you for the detailed and balanced review.\n\nRE Related work: we have expanded the related work section a little bit in order to contrast with previous work. (Unfortunately there is no space for a very long discussion.)\n\nRE Convolution vs correlation: thank you for pointing this out. Our reasoning had been that:\n1) Everybody in deep learning uses the word \"convolution\" to mean \"cross-correlation\".\n2) In the non-commutative case, there are several different but essentially equivalent convolution-like integrals that one can define, with no really good reason to prefer one over the other.\n\nBut we did not explain this properly. We think a reasonable approach is to call something a group convolution if, for the translation group, it specializes to the standard convolution, and similarly for group correlations. 
This seems to be what several others before us have done as well, so we will follow this convention. Specifically, we will define the (group) cross-correlation as:\n $[\\psi \\star f](g) = \\int_G \\psi(g^{-1} h) \\, f(h) \\, dh$.\n\nRE The S^2CNN name: we have now defined this term in the introduction, but not changed it, because the paper is called \"Spherical CNN\" and S^2-CNN is just a shorthand for that name.\n\nRE Timings: we have added timings, memory usage numbers, and number of parameters to the paper. It is not always possible to compare the number of parameters to related work because those numbers are not always available. However, we can reasonably assume that the competing methods did their own cross-validation to arrive at an optimal model complexity for their architecture. (Also, in deep networks, the absolute number of parameters can often vary widely between architectures that have a similar generalization performance, making this a rather poor measure of model complexity.)\n\nRE References and other minor points: we have fixed all of these issues. Thanks for pointing them out.", "Thank you very much for taking the time to review our work.", "Thank you for these references; they are indeed very relevant and interesting*. We will add them and change the text.\n\nWe agree that the cross-correlation is the right term, and have fixed it in the paper. We have added further discussion of this issue in reply to reviewer 2, who raised a similar concern.\n\n* We do not have access to Rafaely's book through our university library, so we cannot comment on it.\n", " On page 5: \"This says that the SO(3)-FT of the S2 convolution (as we have defined it) of two spherical signals can be computed by taking the outer product of the S2-FTs of the signals. This is shown in figure 2. We were unable to find a reference for the latter version of the S2 Fourier theorem\"\n\n The result is presented at least in:\n - Makadia et al. (2007), eq (21),\n - Kostelec and Rockmore (2008), eq (6.6),\n - Gutman et al. (2008), eq (9),\n - Rafaely (2015), eq (1.88).\n\n All mentioned references define \"spherical correlation\" as what you define as \"spherical convolution\". I believe it makes more sense to call it correlation, since it can be seen as a measure of similarity between two functions (given two functions on S2 and transformations on SO(3), the correlation function measures the similarity as a function of the transformation).\n\n References:\n Makadia, A., Geyer, C., & Daniilidis, K., Correspondence-free structure from motion, International Journal of Computer Vision, 75(3), 311–327 (2007).\n Kostelec, P. J., & Rockmore, D. N., FFTs on the rotation group, Journal of Fourier Analysis and Applications, 14(2), 145–179 (2008).\n Gutman, B., Wang, Y., Chan, T., Thompson, P. M., & Toga, A. W., Shape registration with spherical cross correlation, 2nd MICCAI workshop on mathematical foundations of computational anatomy (pp. 56–67) (2008).\n Rafaely, B., Fundamentals of spherical array processing, Berlin: Springer (2015).\n" ]
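For reference, the two formulas at the center of this exchange can be typeset as follows. A sketch: the first equation is the cross-correlation the authors settle on, and the second paraphrases the S^2 Fourier theorem from the references given in the comment above (Makadia et al., 2007; Kostelec & Rockmore, 2008), with \hat{f}^\ell denoting the vector of degree-\ell spherical-harmonic coefficients:

```latex
% Group cross-correlation (reduces to the ordinary cross-correlation
% when G is the translation group):
[\psi \star f](g) = \int_{G} \psi(g^{-1} h)\, f(h)\, \mathrm{d}h
% S^2 Fourier theorem: the SO(3)-Fourier coefficient matrix of the
% spherical cross-correlation is the outer product of the S^2-Fourier
% coefficient vectors of the two signals:
\widehat{\psi \star f}^{\,\ell} = \hat{f}^{\ell}\, \big( \hat{\psi}^{\ell} \big)^{\dagger}
```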
[ -1, -1, 8, 7, 9, -1, -1, -1, -1, -1 ]
[ -1, -1, 4, 3, 4, -1, -1, -1, -1, -1 ]
[ "iclr_2018_Hkbd5xZRb", "ryi-Q6_Xf", "iclr_2018_Hkbd5xZRb", "iclr_2018_Hkbd5xZRb", "iclr_2018_Hkbd5xZRb", "Bkv4qd3bG", "SJ3LYkFez", "B1gQIy9gM", "S1rz4yvGf", "iclr_2018_Hkbd5xZRb" ]
iclr_2018_S1CChZ-CZ
Ask the Right Questions: Active Question Reformulation with Reinforcement Learning
We frame Question Answering (QA) as a Reinforcement Learning task, an approach that we call Active Question Answering. We propose an agent that sits between the user and a black box QA system and learns to reformulate questions to elicit the best possible answers. The agent probes the system with, potentially many, natural language reformulations of an initial question and aggregates the returned evidence to yield the best answer. The reformulation system is trained end-to-end to maximize answer quality using policy gradient. We evaluate on SearchQA, a dataset of complex questions extracted from Jeopardy!. The agent outperforms a state-of-the-art base model, playing the role of the environment, and other benchmarks. We also analyze the language that the agent has learned while interacting with the question answering system. We find that successful question reformulations look quite different from natural language paraphrases. The agent is able to discover non-trivial reformulation strategies that resemble classic information retrieval techniques such as term re-weighting (tf-idf) and stemming.
accepted-oral-papers
This submission presents a novel way in which a neural machine reader could be improved: that is, by learning to reformulate a question specifically for the downstream machine reader. All the reviewers found it positive, and so do I.
train
[ "r10KoNDgf", "HJ9W8iheM", "Hydu7nFeG", "Hk9DKzYzM", "H15NIQOfM", "SJZ0UmdfM", "BkGlU7OMz", "BkXuXQufM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "This paper proposes active question answering via a reinforcement learning approach that can learn to rephrase the original questions in a way that can provide the best possible answers. Evaluation on the SearchQA dataset shows significant improvement over the state-of-the-art model that uses the original questions. \n\nIn general, the paper is well-written (although there are a lot of typos and grammatical errors that need to be corrected), and the main ideas are clear. It would have been useful to provide some more details and carry out additional experiments to strengthen the merit of the proposed model. \n\nEspecially, in Section 4.2, more details about the quality of paraphrasing after training with the multilingual, monolingual, and refined models would be helpful. Which evaluation metrics were used to evaluate the quality? Also, more monolingual experiments could have been conducted with state-of-the-art neural paraphrasing models on WikiQA and Quora datasets (e.g. see https://arxiv.org/pdf/1610.03098.pdf and https://arxiv.org/pdf/1709.05074.pdf). \n\nMore details with examples should be provided about the variants of AQA along with the oracle model. Especially, step-by-step examples (for all alternative models) from input (original question) to question reformulations to output (answer/candidate answers) would be useful to understand how each module/variation is having an impact towards the best possible answer/ground truth.\n\nAlthough experiments on SearchQA demonstrate good results, I think it would be also interesting to see the results on additional datasets e.g. MS MARCO (Nguyen et al., 2016), which is very similar to the SearchQA dataset, in order to confirm the generalizability of the proposed approach. \n\n---------------------------------------------\nThanks for revising the paper, I am happy to update my scores.", "This paper formulates the Jeopardy QA as a query reformulation task that leverages a search engine. In particular, a user will try a sequence of alternative queries based on the original question in order to find the answer. The RL formulation essentially tries to mimic this process. Although this is an interesting formulation, as promoted by some recent work, this paper does not provide compelling reasons why it's a good formulation. The lack of serious comparisons to baseline methods makes it hard to judge the value of this work.\n\nDetailed comments/questions:\n\t1. I am actually quite confused on why it's a good RL setting. For a human user, having a series of queries to search for the right answer is a natural process, but it's not natural for a computer program. For instance, each query can be viewed as different formulation of the same question and can be issued concurrently. Although formulated as an RL problem, it is not clear to me whether the search result after each episode has been used as the immediate environment feedback. As a result, the dependency between actions seems rather weak.\n\t2. I also feel that the comparisons to other baselines (not just the variation of the proposed system) are not entirely fair. For instance, the baseline BiDAF model has only one shot, namely using the original question as query. In this case, AQA should be allowed to use the same budget -- only one query. Another more realistic baseline is to follow the existing work on query formulation in the IR community. For example, 20 shorter queries generated by methods like [1] can be used to compare the queries created by AQA.\n\n[1] Kumaran & Carvalho. 
\"Reducing Long Queries Using Query Quality Predictors\". SIGIR-09\n\t\nPros:\n\t1. An interesting RL formulation for query reformulation\n\nCons:\n\t1. The use of RL is not properly justified\n\t2. The empirical result is not convincing that the proposed method is indeed advantageous \n\n---------------------------------------\n\nAfter reading the author response and checking the revised paper, I'm both delighted and surprised that the authors improved the submission substantially and presented stronger results. I believe the updated version has reached the bar and recommend accepting this paper. ", "This article clearly describes how they designed and actively trained 2 models for question reformulation and answer selection during question answering episodes. The reformulation component is trained using a policy gradient over a sequence-to-sequence model (original vs. reformulated questions). The model is first pre-trained using a bidirectional LSTM on multilingual pairs of sentences. A small monolingual bitext corpus is the uses to improve the quality of the results. A CNN binary classifier performs answer selection. \n\nThe paper is well written and the approach is well described. I was first skeptical by the use of this technique but as the authors mention in their paper, it seems that the sequence-to-sequence translation model generate sequence of words that enables the black box environment to find meaningful answers, even though the questions are not semantically correct. Experimental clearly indicates that training both selection and reformulation components with the proposed active scheme clearly improves the performance of the Q&A system. ", "We have summarized the comparison of the AQA agent in different modes versus the baselines in Table 1.", "We thank Reviewer 1 for the encouraging feedback!", "Thanks for your review and suggestions! We address each point below:\n\nOther datasets\nWe agree that it will be important to extend the empirical evaluation on new datasets. Our current experimental setup cannot be straightforwardly applied to MsMarco, unfortunately. Our environment (the BiDAF QA system) is an extractive QA system. However, MsMarco contains many answers (55%) that are not substrings of the context; even after text normalization, 36% are missing. We plan to investigate the use of generative answer models for the environment with which we could extend AQA to this data. \n\nParaphrasing quality\nRegarding stand-alone evaluation of the paraphrasing quality of our models, we ran several additional experiments inspired by the suggested work.\nWe focused on the relation between paraphrasing quality and QA quality. To tease apart the relationship between paraphrasing and reformulation for QA we evaluated 3 variants of the reformulator:\n\nBase-NMT: this is the model used to initialize RL training of the agent. Trained first on the multilingual U.N. corpus, then on the Paralex corpus.\nBase-NMT-NoParalex: is the model above trained solely on the multilingual U.N. corpus, without the Paralex monolingual corpus.\nBase-NMT+Quora: is the same as Base-NMT, additionally trained on the Quora duplicate question dataset.\n\nFollowing Prakash et al. (2016) we evaluated all models on MSCOCO, selecting one out of five captions at random from Val2014 and using the other 4 as references. We use beam search, as in the paper, to compute the top hypothesis and report uncased, moses-tokenized BLEU using John Clark's multeval. 
[github.com/jhclark/multeval]\nThe Base-NMT model performs at 11.4 BLEU (see Table 1 for the QA eval numbers). Base-NMT-NoParalex performs poorly at 5.0 BLEU. Limiting training to the multilingual data alone also degrades QA performance: the scores of the Top Hypothesis are at least 5 points lower in all metrics, CNN scores are 2-3 points lower for all metrics.\nBy training on additional monolingual data, the Base-NMT+Quora model BLEU score improves marginally to 11.6. End-to-end QA performance also improves marginally, the maximum delta with respect to Base-NMT under all conditions is +0.5 points, but the difference is not statistically significant. Thus, adding the Quora training does not have a significant effect. This might be due to the fact that most of the improvement is captured by training on the larger Paralex data set.\n\nImproving raw paraphrasing quality as well as reformulation fluency help AQA up to a point. However, they are only partially aligned with the main task, which is QA performance. The AQA-QR reformulator has a BLEU score of 8.6, well below both Base-NMT models trained on monolingual data. AQA-QR significantly outperforms all others in the QA task. Training the agent starting from the Base-NMT+Quora model yielded identical results as starting from Base-NMT.\n\nExamples\nWe have updated Appendix A in the paper with the answers corresponding to all queries, together with their F1 scores. We also added a few examples (Appendix B) where the agent is not able to identify the correct candidate reformulation, even if present in the candidate set. We also added an appendix (C) with example paraphrases from MSCOCO from the different models.\n\nPresentation\nWe spelled and grammar checked the manuscript.\n", "Thanks for your review, questions, and suggestions which we address below:\n\n1- RL formulation\nWe require RL (policy gradient) because (a) the reward function is non-differentiable, and (b) we are optimizing against a black box environment using only queries, i.e. no supervised query transformation data (query to query that works better for a particular QA system) is available.\nWithout RL we could not optimize these reformulations against the black-box environment to maximize expected answer quality (F1 score).\n\nRegarding the training process you are correct: in this work, the reformulations of the initial query are indeed issued concurrently, as shown in Figure 1. We note this when we introduce the agent in the first paragraph of Section 2; we say “The agent then generates a *set* of reformulations {q_i}” rather than a sequence. \n\nIn the last line of the conclusion, we comment that we plan to extend AQA to sequential reformulations which would then depend on the previous questions/answers also. \n\n2- Baseline comparisons\nWe computed an IR baseline following [Kumaran & Carvalho, 2009] as suggested. We implemented the candidate generation method (Section 4.3) of their system to generate subquery reformulations of the original query. We choose the reformulations from the term-level subsequences of length 3 to 6. We associate each reformulation with a graph, where the vertices are the terms and the edges are the mutual information between terms. We rank the reformulations by the average edge weights of the Maximum Spanning Trees of the corresponding graphs. We keep the top 20 reformulations, the same number as we keep for the AQA agent. 
Then, we train a CNN to score these reformulations to identify those with above-average F1, in exactly the same way we do for the AQA agent. As suggested, we then compare this method both in terms of choosing the single top hypothesis (1 shot), and ensemble prediction (choose from 20 queries).\nWe additionally compare AQA to the Base-NMT system in the same way. This is the pre-trained monolingual seq2seq model used to initialize the RL training. We evaluate the Base-NMT model's top hypothesis (1 shot) and in ensemble mode.\n\nWe find that the AQA agent outperforms all other methods both in 1-shot prediction (top hypothesis) and using CNNs to pick a hypothesis from 20. To verify that the difference in performance is statistically significant we ran a statistical test. The null hypothesis is always rejected (p<0.00001).\nAll results are summarized and discussed in the paper.\n\nPS - After reviewing the suggested paper, and related IR literature we took the opportunity to add an IR query quality metric, QueryClarity, to our qualitative analysis at the end of the paper, in the box plot. QueryClarity contributes to our conclusion. showing that the AQA agent learns to transform the initial reformulations (Base-NMT) into ones that have higher QueryClarity, in addition to having better tf-idf and worse fluency.", "We would like to thank the reviewers for their valuable comments. It took us as a few weeks to reply because we took the time to implement as much as possible of the feedback. We believe this has benefited the paper significantly. We have uploaded a new version of the pdf with the additional work and reply here to the specific comments in greater detail." ]
[ 7, 6, 8, -1, -1, -1, -1, -1 ]
[ 5, 4, 3, -1, -1, -1, -1, -1 ]
[ "iclr_2018_S1CChZ-CZ", "iclr_2018_S1CChZ-CZ", "iclr_2018_S1CChZ-CZ", "BkGlU7OMz", "Hydu7nFeG", "r10KoNDgf", "HJ9W8iheM", "iclr_2018_S1CChZ-CZ" ]
iclr_2018_rJTutzbA-
On the insufficiency of existing momentum schemes for Stochastic Optimization
Momentum based stochastic gradient methods such as heavy ball (HB) and Nesterov's accelerated gradient descent (NAG) method are widely used in practice for training deep networks and other supervised learning models, as they often provide significant improvements over stochastic gradient descent (SGD). Rigorously speaking, fast gradient methods have provable improvements over gradient descent only for the deterministic case, where the gradients are exact. In the stochastic case, the popular explanations for their wide applicability is that when these fast gradient methods are applied in the stochastic case, they partially mimic their exact gradient counterparts, resulting in some practical gain. This work provides a counterpoint to this belief by proving that there exist simple problem instances where these methods cannot outperform SGD despite the best setting of its parameters. These negative problem instances are, in an informal sense, generic; they do not look like carefully constructed pathological instances. These results suggest (along with empirical evidence) that HB or NAG's practical performance gains are a by-product of minibatching. Furthermore, this work provides a viable (and provable) alternative, which, on the same set of problem instances, significantly improves over HB, NAG, and SGD's performance. This algorithm, referred to as Accelerated Stochastic Gradient Descent (ASGD), is a simple to implement stochastic algorithm, based on a relatively less popular variant of Nesterov's Acceleration. Extensive empirical results in this paper show that ASGD has performance gains over HB, NAG, and SGD. The code for implementing the ASGD Algorithm can be found at https://github.com/rahulkidambi/AccSGD.
accepted-oral-papers
The reviewers unanimously recommended that this paper be accepted, as it contains an important theoretical result that there are problems for which heavy-ball momentum cannot outperform SGD. The theory is backed up by solid experimental results, and the writing is clear. While the reviewers were originally concerned that the paper was missing a discussion of some related algorithms (ASVRG and ASDCA) that were handled in discussion.
train
[ "Sy3aR8wxz", "Sk0uMIqef", "Sy2Sc4CWz", "SkEtTX6Xz", "BJqEtWdMf", "SyL2ub_fM", "rkv8dZ_fz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "I like the idea of the paper. Momentum and accelerations are proved to be very useful both in deterministic and stochastic optimization. It is natural that it is understood better in the deterministic case. However, this comes quite naturally, as deterministic case is a bit easier ;) Indeed, just recently people start looking an accelerating in stochastic formulations. There is already accelerated SVRG, Jain et al 2017, or even Richtarik et al (arXiv: 1706.01108, arXiv:1710.10737).\n\nI would somehow split the contributions into two parts:\n1) Theoretical contribution: Proposition 3 (+ proofs in appendix)\n2) Experimental comparison.\n\nI like the experimental part (it is written clearly, and all experiments are described in a lot of detail).\n\nI really like the Proposition 3 as this is the most important contribution of the paper. (Indeed, Algorithms 1 and 2 are for reference and Algorithm 3 was basically described in Jain, right?). \n\nSignificance: I think that this paper is important because it shows that the classical HB method cannot achieve acceleration in a stochastic regime.\n\nClarity: I was easy to read the paper and understand it.\n\nFew minor comments:\n1. Page 1, Paragraph 1: It is not known only for smooth problems, it is also true for simple non-smooth (see e.g. https://link.springer.com/article/10.1007/s10107-012-0629-5)\n2. In abstract : Line 6 - not completely true, there is accelerated SVRG method, i.e. the gradient is not exact there, also see Recht (https://arxiv.org/pdf/1701.03863.pdf) or Richtarik et al (arXiv: 1706.01108, arXiv:1710.10737) for some examples where acceleration can be proved when you do not have an exact gradient.\n3. Page 2, block \"4\" missing \".\" in \"SGD We validate\"....\n4. Section 2. I think you are missing 1/2 in the definition of the function. Otherwise, you would have a constant \"2\" in the Hessian, i.e. H= 2 E[xx^T]. So please define the function as f_i(w) = 1/2 (y - <w,x_i>)^2. The same applies to Section 3.\n5. Page 6, last line, .... was downloaded from \"pre\". I know it is a link, but when printed, it looks weird. \n\n", "I wonder how the ASGD compares to other optimization schemes applicable to DL, like Entropy-SGD, which is yet another algorithm that provably improves over SGD. This question is also valid when it comes to other optimization schemes that are designed for deep learning problems. For instance, Entropy-SGD and Path-SGD should be mentioned and compared with. As a consequence, the literature analysis is insufficient. \n\nAuthors provided necessary clarifications. I am raising my score.\n\n\n\n\n", "I only got access to the paper after the review deadline; and did not have a chance to read it until now. Hence the lateness and brevity.\n\nThe paper is reasonably well written, and tackles an important problem. I did not check the mathematics. \n\nBesides the missing literature mentioned by other reviewers (all directly relevant to the current paper), the authors should also comment on the availability of accelerated methods inn the finite sum / ERM setting. There, the questions this paper is asking are resolved, and properly modified stochastic methods exist which offer acceleration over SGD (and not through minibatching). This paper does not comment on these developments. 
Look at accelerated SDCA (APPROX, ASDCA), accelerated SVRG (Katyusha) and so on.\n\nProvided these changes are made, I am happy to suggest acceptance.\n\n\n\n", "We group the list of changes made to the manuscript based on suggestions of reviewers:\n\nAnonReviewer 3:\n- Added a paragraph on accelerated and fast methods for finite sums and their implications in the deep learning context. (in related work)\n\nAnonReviewer 2:\n- Included reference on Acceleration for simple non-smooth problems. (in page 1)\n- Included reference on Accelerated SVRG and other suggested references. (in related work)\n- Fixed citations for pytorch/download links and fixed typos.\n\nAnonReviewer 1:\n- Added a paragraph on entropic sgd and path normalized sgd and their complimentary nature compared to this work's message (in related work section).\n\nOther changes:\n- In the related work: background about Stochastic Heavy Ball, adding references addressing reviewer feedback.\n- Removed statement on generalization/batch size. (page 2)\n- Fixed minor typos. (page 3)\n- Added comment about NAG lower bound conjecture. (page 4, below proposition 3)", "Thanks for the references, we have included them in the paper and added a paragraph in Section 6 providing detailed comparison and key differences that we summarize below: \n \nASDCA, Katyusha, accelerated SVRG: these methods are \"offline\" stochastic algorithms that is they require multiple passes over the data and require multiple rounds of full gradient computation (over the entire training data). In contrast, ASGD is a single pass algorithm and requires gradient computation only a single data point at a time step. In the context of deep learning, this is a critical difference, as computing gradient over entire training data can be extremely slow. See Frostig, Ge, Kakade, Sidford ``Competing with the ERM in a single pass\" (https://arxiv.org/pdf/1412.6606.pdf) for a more detailed discussion on online vs offline stochastic methods. \n\nMoreover, the rate of convergence of the ASDCA depend on \\sqrt{\\kappa n} while the method studied in this paper has \\sqrt{\\kappa \\tilde{kappa}} dependence where \\tilde{kappa} can be much smaller than n. \n\n\n\n\n\n\n\n\n\n", "Thanks for your comments. \n\nWe have cited Entropy SGD and Path SGD papers and discuss the differences in Section 6 (related works). However, both the methods are complementary to our method. \n\nEntropy SGD adds a local strong convexity term to the objective function to improve generalization. However, currently we do not understand convergence rates or generalization performance of the technique rigorously, even for convex problems. The paper proposes to use SGD to optimize the altered objective function and mentions that one can use SGD+momentum as well (below algorithm box on page 6). Naturally, one can use the ASGD method as well to optimize the proposed objective function in the paper. \n\nPath SGD uses a modified SGD like update to ensure invariance to the scale of the data. Here again, the main goal is orthogonal to our work and one can easily use ASGD method in the same framework. \n", "Thanks a lot for insightful comments. We have updated the paper taking into account several of your comments. We will make more updates according to your suggestions. \n\n\nPaper organization: we will try to better organize the paper to highlight the contributions. 
\nProposition 3's importance: yes, your assessment is spot on.\n\nMinor comment 1,2: Thanks for pointing the minor mistake, we have updated the corresponding lines. Papers such as Accelerated SVRG, Recht et al. are offline stochastic accelerated methods. The paper of Richtarik (arXiv:1706.01108) deals with solving consistent linear systems in the offline setting; (arXiv:1710.10737) is certainly relevant and we will add more detailed comparison with this line of work. \nMinor comment 3, 5: thanks for pointing out the typos. They are fixed. \nMinor comment 4: Actually, the problem is a discrete problem where one observes one hot vectors in 2-dimensions, each of the vectors can occur with probability 1/2. So this is the reason why the Hessian does not carry an added factor of 2.\n\n\n" ]
[ 7, 7, 8, -1, -1, -1, -1 ]
[ 4, 3, 5, -1, -1, -1, -1 ]
[ "iclr_2018_rJTutzbA-", "iclr_2018_rJTutzbA-", "iclr_2018_rJTutzbA-", "iclr_2018_rJTutzbA-", "Sy2Sc4CWz", "Sk0uMIqef", "Sy3aR8wxz" ]
iclr_2018_Hk6kPgZA-
Certifying Some Distributional Robustness with Principled Adversarial Training
Neural networks are vulnerable to adversarial examples and researchers have proposed many heuristic attack and defense mechanisms. We address this problem through the principled lens of distributionally robust optimization, which guarantees performance under adversarial input perturbations. By considering a Lagrangian penalty formulation of perturbing the underlying data distribution in a Wasserstein ball, we provide a training procedure that augments model parameter updates with worst-case perturbations of training data. For smooth losses, our procedure provably achieves moderate levels of robustness with little computational or statistical cost relative to empirical risk minimization. Furthermore, our statistical guarantees allow us to efficiently certify robustness for the population loss. For imperceptible perturbations, our method matches or outperforms heuristic approaches.
accepted-oral-papers
This paper attracted strong praise from the reviewers, who felt that it was of high quality and originality. The broad problem that is being tackled is clearly of great importance. This paper also attracted the attention of outside experts, who were more skeptical of the claims made by the paper. The technical merits do not seem to be in question, but rather, their interpretation/application. The perception by a community as to whether an important problem has been essentially solved can affect the choices made by other reviewers when they decide what work to pursue themselves, evaluate grants, etc. It's important that claims be conservative and highlight the ways in which the present work does not fully address the broader problem of adversarial examples. Ultimately, it has been decided that the paper will be of great interest to the community. The authors have also been entrusted with the responsibility to consider the issues raised by the outside expert (and then echoed by the AC) in their final revisions. One final note: In their responses to the outside expert, the authors several times remark that the guarantees made in the paper are, in form, no different from standard learning-theoretic claims: "This criticism, however, applies to many learning-theoretic results (including those applied in deep learning)." I don't find any comfort in this statement. Learning theorists have often focused on the form of the bounds (sqrt(m) dependence and, say, independence from the # of weights) and then they resort to empirical observations of correlation to demonstrate that the value of the bound is predictive for generalization. because the bounds are often meaningless ("vacuous") when evaluated on real data sets. (There are some recent examples bucking this trend.) In a sense, learning theorists have gotten off easy. Adversarial examples, however, concern security, and so there is more at stake. The slack we might afford learning theorists is not appropriate in this new context. I would encourage the authors to clearly explain any remaining work that needs to be done to move from "good enough for learning theory" to "good enough for security". The authors promise to outline important future work / open problems for the community. I definitely encourage this.
train
[ "S1pdil8Sz", "rkn74s8BG", "HJNBMS8rf", "rJnkAlLBf", "H1g0Nx8rf", "rklzlzBVf", "HJ-1AnFlM", "HySlNfjgf", "rkx-2-y-f", "rkix5PTQf", "rJ63YwTQM", "HyFBKPp7z", "Hkzmdv67G", "rJBbuPTmz", "Hk2kQP3Qz", "BJVnpJPXM", "H1wDpaNbM" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "public", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public", "public", "public" ]
[ "We just received an email notification abut this comment a few minutes ago and somehow did not receive any notification of the original comment uploaded on 21 January. We will upload a response later today.", "Apologies for the (evidently) tardy response. We have now uploaded a response to the area chair's comments (see below).", "Thank you for the detailed follow-up. \n\nWe will make the point that we deal with imperceptible changes clearer in the paper. We had emphasized that our work is motivated by imperceptible adversarial perturbations from the second paragraph of the paper. We will make this point even clearer and quantify our statements on performance so that there is no confusion that we mainly consider imperceptible changes.\n\nAs we have noted in our previous response, we agree with you in that robustness to larger perturbations is an important research direction. The point we made in our original response is that infinity-norms may not be the most appropriate norm to consider in this perceptible attack setting. For example, a 1-norm-constrained adversary can change a few pixels in a very meaningful way--with infinity-norms approaching 1--which may be a more suitable model for a perceptible adversary. There are a number of concurrent works on this topic that we believe could lead to more robust learning systems. \n\nIt is still open whether distributionally robust algorithms (empirically) allow hedging against large adversarial perturbations. At this point, we believe it would be imprudent to call this class of methods “inherently restricted” to the small perturbation regime; indeed, any heuristic method (such as one based on projected gradient descent) has the same restrictions, at least in terms of rigorous guarantees. A more thorough study—on more diverse datasets, model classes and hyperparameter settings—should be conducted in order to draw any meaningful conclusions. We hope to contribute to this effort in the future but we invite others as well, since we believe this is an important question for the community to answer.\n\nOur certificate of robustness given in Theorem 3 is efficiently computable for small values of rho, or equivalently, for imperceptible attacks. Hence, this data-dependent certificate provides a upper bound on the worst-case loss so that you are guaranteed to do no worse than this number with high probability. For the achieved level of robustness (rho hat in our notation), our bounds do imply that we are robust to perturbation budgets of this size. Hence, we would argue that Theorem 3 is indeed a flavor of result that satisfies the desiderata you described.\n\nThere are limitations, and we hope that subsequent work will improve our learning guarantees with a better dependence on model size. This criticism, however, largely applies to most learning-theoretic results applied to deep learning.\n\nAs we mentioned in our introduction, we agree that recent advances in verification techniques for deep learning are a complementary and important research direction for achieving robust learning systems. Our understanding of these techniques is that they currently have prohibitive computational complexity, even on small datasets such as MNIST. Our results complement these approaches by providing a weaker statistical guarantee with computational effort more comparable to the vanilla training times.\n\nThe motivation of this paper comes from the fact that formal guarantees on arbitrary levels of robustness is NP-hard. 
We study the regime of small to moderate levels of robustness to provide guarantees for this regime.", "Sorry for the rush created by this likely OpenReview bug. A response today would be most appreciated!", "You have been contacted now by the Area Chair and the Program Chair and asked to respond to comments by the Area Chair. It is imperative that you respond.", "[I put my reply here as the threads below are now a bit hard to follow.]\n\nThank you for responding to my comments and making the effort to provide more data. This indeed helps me understand this work better.\n\nI agree that studying the regime of small adversarial perturbation budget epsilon is a very valid research goal. I think, however, that it is important to explicitly mention in the paper that this is the target. Especially, as the proposed methods seem to be inherently restricted to apply to only such a small epsilon regime. \n\nI am not sure though that I agree with the argument why the regime of larger values of epsilon might be less interesting. Yes, some of the larger perturbations will be clearly visible to a human, but some (e.g., the ones that correspond to a change of the background color or its pattern) will not - and we still would like to be robust to them. After all, security guarantees are about getting \"for all\", not \"for some\" guarantees. \n\nNow, regarding being explicit about the constants in the bounds, I agree that many optimization and statistical learning guarantees do not provide not provide explicit constants. However, I think the situation in the context considered here is fundamentally different. \n\nAfter all, for example, in the context of generalization bounds, we always have a meaningful way of checking if a given bound \"triggered\" for a given model and dataset by testing its performance on a validation/test set. When we talk about robustness guarantee, the whole point is to have it hold even against attacks that we are not able to produce ourselves (but the adversary might). Then, we really need a very concrete guarantee of the form \"(With high probability) the model classifies correctly 90% of the test set against perturbation budget of epsilon <= 0.1”. \n\nIn the light of this, providing a guarantee of the form \"(With high probability) the model correctly classifies 90% of the test set against perturbation budget of some positive epsilon\", which is what the proposed guarantees seem to provide, is somewhat less meaningful. (One could argue that, after all, there is always some positive epsilon for which the model is robust.)\n\nIt might be worth noting that, e.g., for MNIST, we currently are able to deliver guarantees of the former (explicit) type. For instance, there is a recent work of Kolter and Wong (https://arxiv.org/abs/1711.00851). Although they provide such guarantees via verification techniques and not by proving an explicit generalization bound.\n\nFinally, I am not sure how much bearing the formal NP-hardness of certifying the robustness has here. (I assume you are referring to the result in Appendix B.) Could you elaborate?", "This paper proposes a principled methodology to induce distributional robustness in trained neural nets with the purpose of mitigating the impact of adversarial examples. The idea is to train the model to perform well not only with respect to the unknown population distribution, but to perform well on the worst-case distribution in some ball around the population distribution. 
In particular, the authors adopt the Wasserstein distance to define the ambiguity sets. This allows them to use strong duality results from the literature on distributionally robust optimization and express the empirical minimax problem as a regularized ERM with a different cost. The theoretical results in the paper are supported by experiments.\n\nOverall, this is a very well-written paper that creatively combines a number of interesting ideas to address an important problem.", "This paper applies recently developed ideas in the literature of robust optimization, in particular distributionally robust optimization with Wasserstein metric, and showed that under this framework for smooth loss functions when not too much robustness is requested, then the resulting optimization problem is of the same difficulty level as the original one (where the adversarial attack is not concerned). I think the idea is intuitive and reasonable, the result is nice. Although it only holds when light robustness are imposed, but in practice, this seems to be more of the case than say large deviation/adversary exists. As adversarial training is an important topic for deep learning, I feel this work may lead to promising principled ways for adversarial training. ", "In this very good paper, the objective is to perform robust learning: to minimize not only the risk under some distribution P_0, but also against the worst case distribution in a ball around P_0.\n\nSince the min-max problem is intractable in general, what is actually studied here is a relaxation of the problem: it is possible to give a non-convex dual formulation of the problem. If the duality parameter is large enough, the functions become convex given that the initial losses are smooth. \n\nWhat follows are certifiable bounds for the risk for robust learning and stochastic optimization over a ball of distributions. Experiments show that this performs as expected, and gives a good intuition for the reasons why this occurs: separation lines are 'pushed away' from samples, and a margin seems to be increased with this procedure.", "Thank you for your interest in our paper. We appreciate your detailed feedback.\n\n1. This is a fair criticism; it seems to apply generally to most learning-theoretic guarantees on deep learning (though see the recent work of Dziugaite and Roy, https://arxiv.org/abs/1703.11008 and Bartlett, Foster, Telgarsky https://arxiv.org/pdf/1706.08498.pdf). We believe that our statistical guarantees in Theorems 3 and 4 are steps towards a principled understanding of adversarial training. Replacing our current covering number arguments with more intricate notions such as margin based-bounds (Bartlett et al. 2017)) would extend the scope of our theoretical guarantees; as Bartlett et al. provide covering number bounds, it seems likely that we could massage them into applying in Theorem 3 (Eqs. (11)-(12)). This is a meaningful future research direction.\n\n\n2. In Figure 2, we plot our certificate of robustness on two datasets (omitting the statistical error term) and observe that our data-dependent upper bound on the worst-case performance is reasonable. This roughly implies that our adversarial training procedure generalizes, allowing us to learn to defend against attacks on the test set.\n\n“In the experimental sections, good performance is achieved at test time. But it would be more convincing if the performance for training data is also shown. The current experiments don't seem to evaluate generalization of the proposed WRM. 
Furthermore, analysis of other classification problems (cifar10, cifar 100, imagenet) is highly desired.“\n\nThese are both great suggestions. We are currently working on experiments with subsets of Imagenet and will include them in a revision (soon we hope).\n\n3. Our adversarial training algorithm has intimate connections with other previously proposed heuristics. Our main theoretical contribution is that for small adversarial perturbations, we can show both computational and statistical guarantees for our procedure. More specifically, the computational guarantees for our algorithm are indeed based on the curvature of the L2-norm; provably efficient computation of attacks based on infinity-norms remains open.", "Thank you for your interest in our paper. We appreciate the detailed feedback and probing questions.\n\nUpon your suggestions during our meeting at NIPS, we have included a more extensive empirical evaluation of our algorithm. Most notably, we trained and tested our method—alongside other baselines, including [MMSTV17]—on large values of adversarial budgets. We further compared our algorithm trained against L2-norm Lagrangian attacks against other heuristic methods trained against infinity-norm attacks. Lastly, we proposed a (heuristic) proximal variant of our algorithm that learns to defend against infinity-norm attacks. See Appendices A.4, A.5, and E for the relevant exposition and figures.\n\n1. Empirical Evaluation on Large Adversarial Budgets\n\nOur primary motivation of this paper is to provide a theoretically principled algorithm that can defend against small adversarial perturbations. In particular, we are concerned with provable procedures against small adversarial perturbations that can fool deep nets but are imperceptible to humans. Our main finding in the original empirical experiments in Section 4 was that for such small adversarial perturbations, our principled algorithm matches or outperforms other existing heuristics. (See also point 2 below.)\n\nThe adversarial budget epsilon = .3 in the infinity-norm you suggest allows attacks that are highly visible to the human eye. For example, one can construct hand-tuned perturbations that look significantly different from the original image (see https://www.dropbox.com/sh/c6789iwhnooz5po/AABBpU_mg-FRRq7PT1LzI0GAa?dl=0). Defending against such attacks is certainly interesting, but was not our main goal. This probably warrants a longer discussion, but it is not clear to us that infinity-norm-bounded attacks are most appropriate if one allows perceptible image modifications. An L1-budgeted adversary might be able to make small changes in some part of the image, which yields a different set of attacks.\n\nIn spite of the departure of large perturbations from our nominal goal of protection against small changes, we test our algorithm on attacks with large adversarial budgets in Appendix A.4. In this case, our algorithm is a heuristic—as are other methods for large adversarial budgets—but we nevertheless match the performance of other methods (FGM, IFGM, PGM) trained against L2-norm adversaries.\n\nSince our computational guarantees are based on strong concavity w.r.t. Lp-norms for p \\in (1, 2], our robustly-fitted network defends against L2-norm attacks. Per the suggestion to compare against networks trained to defend against infinity-norm attacks—and we agree, this is an important comparison that we did not perform originally (though we should have)—we compared our method with other heuristics in Appendix A.5.1. 
On imperceptible L2 and infinity-norm attacks, our algorithm outperforms other heuristics trained to defend against infinity-norm attacks (Figures 11 and 12). On larger attacks, particularly infinity-norm attacks, we observe that other heuristics trained on infinity-norm attacks outperform our method (Figure 12). In this sense, the conclusions we reached from the main figures in our paper—where we considered imperceptible perturbations—are still valid: we match or outperform other heuristic methods for small perturbations.\n\n(Continued in Part II)", "2. Theoretical Guarantees\n\nThe motivation for our work is that computing the worst-case perturbation of a deep network under norm-constraints is typically intractable. As we state in the introduction, we simply give up on computing worst-case perturbations at arbitrary budget levels, instead considering small adversarial perturbations. Our theoretical guarantees are concerned with imperceptible changes; we give computational and statistical guarantees for such small (adversarial) perturbations. This is definitely a limit of the approach; given that it is NP hard to certify robustness for larger perturbations this may be challenging to get around.\n\nOur main theoretical guarantee is the certificate of robustness—a data-dependent upper bound on the worst-case performance—given in Theorem 3. This upper bound applies in general, although its efficient computation is only guaranteed for large penalty parameters \\gamma and smooth losses. Similarly, as you note, Theorems 2 and 4 only apply in such regimes. To address this, we augment our theoretical guarantees for small adversarial budgets with empirical evaluations in Section 4 and Appendix A. We empirically checked if our level of \\gamma = .385 (=.04 * C_2) is above the estimated smoothness parameter at the adversarially trained model and observed that this condition is satisfied on 98% of the training data points.\n\nOur guarantees indeed depend on the problem-dependent smoothness parameter. As with most optimization and statistical learning guarantees, this value is often unknown. This limitation applies to most learning-theoretic results, and we believe that being adaptive to such problem-dependent constants is a meaningful future research direction. With that said, it seems likely (though we have not had time to verify this) that the recent work of Bartlett et al. (https://arxiv.org/pdf/1706.08498.pdf) should apply--it provides covering number bounds our Theorem 3 (Eq. (11-12)) can use.\n\nWe hope that our theoretical guarantees are a step towards understanding the performance of these adversarial training procedures. Gaps still remain; we hope future work will close this gap.", "Thank you for bringing our attention to Roy et al. (2017). In Section 4.3, we adapted our adversarial training algorithm in the supervised learning setting to reinforcement learning; this approach shares similar motivations as Roy et al. (2017)—and more broadly, the robust MDP literature—where we also solve approximations of the worst-case Bellman equation. Compared to our Wasserstein ball, Roy et al. (2017) uses more simple and tractable worst-case regions. While they give convergence guarantees for their algorithm, the empirical performance of these different worst-case regions remains open.\n\nAnother key difference in our experiments is that we assumed access to the simulator for updating the underlying state. This allows us to explore bad regions better. 
Nevertheless, our adversarial state update in Eqn (20) can be replaced with an adversarial reward update for settings where the simulator cannot be accessed.", "We thank the reviewers for their time and positive feedback. We will use the comments and suggestions to improve the quality and presentation the paper. In addition to cleaning up our exposition, we added some content to make our main points more clear. We address these main revisions below.\n\nOur formulation (2) is general enough to include a number of different adversarial training scenarios. In Section 2 (and more thoroughly in Appendix D), we detail how our general theory can be modified in the supervised learning setting so that we learn to defend against adversarial perturbations to only the feature vectors (and not the labels). By suitably modifying the cost function that defines the Wasserstein distance, our formulation further encompasses other variants such as adversarial perturbations only to a fixed small region of an image.\n\nWe emphasize that our certificate of robustness given in Theorem 3 applies for any level of robustness \\rho. Our results imply that the output of our principled adversarial training procedure has worst-case performance no worse than this data-dependent certificate. Our certificate is efficiently computable, and we plot it in Figure 2 for our experiments. We see that in practice, the bound indeed gives a meaningful performance guarantee against attacks on the unseen test sets.\n\nWhile the primary focus of our paper is on providing provable defenses against imperceptible adversarial perturbations, we supplement our previous results with a more extensive empirical evaluation. In Appendix A.4, we augment our results by evaluating performance against L2-norm adversarial attacks with larger adversarial budgets (higher values of \\rho or \\epsilon). Our method also becomes a heuristic for such large values of adversarial budgets, but we nevertheless match the performance of other methods (FGM, IFGM, PGM) trained against L2-norm adversaries. In Appendix A.5.1, we further compare our method——which is trained to defend against L2-norm attacks——with other adversarial training algorithms trained against inf-norm attacks. We also propose a new (heuristic) proximal algorithm for solving our Lagrangian problem with inf-norms, and test its performance against other methods in Appendix A.5.2. In both sections, we observe that our method is competitive with other methods against imperceptible adversarial attacks, and performance starts to degrade as the attacks become visible to the human eye.\n\nAgain, we appreciate the reviewers' close reading and thoughtful comments.", "The problems are very well formulated (although only the L2 case is discussed). Identifying a concave surrogate in this mini-max problem is illuminating. The interplay between optimal transport, robust statistics, optimization and learning theory make the work a fairly thorough attempt at this difficult problem. Thanks to the authors for turning many intuitive concepts into rigorous maths. There are some potential concerns, however: \n\n1. The generalization bounds in THM 3, Cor 1, THM 4 for deep neural nets appear to be vacuous, since they scale like \\sqrt (d/n), but d > n for deep learning. This is typical, although such generalization bounds are not common in deep adversarial training. So establishing such bounds is still interesting.\n\n2. 
Deep neural nets generalize well in practice, despite the lack of non-vacuous generalization bounds. Does the proposed WRM adversarial training procedure also generalize despite the vacuous bounds? \n\nIn the experimental sections, good performance is achieved at test time. But it would be more convincing if the performance for training data is also shown. The current experiments don't seem to evaluate generalization of the proposed WRM. Furthermore, analysis of other classification problems (cifar10, cifar 100, imagenet) is highly desired. \n\n3. From an algorithmic viewpoint, the change isn't drastic. It appears that it controls the growth of the loss function around the L2 neighbourhood of the data manifold (thanks to the concavity identified). Since L2 geometry has good symmetry, it makes the decision surface more symmetrical between data (Fig 1). \n\nIt seems to me that this is the reason for the performance gain at test time, and the size of such \\epsilon tube is the robust certificate. So it is unclear how much success is due to the generalization bounds claimed. \n\nI think there is enough contribution in the paper, but I share the opinion of Aleksander Madry, and would like to be corrected for missing some key points.", "Developing principled approaches to training adversarially robust models is an important (and difficult) challenge. This is especially the case if such an approach is to offer provable guarantees and outperform state of the art methods. \n\nHowever, after reading this submission, I am confused by some of the key claims and find them to be inaccurate and somewhat exaggerated. In particular, I believe that the following points should be addressed and clarified:\n\n1. The authors claim their methods match or outperform existing methods. However, their evaluations seem to miss some key baselines and parameter regimes. \n \nFor example, when reporting the results for l_infty robustness - a canonical evaluation setting in most previous work - the authors plot (in Figure 2b) the robustness only for the perturbations whose size eps (as measured in the l_infty norm) is between 0 and 0.2. (Note that in Figure 2b the x-axis is scaled as roughly 2*eps.) However, in order to properly compare against prior work, one needs to be able to see the scaling for larger perturbations.\n\nIn particular, [MMSTV’17] https://arxiv.org/abs/1706.06083 gives a model that exhibits high robustness even for perturbations of l_infty size 0.3. What robustness does the approach proposed in this work offer in that regime? \n\nAs I describe below, my main worry is that the theorems in this work only apply for very small perturbations (and, in fact, this seems to be an inherent limitation of the whole approach). Hence, it would be good to see if this is true in practice as well. \nIn particular, Figure 2b suggests that this method will indeed not work for larger perturbations. I thus wonder in what sense the presented results outperform/match previous work?\n\nAfter a closer look, it seems that this discrepancy occurs because the authors are reproducing the results of [MMSTV’17] using l_2 based adversarial training. [MMSTV’17] uses l_infity based training and achieves much better results than those reported in this submission. This artificially handicaps the baseline from [MMSTV’17]. That is, there is a significantly better baseline that is not reflected in Figure 2b. I am not sure why the authors decided to do that.\n\n2. 
It is hard to properly interpret what actual provable guarantees the proposed techniques offer. More concretely, what is the amount of perturbation that models trained using these techniques are provably robust to? \n\nBased on the presented theorems, it is unclear why they should yield any non-vacuous generalization bounds. \n\nIn particular, as far as I can understand, there might be no uniform bound on the amount of perturbation that the trained model will be robust to. This seems to be so as the provided guarantees (see Theorem 4) might give different perturbation resistance for different regions of the underlying distribution. In fact, it could be that for a significant fraction of points we have a (provable) robustness guarantee only for vanishingly small perturbations. \n\nMore precisely, note that the proposed approach uses adversarial training that is based on a Lagrangian formulation of finding the worst case perturbation, as opposed to casting this primitive as optimization over an explicitly defined constraint set. These two views are equivalent as long as one has full flexibility in setting the Lagrangian penalization parameter gamma. In particular, for some instances, one needs to set gamma to be *small enough*, i.e., sufficiently small so as it does not exclude norm-eps vectors from the set of considered perturbations. (Here, eps denotes the desired robustness measured in a specific norm such as l_infty, i.e., the prediction of our model should not change under perturbations of magnitude up to eps.)\n\nHowever, the key point of the proposed approach is to ensure that gamma is always set to be *large enough* so as the optimized function (i.e., the loss + the Lagrangian penalization) becomes concave (and thus provably tractable). Specifically, the authors need gamma to be large enough to counterbalance the (local) smoothness parameter of the loss function. \n\nThere seems to be no global (and sufficiently small) bound on this smoothness and, as a result, it is unclear what is the value of the eps-based robustness guarantee offered once gamma is set to be as large as the proposed approach needs it to be.\n\nFor the same reason (i.e., the dependence on the smoothness parameter of the loss function that is not explicitly well bounded), the provided generalization bounds - and thus the resulting robustness guarantees - might be vacuous for actual deep learning models. \n\nIs there something I am missing here? If not, what is the exact nature of the provable guarantees that are offered in the proposed work?\n", "Very interesting work! I was wondering how the robust MDP/RL setup compares to http://papers.nips.cc/paper/6897-reinforcement-learning-under-model-mismatch.pdf ? " ]
[ -1, -1, -1, -1, -1, -1, 9, 9, 9, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, -1, -1, -1, 5, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "H1g0Nx8rf", "rJnkAlLBf", "rklzlzBVf", "S1pdil8Sz", "iclr_2018_Hk6kPgZA-", "rJBbuPTmz", "iclr_2018_Hk6kPgZA-", "iclr_2018_Hk6kPgZA-", "iclr_2018_Hk6kPgZA-", "Hk2kQP3Qz", "BJVnpJPXM", "BJVnpJPXM", "H1wDpaNbM", "iclr_2018_Hk6kPgZA-", "iclr_2018_Hk6kPgZA-", "iclr_2018_Hk6kPgZA-", "iclr_2018_Hk6kPgZA-" ]
iclr_2018_HktK4BeCZ
Learning Deep Mean Field Games for Modeling Large Population Behavior
"We consider the problem of representing collective behavior of large populations and predicting the(...TRUNCATED)
accepted-oral-papers
"The reviewers are unanimous in finding the work in this paper highly novel and significant. They h(...TRUNCATED)
val
["BkGA_x3SG","ByGPUUYgz","rJLBq1DVM","S1PF1UKxG","rJBLYC--f","BycoZZimG","HyRrEDLWG","SJJDxd8Wf","r1(...TRUNCATED)
["author","official_reviewer","official_reviewer","official_reviewer","official_reviewer","author","(...TRUNCATED)
["We appreciate your suggestions for further improving the precision of our language, and we underst(...TRUNCATED)
[ -1, 8, -1, 8, 10, -1, -1, -1, -1 ]
[ -1, 4, -1, 3, 5, -1, -1, -1, -1 ]
["rJLBq1DVM","iclr_2018_HktK4BeCZ","SJJDxd8Wf","iclr_2018_HktK4BeCZ","iclr_2018_HktK4BeCZ","iclr_201(...TRUNCATED)

This is PeerSum, a multi-document summarization dataset in the peer-review domain. More details can be found in the paper Summarizing Multiple Documents with Conversational Structure for Meta-review Generation (EMNLP 2023). The original code and datasets are publicly available on GitHub.

Please use the following code to load the dataset with the datasets library from Hugging Face:

from datasets import load_dataset

# Load all samples, then recover the splits via the per-sample 'label' field
peersum_all = load_dataset('oaimli/PeerSum', split='all')
peersum_train = peersum_all.filter(lambda s: s['label'] == 'train')
peersum_val = peersum_all.filter(lambda s: s['label'] == 'val')
peersum_test = peersum_all.filter(lambda s: s['label'] == 'test')
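
As a quick sanity check, you can print the split sizes and inspect one sample. This is a minimal sketch that uses only the fields documented in the key list below.

print(len(peersum_train), len(peersum_val), len(peersum_test))
sample = peersum_test[0]          # a dict with the keys listed below
print(sample['paper_title'])
print(sample['meta_review'][:300])  # first few hundred characters of the meta-review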

The Hugging Face dataset is mainly for multi-document summarization. Each sample comprises the following keys:

* paper_id: str (a link to the raw data)
* paper_title: str
* paper_abstract: str
* paper_acceptance: str
* meta_review: str
* review_ids: list(str)
* review_writers: list(str)
* review_contents: list(str)
* review_ratings: list(int)
* review_confidences: list(int)
* review_reply_tos: list(str)
* label: str (train, val, or test)
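
For meta-review generation, the reviews serve as the source documents and meta_review as the target summary. Below is a minimal sketch of assembling one source/target pair; note that, as visible in the sample records above, review_ratings and review_confidences use -1 for threads that are not official reviews (e.g., author responses or public comments).

sample = peersum_train[0]
# Concatenate all review/comment threads into a single source text
source = '\n\n'.join(sample['review_contents'])
target = sample['meta_review']

# Keep only official reviews, which carry real scores (-1 marks other comments)
official_reviews = [
    content
    for content, rating in zip(sample['review_contents'], sample['review_ratings'])
    if rating != -1
]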

You can also download the raw data from Google Drive. The raw data comprises more information and can be used for other analyses of peer reviews.
